Updates from: 01/17/2022 02:07:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
The following attributes and objects aren't supported:
- Groups.
- Complex anchors (for example, ObjectTypeName+UserName).
- Binary attributes.
- - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview does not support password synchronization. Provisioning initial one-time passwords is supported. Please ensure that you are using the [Redact](/azure/active-directory/app-provisioning/functions-for-customizing-application-data#redact) function to redact the passwords from the logs. In the SQL and LDAP connectors, the passwords are not exported on the initial call to the application, but rather a second call with set password.
+ - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview does not support password synchronization. Provisioning initial one-time passwords is supported. Please ensure that you are using the [Redact](./functions-for-customizing-application-data.md#redact) function to redact the passwords from the logs. In the SQL and LDAP connectors, the passwords are not exported on the initial call to the application, but rather a second call with set password.
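As a sketch of the guidance above, the Redact function wraps the attribute in the provisioning attribute-mapping expression so the value is scrubbed from the provisioning logs. A minimal form, assuming a hypothetical exported attribute name:

```
Redact([PasswordAttribute])
```

Apply it to whichever attribute your mapping actually exports as the password.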
#### SSL certificates
The Azure AD ECMA Connector Host currently requires either an SSL certificate to be trusted by Azure or the provisioning agent to be used. The certificate subject must match the host name the Azure AD ECMA Connector Host is installed on.
The following attributes and objects aren't supported:
The ECMA host does not support updating the password in the connectivity page of the wizard. Please create a new connector when changing the password.
## Next steps
-[How provisioning works](how-provisioning-works.md)
+[How provisioning works](how-provisioning-works.md)
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
In Azure Active Directory (Azure AD), the term *app provisioning* refers to auto
Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
-Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](/azure/active-directory/app-provisioning/tutorial-ecma-sql-connector) database, Azure AD can support those as well.
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support those as well.
App provisioning lets you:
For other applications that support SCIM 2.0, follow the steps in [Build a SCIM
- [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
- [Customizing attribute mappings for user provisioning](customize-application-attributes.md)
-- [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
+- [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/access-tokens.md
You can adjust the lifetime of an access token to control how often the client a
Default token lifetime variation is applied to organizations that have Continuous Access Evaluation (CAE) enabled, even if CTL policies are configured. The default lifetime for these long-lived tokens ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to (usually silently) acquire a new refresh token and access token.
-Organizations that use [Conditional Access sign-in frequency (SIF)](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime#user-sign-in-frequency) to enforce how frequently sign-ins occur cannot override default access token lifetime variation. When using SIF, the time between credential prompts for a client is the token lifetime (ranging from 60 - 90 minutes) plus the sign-in frequency interval.
+Organizations that use [Conditional Access sign-in frequency (SIF)](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency) to enforce how frequently sign-ins occur cannot override default access token lifetime variation. When using SIF, the time between credential prompts for a client is the token lifetime (ranging from 60 - 90 minutes) plus the sign-in frequency interval.
Here's an example of how default token lifetime variation works with sign-in frequency. Let's say an organization sets sign-in frequency to occur every hour. The actual sign-in interval will occur anywhere between 1 hour to 2.5 hours since the token is issued with lifetime ranging from 60-90 minutes (due to token lifetime variation).
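The interval arithmetic described above can be sketched as follows; the helper name is ours, and the 60-90 minute lifetime bounds come from the text:

```python
def prompt_interval_minutes(sif_minutes, max_token_life=90):
    """Approximate (min, max) minutes between credential prompts when a
    Conditional Access sign-in frequency (SIF) interval combines with
    default token lifetime variation (tokens live 60-90 minutes).
    A prompt can only occur once the current access token expires, so the
    worst case adds a full maximum token lifetime to the SIF interval."""
    return (sif_minutes, sif_minutes + max_token_life)

# Example from the text: SIF of 1 hour gives prompts 1 to 2.5 hours apart.
low, high = prompt_interval_minutes(60)
print(low, high)  # 60 150
```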
Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md)
## Next steps
* Learn about [`id_tokens` in Azure AD](id-tokens.md).
-* Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).
+* Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-glossary.md
See the [ID token reference](id-tokens.md) for more details.
## Managed identities
-Managed identities eliminate the need for developers to manage credentials. They provide an identity for applications to use when connecting to resources that support Azure AD authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like Azure Key Vault, where developers can store credentials in a secure manner, or to access storage accounts. For more information, see the [managed identities overview](/azure/active-directory/managed-identities-azure-resources/overview).
+Managed identities eliminate the need for developers to manage credentials. They provide an identity for applications to use when connecting to resources that support Azure AD authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like Azure Key Vault, where developers can store credentials in a secure manner, or to access storage accounts. For more information, see the [managed identities overview](../managed-identities-azure-resources/overview.md).
## Microsoft identity platform
Use the following comments section to provide feedback and help to refine and sh
[OAuth2-Role-Def]: https://tools.ietf.org/html/rfc6749#page-6
[OpenIDConnect]: https://openid.net/specs/openid-connect-core-1_0.html
[OpenIDConnect-AuthZ-Endpoint]: https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationEndpoint
-[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
+[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/test-automate-integration-testing.md
Using the ROPC authentication flow is risky in a production environment, so [cre
## Create and configure a key vault
-We recommend you securely store the test usernames and passwords as [secrets](/azure/key-vault/secrets/about-secrets) in Azure Key Vault. When you run the tests later, the tests run in the context of a security principal. The security principal is an Azure AD user if you're running tests locally (for example, in Visual Studio or Visual Studio Code), or a service principal or managed identity if you're running tests in Azure Pipelines or another Azure resource. The security principal must have **Read** and **List** secrets permissions so the test runner can get the test usernames and passwords from your key vault. For more information, read [Authentication in Azure Key Vault](/azure/key-vault/general/authentication).
+We recommend you securely store the test usernames and passwords as [secrets](../../key-vault/secrets/about-secrets.md) in Azure Key Vault. When you run the tests later, the tests run in the context of a security principal. The security principal is an Azure AD user if you're running tests locally (for example, in Visual Studio or Visual Studio Code), or a service principal or managed identity if you're running tests in Azure Pipelines or another Azure resource. The security principal must have **Read** and **List** secrets permissions so the test runner can get the test usernames and passwords from your key vault. For more information, read [Authentication in Azure Key Vault](../../key-vault/general/authentication.md).
-1. [Create a new key vault](/azure/key-vault/general/quick-create-portal) if you don't have one already.
+1. [Create a new key vault](../../key-vault/general/quick-create-portal.md) if you don't have one already.
1. Take note of the **Vault URI** property value (similar to `https://<your-unique-keyvault-name>.vault.azure.net/`) which is used in the example test later in this article.
-1. [Assign an access policy](/azure/key-vault/general/assign-access-policy) for the security principal running the tests. Grant the user, service principal, or managed identity **Get** and **List** secrets permissions in the key vault.
+1. [Assign an access policy](../../key-vault/general/assign-access-policy.md) for the security principal running the tests. Grant the user, service principal, or managed identity **Get** and **List** secrets permissions in the key vault.
## Create test users
-Create some test users in your tenant for testing. Since the test users are not actual humans, we recommend you assign complex passwords and securely store these passwords as [secrets](/azure/key-vault/secrets/about-secrets) in Azure Key Vault.
+Create some test users in your tenant for testing. Since the test users are not actual humans, we recommend you assign complex passwords and securely store these passwords as [secrets](../../key-vault/secrets/about-secrets.md) in Azure Key Vault.
1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
1. Go to **Users**.
1. Select **New user** and create one or more test user accounts in your directory.
-1. The example test later in this article uses a single test user. [Add the test username and password as secrets](/azure/key-vault/secrets/quick-create-portal) in the key vault you created previously. Add the username as a secret named "TestUserName" and the password as a secret named "TestPassword".
+1. The example test later in this article uses a single test user. [Add the test username and password as secrets](../../key-vault/secrets/quick-create-portal.md) in the key vault you created previously. Add the username as a secret named "TestUserName" and the password as a secret named "TestPassword".
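The secret retrieval in the steps above can be sketched with the Azure Key Vault SDK (the `azure-identity` and `azure-keyvault-secrets` packages must be installed, and the vault URL is a placeholder):

```python
# Secret names created in the steps above.
TEST_SECRET_NAMES = ("TestUserName", "TestPassword")

def get_test_credentials(vault_url):
    """Return (username, password) for the test user from Key Vault.
    Runs as whatever principal DefaultAzureCredential resolves to, which
    needs Get and List secret permissions on the vault."""
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient
    client = SecretClient(vault_url=vault_url,
                          credential=DefaultAzureCredential())
    return tuple(client.get_secret(name).value for name in TEST_SECRET_NAMES)

# usage (requires a real vault and signed-in principal):
# get_test_credentials("https://<your-unique-keyvault-name>.vault.azure.net/")
```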
## Create and configure an app registration
Register an application that acts as your client app when calling APIs during testing. This should *not* be the same application you may already have in production. You should have a separate app to use only for testing purposes.
client_id={your_client_ID}
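The ROPC token request described in this section can be assembled as follows; the tenant, client ID, username, password, and scope values are placeholders, and in practice the credentials should come from Key Vault rather than being hard-coded:

```python
from urllib.parse import urlencode

def build_ropc_request(tenant, client_id, username, password, scope):
    """Build the v2.0 token endpoint URL and form-encoded body for the
    resource owner password credentials (ROPC) grant."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = {
        "grant_type": "password",  # ROPC uses the user's own credentials
        "client_id": client_id,
        "username": username,
        "password": password,
        "scope": scope,
    }
    return url, urlencode(body)

url, body = build_ropc_request(
    "contoso.onmicrosoft.com",
    "00000000-0000-0000-0000-000000000000",
    "testuser@contoso.com",
    "placeholder-password",
    "https://graph.microsoft.com/.default")
# POST `body` to `url` with Content-Type application/x-www-form-urlencoded
# to obtain an access token.
```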
Replace *{tenant}* with your tenant ID, *{your_client_ID}* with the client ID of your application, and *{resource_you_want_to_call}* with the identifier URI (for example, "https://graph.microsoft.com") or app ID of the API you are trying to access.
## Exclude test apps and users from your MFA policy
-Your tenant likely has a conditional access policy that [requires multifactor authentication (MFA) for all users](/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa), as recommended by Microsoft. MFA won't work with ROPC, so you'll need to exempt your test applications and test users from this requirement.
+Your tenant likely has a conditional access policy that [requires multifactor authentication (MFA) for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md), as recommended by Microsoft. MFA won't work with ROPC, so you'll need to exempt your test applications and test users from this requirement.
To exclude user accounts:
1. Navigate to the [Azure portal](https://portal.azure.com) and sign in to your tenant. Select **Azure Active Directory**. Select **Security** in the left navigation pane and then select **Conditional access**.
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identities-overview.md
An [application](app-objects-and-service-principals.md#application-object) is an
A [service principal](app-objects-and-service-principals.md#service-principal-object) is the *local* representation, or application instance, of a global application object in a specific tenant. An application object is used as a template to create a service principal object in every tenant where the application is used. The service principal object defines what the app can actually do in a specific tenant, who can access the app, and what resources the app can access.
-A [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) is a special type of service principal that eliminates the need for developers to manage credentials.
+A [managed identity](../managed-identities-azure-resources/overview.md) is a special type of service principal that eliminates the need for developers to manage credentials.
Here are some ways that workload identities in Azure AD are used:
At a high level, there are two types of identities: human and machine/non-human
## Supported scenarios
Here are some ways you can use workload identities:
-- Review service principals and applications that are assigned to privileged directory roles in Azure AD using [access reviews for service principals](/azure/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review).
+- Review service principals and applications that are assigned to privileged directory roles in Azure AD using [access reviews for service principals](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md).
- Access Azure AD protected resources without needing to manage secrets (for supported scenarios) using [workload identity federation](workload-identity-federation.md).
-- Apply Conditional Access policies to service principals owned by your organization using [Conditional Access for workload identities](/azure/active-directory/conditional-access/workload-identity).
+- Apply Conditional Access policies to service principals owned by your organization using [Conditional Access for workload identities](../conditional-access/workload-identity.md).
## Next steps
-Learn how to [secure access of workload identities](/azure/active-directory/conditional-access/workload-identity) with adaptive policies.
+Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
We previously announced in April 2020, a new combined registration experience en
**Service category:** Authentications (Logins)
**Product capability:** User Authentication
-A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign in to another account but being silently signed in to their existing account instead, with no warning. For federated IdPs, such as ADFS, that support the [prompt=login](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh login at ADFS when a user is directed to ADFS with a login hint. This ensures that the user is signed in to the account they requested, rather than being silently signed in to the account they're already signed in with.
+A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign in to another account but being silently signed in to their existing account instead, with no warning. For federated IdPs, such as ADFS, that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh login at ADFS when a user is directed to ADFS with a login hint. This ensures that the user is signed in to the account they requested, rather than being silently signed in to the account they're already signed in with.
For more information, see the [change notice](../develop/reference-breaking-changes.md).
In November 2021, we have added following 32 new applications in our App gallery
[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure MFA](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
-You can also find the documentation of all the applications [here](https://aka.ms/AppsTutorial).
+You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
-For listing your application in the Azure AD app gallery, read the details [here](https://aka.ms/AzureADAppRequest).
+For listing your application in the Azure AD app gallery, read the details [here](../develop/v2-howto-app-gallery-listing.md).
Microsoft 365 and other apps are ending support for Internet Explorer 11 on Augu
Starting October 1, 2021, Azure AD Identity Protection will no longer generate the "Malware linked IP address" detection. No action is required, and customers will remain protected by the other detections provided by Identity Protection. To learn more about protection policies, refer to [Identity Protection policies](../identity-protection/concept-identity-protection-policies.md).
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Some applications might require the groups in a different format to how they are
- **Regex Pattern**: Use a regular expression (regex) to parse text strings according to the pattern you set in this field. If the regex pattern evaluates to true, the regex replacement pattern you outline below will run.
- **Regex replacement pattern**: Outline, in regex notation, how you would like to replace your string if the regex pattern outlined above evaluates to true. Use capture groups to match subexpressions in this replacement regular expression.
-For more information about regex replace and capture groups, see [The Regular Expression Engine - The Captured Group](https://docs.microsoft.com/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
+For more information about regex replace and capture groups, see [The Regular Expression Engine - The Captured Group](/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
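A capture-group replacement of the kind described above can be sketched in Python; the sample value and pattern are assumptions for illustration, not the portal's defaults:

```python
import re

# Pattern: capture everything after "CN=" up to the first comma.
pattern = r"^CN=([^,]+),.*$"
replacement = r"\1"  # capture group 1 becomes the emitted value

value = "CN=Sales-Admins,OU=Groups,DC=contoso,DC=com"
result = re.sub(pattern, replacement, value)
# The claim would then carry just the group name, "Sales-Admins".
print(result)
```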
>[!NOTE]
> As per the Azure AD documentation, a restricted claim cannot be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. The "Groups" claim is still a restricted claim, so you need to customize the groups claim by changing its name. If you select a restricted name for your custom group claim, the claim will be ignored at runtime.
To emit group names to be returned in netbiosDomain\samAccountName format as the
- [Add authorization using groups &amp; groups claims to an ASP.NET Core web app (Code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
- [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md)
-- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
+- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-phs.md
Password Hash Sync also enables [leaked credential detection](../identity-protec
> Only new leaked credentials found after you enable PHS will be processed against your tenant. Verifying against previously found credential pairs is not performed.
-Optionally, you can set up password hash synchronization as a backup if you decide to use [Federation with Active Directory Federation Services (AD FS)](/azure/active-directory/hybrid/how-to-connect-fed-whatis/) as your sign-in method.
+Optionally, you can set up password hash synchronization as a backup if you decide to use [Federation with Active Directory Federation Services (AD FS)](./how-to-connect-fed-whatis.md) as your sign-in method.
To use password hash synchronization in your environment, you need to:
For more information, see [What is hybrid identity?](whatis-hybrid-identity.md).
- [What is pass-through authentication (PTA)?](how-to-connect-pta.md)
- [What is federation?](whatis-fed.md)
- [What is single-sign on?](how-to-connect-sso.md)
-- [How Password hash synchronization works](how-to-connect-password-hash-synchronization.md)
+- [How Password hash synchronization works](how-to-connect-password-hash-synchronization.md)
active-directory Consent And Permissions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/consent-and-permissions-overview.md
By choosing which application consent policies apply for all users, you can set
- *You can disable user consent*. Users can't grant permissions to applications. Users continue to sign in to applications they've previously consented to or to applications that administrators have granted consent to on their behalf, but they won't be allowed to consent to new permissions to applications on their own. Only users who have been granted a directory role that includes the permission to grant consent can consent to new applications.
-- *Users can consent to applications from verified publishers or your organization, but only for permissions you select*. All users can consent only to applications that were published by a [verified publisher](/azure/active-directory/develop/publisher-verification-overview) and applications that are registered in your tenant. Users can consent only to the permissions that you've classified as *low impact*. You must [classify permissions](configure-permission-classifications.md) to select which permissions users are allowed to consent to.
+- *Users can consent to applications from verified publishers or your organization, but only for permissions you select*. All users can consent only to applications that were published by a [verified publisher](../develop/publisher-verification-overview.md) and applications that are registered in your tenant. Users can consent only to the permissions that you've classified as *low impact*. You must [classify permissions](configure-permission-classifications.md) to select which permissions users are allowed to consent to.
- *Users can consent to all applications*. This option allows all users to consent to any permissions that don't require admin consent, for any application.
After the admin consent workflow is enabled, users can request admin approval fo
## Next steps
- [Configure user consent settings](configure-user-consent.md)
-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Kubernetes Service (AKS) | [Use managed identities in Azure Kubernetes Service](../../aks/use-managed-identity.md) |
| Azure Logic Apps | [Authenticate access to Azure resources using managed identities in Azure Logic Apps](../../logic-apps/create-managed-service-identity.md) |
| Azure Log Analytics cluster | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md) |
-| Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](/azure/machine-learning/how-to-use-managed-identities?tabs=python) |
+| Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md?tabs=python) |
| Azure Managed Disk | [Use the Azure portal to enable server-side encryption with customer-managed keys for managed disks](../../virtual-machines/disks-enable-customer-managed-keys-portal.md) |
| Azure Media services | [Managed identities](../../media-services/latest/concept-managed-identities.md) |
| Azure Monitor | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md?tabs=portal) |
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
The following services support Azure AD authentication. New services are added t
| Azure IoT Hub | [Control access to IoT Hub](../../iot-hub/iot-hub-devguide-security.md) |
| Azure Key Vault | [Authentication in Azure Key Vault](../../key-vault/general/authentication.md) |
| Azure Kubernetes Service (AKS) | [Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service](../../aks/azure-ad-rbac.md) |
-| Azure Machine Learning Services | [Set up authentication for Azure Machine Learning resources and workflows](/azure/machine-learning/how-to-setup-authentication) |
+| Azure Machine Learning Services | [Set up authentication for Azure Machine Learning resources and workflows](../../machine-learning/how-to-setup-authentication.md) |
| Azure Maps | [Manage authentication in Azure Maps](../../azure-maps/how-to-manage-authentication.md) |
| Azure Media services | [Access the Azure Media Services API with Azure AD authentication](../../media-services/previous/media-services-use-aad-auth-to-access-ams-api.md) |
| Azure Monitor | [Azure AD authentication for Application Insights (Preview)](../../azure-monitor/app/azure-ad-authentication.md?tabs=net) |
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
|ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. Note that the presence of this error does not indicate that the user is in scope, because you haven't yet evaluated scoping for the user.|
|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. [For example](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group), if you do a GET request to retrieve a group and provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
-|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Please work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](https://docs.microsoft.com/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#understand-the-aad-scim-implementation).|
+|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Please work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-aad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Please ensure that you map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|
## Next steps
* [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md)
* [Problem configuring user provisioning to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem.md)
-* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
+* [Graph API for provisioning logs](/graph/api/resources/provisioningobjectsummary)
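The SystemForCrossDomainIdentityManagementMultipleEntriesInResponse error described above boils down to one rule: a filtered GET must return exactly one resource. A minimal sketch of that check, using a hypothetical RFC 7644 ListResponse payload (not output from a real SCIM endpoint):

```python
# Minimal sketch: validate that a SCIM filter query returned exactly one
# resource, which is what the Azure AD provisioning client expects.
# The payload below is a hypothetical example, not from a real endpoint.

def assert_single_match(list_response: dict) -> dict:
    """Return the lone resource, or raise if the endpoint returned several."""
    resources = list_response.get("Resources", [])
    if list_response.get("totalResults") != 1 or len(resources) != 1:
        raise ValueError(
            "Expected exactly one resource; got "
            f"totalResults={list_response.get('totalResults')}"
        )
    return resources[0]

response = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "totalResults": 1,
    "Resources": [{"id": "abf4dd94", "displayName": "Example Group"}],
}
print(assert_single_match(response)["id"])  # abf4dd94
```

A response with `totalResults` greater than one (for example, members returned despite an exclusion filter) would raise, mirroring the provisioning service's behavior.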
active-directory Github Enterprise Managed User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both GitHub Enterprise Managed User and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to GitHub Enterprise Managed User using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). > [!NOTE]
-> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO and user provisioning implementation. If you haven't specifically requested EMU instance, you have standard GitHub Enterprise Cloud plan. In that case, please refer to [the documentation](/azure/active-directory/saas-apps/github-provisioning-tutorial) to configure user provisioning in your non-EMU organisation. User provisioning is not supported for [GitHub Enteprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts)
+> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO and user provisioning implementation. If you haven't specifically requested an EMU instance, you have the standard GitHub Enterprise Cloud plan. In that case, please refer to [the documentation](./github-provisioning-tutorial.md) to configure user provisioning in your non-EMU organisation. User provisioning is not supported for [GitHub Enterprise Accounts](https://docs.github.com/enterprise-cloud@latest/admin/overview/about-enterprise-accounts).
## Capabilities Supported > [!div class="checklist"]
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Github Enterprise Managed User Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md
In this tutorial, you'll learn how to integrate GitHub Enterprise Managed User (
* Manage your accounts in one central location - the Azure portal. > [!NOTE]
-> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO implementation. If you haven't specifically requested EMU instance, you have standard GitHub Enterprise Cloud plan. In that case, please refer to relevant documentation to configure your non-EMU [organisation](/azure/active-directory/saas-apps/github-tutorial) or [enterprise account](/azure/active-directory/saas-apps/github-enterprise-cloud-enterprise-account-tutorial) to authenticate with Azure Active Directory.
+> [GitHub Enterprise Managed Users](https://docs.github.com/enterprise-cloud@latest/admin/authentication/managing-your-enterprise-users-with-your-identity-provider/about-enterprise-managed-users) is a feature of GitHub Enterprise Cloud which is different from GitHub Enterprise's standard SAML SSO implementation. If you haven't specifically requested an EMU instance, you have the standard GitHub Enterprise Cloud plan. In that case, please refer to the relevant documentation to configure your non-EMU [organisation](./github-tutorial.md) or [enterprise account](./github-enterprise-cloud-enterprise-account-tutorial.md) to authenticate with Azure Active Directory.
## Prerequisites
In this section, you'll take the information provided from AAD above and enter t
## Next steps
-GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning, you can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.
+GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning. You can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/configure-kubenet-dual-stack.md
curl -s "http://[${SERVICE_IP}]" | head -n5
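The curl command above wraps the dual-stack service's IPv6 address in brackets; the same bracket rule applies when building URLs in code. A minimal sketch with a placeholder address (not a real service IP):

```python
# Minimal sketch: an IPv6 literal in a URL must be wrapped in brackets,
# just as the curl command above does with ${SERVICE_IP}.
from urllib.parse import urlsplit

service_ip = "2001:db8::10"  # placeholder address, not a real service IP
url = f"http://[{service_ip}]/"

parts = urlsplit(url)
print(parts.hostname)  # 2001:db8::10
```

`urlsplit` strips the brackets when extracting the hostname, so the address round-trips cleanly.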
[kubernetes-dual-stack]: https://kubernetes.io/docs/concepts/services-networking/dual-stack/ <!-- LINKS - Internal -->
-[deploy-arm-template]: /azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal
-[deploy-bicep-template]: /azure/azure-resource-manager/bicep/deploy-cli
-[kubenet]: /azure/aks/configure-kubenet
-[aks-out-of-tree]: /azure/aks/out-of-tree
-[nat-gateway]: /azure/virtual-network/nat-gateway/nat-overview
+[deploy-arm-template]: ../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md
+[deploy-bicep-template]: ../azure-resource-manager/bicep/deploy-cli.md
+[kubenet]: ./configure-kubenet.md
+[aks-out-of-tree]: ./out-of-tree.md
+[nat-gateway]: ../virtual-network/nat-gateway/nat-overview.md
[install-azure-cli]: /cli/azure/install-azure-cli [aks-network-concepts]: concepts-network.md [aks-network-nsg]: concepts-network.md#network-security-groups
curl -s "http://[${SERVICE_IP}]" | head -n5
[express-route]: ../expressroute/expressroute-introduction.md [network-comparisons]: concepts-network.md#compare-network-models [custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
For more information on Open Liberty, see [the Open Liberty project page](https:
* Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)). * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
- * Create a user-assigned managed identity and assign `Contributor` role to that identity by following the steps in [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). Return to this document after creating the identity and assigning it the necessary role.
+ * Create a user-assigned managed identity and assign `Contributor` role to that identity by following the steps in [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Return to this document after creating the identity and assigning it the necessary role.
## Create a Jakarta EE runtime using the portal
The steps in this section guide you through creating an Azure Database for Postg
--end-ip-address YOUR_IP_ADDRESS ```
-If you don't want to use the CLI, you may use the Azure portal by following the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](/azure/postgresql/quickstart-create-server-database-portal). You must also grant access to Azure services by following the steps in [Firewall rules in Azure Database for PostgreSQL - Single Server](/azure/postgresql/concepts-firewall-rules#connecting-from-azure). Return to this document after creating and configuring the database server.
+If you don't want to use the CLI, you may use the Azure portal by following the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](../postgresql/quickstart-create-server-database-portal.md). You must also grant access to Azure services by following the steps in [Firewall rules in Azure Database for PostgreSQL - Single Server](../postgresql/concepts-firewall-rules.md#connecting-from-azure). Return to this document after creating and configuring the database server.
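The firewall rule above takes a start and end IP address; a quick sanity check that such a range is well-formed before passing it to the CLI, sketched with Python's `ipaddress` module (the addresses are placeholders for YOUR_IP_ADDRESS):

```python
# Minimal sketch: sanity-check a firewall rule's IP range before passing
# it to the CLI. The addresses below are documentation placeholders.
import ipaddress

def valid_range(start: str, end: str) -> bool:
    """True if both strings are IPv4 addresses and start <= end."""
    s, e = ipaddress.IPv4Address(start), ipaddress.IPv4Address(end)
    return s <= e

print(valid_range("203.0.113.5", "203.0.113.10"))  # True
```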
## Configure and deploy the sample application
az group delete --name <RESOURCE_GROUP_NAME> --yes --no-wait
* [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) * [Open Liberty](https://openliberty.io/) * [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
-* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
+* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.18.10
The steps in this section guide you through creating an Azure SQL Database single database for use with your app. If your application doesn't require a database, you can skip this section.
-1. Create a single database in Azure SQL Database by following the steps in: [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart). Return to this document after creating and configuring the database server.
+1. Create a single database in Azure SQL Database by following the steps in: [Quickstart: Create an Azure SQL Database single database](../azure-sql/database/single-database-create-quickstart.md). Return to this document after creating and configuring the database server.
> [!NOTE] > > * At the **Basics** step, write down **Database name**, ***Server name**.database.windows.net*, **Server admin login** and **Password**.
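The values noted at the **Basics** step are what the application's data source configuration needs. A minimal sketch assembling the standard Azure SQL JDBC connection URL from them (the server and database names below are placeholders, not a real deployment):

```python
# Minimal sketch: build the JDBC URL from the values written down at the
# Basics step. "myserver" and "mydb" are placeholders.
server_name = "myserver"   # Server name, without the .database.windows.net suffix
database_name = "mydb"     # Database name

jdbc_url = (
    f"jdbc:sqlserver://{server_name}.database.windows.net:1433;"
    f"database={database_name};encrypt=true;loginTimeout=30;"
)
print(jdbc_url)
```

The admin login and password would be supplied separately (for example, as container secrets) rather than embedded in the URL.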
You can learn more from references used in this guide:
* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/) * [Liberty Maven Plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin) * [Open Liberty Container Images](https://github.com/OpenLiberty/ci.docker)
-* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
+* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| 1.23 | Dec 2021 | Jan 2022 | Feb 2022 | 1.26 GA | > [!NOTE]
-> AKS and the Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and node pools on 1.19 as a courtesy. Customers with clusters and node pools on 1.19 after the [announced deprecation date of 2021-11-30](/azure/aks/supported-kubernetes-versions#aks-kubernetes-release-calendar) will be granted an extension of capabilities outside the [usual scope of support for deprecated versions](/azure/aks/supported-kubernetes-versions#kubernetes-version-support-policy).
+> AKS and the Holiday Season: To ease the burden of upgrade and change during the holiday season, AKS is extending a limited scope of support for all clusters and node pools on 1.19 as a courtesy. Customers with clusters and node pools on 1.19 after the [announced deprecation date of 2021-11-30](#aks-kubernetes-release-calendar) will be granted an extension of capabilities outside the [usual scope of support for deprecated versions](#kubernetes-version-support-policy).
The scope of this limited extension is effective from '2021-12-01 to 2022-01-31' and is limited to the following: > * Creation of new clusters and node pools on 1.19. > * CRUD operations on 1.19 clusters.
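The extension above applies only to clusters and node pools on 1.19. Deciding whether a given cluster falls in scope is a minor-version comparison; a sketch in plain string parsing (the version value would come from the cluster, for example via `az aks show --query kubernetesVersion`):

```python
# Minimal sketch: check whether a Kubernetes version string falls under the
# 1.19 holiday-season scope described above. Pure string parsing.
def is_1_19(version: str) -> bool:
    major, minor = version.split(".")[:2]
    return (int(major), int(minor)) == (1, 19)

print(is_1_19("1.19.11"))  # True
print(is_1_19("1.23.0"))   # False
```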
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
[az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions [preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
-[get-azaksversion]: /powershell/module/az.aks/get-azaksversion
+[get-azaksversion]: /powershell/module/az.aks/get-azaksversion
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-backup.md
Backing up tabular model databases in Azure Analysis Services is much the same a
> > [!NOTE]
-> If the storage account is in a different region, configure storage account firewall settings to allow access from **Selected networks**. In Firewall **Address range**, specify the IP address range for the region the Analysis Services server is in. Configuring storage account firewall settings to allow access from All networks is supported, however choosing Selected networks and specifying an IP address range is preferred. To learn more, see [Network connectivity FAQ](/azure/analysis-services/analysis-services-network-faq#backup-and-restore).
+> If the storage account is in a different region, configure storage account firewall settings to allow access from **Selected networks**. In Firewall **Address range**, specify the IP address range for the region the Analysis Services server is in. Configuring storage account firewall settings to allow access from All networks is supported; however, choosing Selected networks and specifying an IP address range is preferred. To learn more, see [Network connectivity FAQ](./analysis-services-network-faq.yml).
Backups are saved with an .abf extension. For in-memory tabular models, both model data and metadata are stored. For DirectQuery tabular models, only model metadata is stored. Backups can be compressed and encrypted, depending on the options you choose.
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
JBoss EAP is only available on the Premium v3 and Isolated v2 App Service Plan t
### JDK versions and maintenance
-Microsoft and Adoptium builds of OpenJDK are provided and supported on App Service for Java 8, 11, and 17. These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and runnning Java SE applications. For local development or testing, you can install the Microsoft build of OpenJDK from the [downloads page](https://docs.microsoft.com/java/openjdk/download). The table below describes the new Java versions included in the January 2022 App Service platform release:
+Microsoft and Adoptium builds of OpenJDK are provided and supported on App Service for Java 8, 11, and 17. These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications. For local development or testing, you can install the Microsoft build of OpenJDK from the [downloads page](/java/openjdk/download). The table below describes the new Java versions included in the January 2022 App Service platform release:
| Java Version | Linux | Windows | |--||-|
Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will
If a supported Java runtime will be retired, Azure developers using the affected runtime will be given a deprecation notice at least six months before the runtime is retired. -- [Reasons to move to Java 11](https://docs.microsoft.com/java/openjdk/reasons-to-move-to-java-11?toc=/azure/developer/java/fundamentals/toc.json&bc=/azure/developer/breadcrumb/toc.json)-- [Java 7 migration guide](https://docs.microsoft.com/java/openjdk/transition-from-java-7-to-java-8?toc=/azure/developer/java/fundamentals/toc.json&bc=/azure/developer/breadcrumb/toc.json)
+- [Reasons to move to Java 11](/java/openjdk/reasons-to-move-to-java-11?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
+- [Java 7 migration guide](/java/openjdk/transition-from-java-7-to-java-8?bc=%2fazure%2fdeveloper%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdeveloper%2fjava%2ffundamentals%2ftoc.json)
### Local development
Developers can download the Production Edition of Azul Zulu Enterprise JDK for l
### Development support
-Product support for the [Microsoft Build of OpenJDK](https://docs.microsoft.com/java/openjdk/download) is available through Microsoft when developing for Azure or [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) with a [qualified Azure support plan](https://azure.microsoft.com/support/plans/).
+Product support for the [Microsoft Build of OpenJDK](/java/openjdk/download) is available through Microsoft when developing for Azure or [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) with a [qualified Azure support plan](https://azure.microsoft.com/support/plans/).
## Next steps Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation. - [App Service Linux FAQ](faq-app-service-linux.yml)-- [Environment variables and app settings reference](reference-app-settings.md)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
In the example, replace the placeholders with your subscription ID, resource gro
OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal). Create the Active Directory application.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive az ad app create --display-name myApp
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
In the example above, replace the placeholders with your subscription ID, resour
OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal). Create the Active Directory application.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive az ad app create --display-name myApp
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Azure Logic Apps is a cloud-based platform that can be used to automate workflow
* Integrate workflows with software as a service (SaaS) and enterprise applications. * Automate enterprise application integration (EAI), business-to-business(B2B), and electronic data interchange (EDI) tasks.
-For more information, *see* [Logic Apps Overview](/azure/logic-apps/logic-apps-overview).
+For more information, *see* [Logic Apps Overview](../../logic-apps/logic-apps-overview.md).
In this tutorial, you'll learn how to build a Logic App connector flow to automate the following tasks:
At this point, you should have a Form Recognizer resource and a OneDrive folder
* **Subscription**. Select your current subscription. * **Resource group**. The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. Choose the same resource group you have for your Form Recognizer resource.
- * **Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](/azure/logic-apps/logic-apps-pricing#consumption-pricing).
+ * **Type**. Select **Consumption**. The Consumption resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](../../logic-apps/logic-apps-pricing.md#consumption-pricing).
* **Logic App name**. Enter a name for your resource. We recommend using a descriptive name, for example *YourNameLogicApp*. * **Region**. Select your local region. * **Enable log analytics**. For this project, select **No**.
Now that we've created the flow, the last thing to do is to test it and make sur
1. Check your email and you should see a new email with the information we pre-specified.
-1. Be sure to [disable or delete](/azure/logic-apps/manage-logic-apps-with-azure-portal#disable-or-enable-a-single-logic-app) your logic App after you're done so usage stops.
+1. Be sure to [disable or delete](../../logic-apps/manage-logic-apps-with-azure-portal.md#disable-or-enable-a-single-logic-app) your logic App after you're done so usage stops.
Congratulations! You've officially completed this tutorial. ## Next steps > [!div class="nextstepaction"]
-> [Use the invoice processing prebuilt model in Power Automate](/ai-builder/flow-invoice-processing?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json)
+> [Use the invoice processing prebuilt model in Power Automate](/ai-builder/flow-invoice-processing?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
Azure Automation handles jobs on Hybrid Runbook Workers differently from jobs ru
Jobs for Hybrid Runbook Workers run under the local **System** account. >[!NOTE] > To run PowerShell 7.x on a Windows Hybrid Runbook Worker, see [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
-> We support [Hybrid worker extension based](/azure/automation/extension-based-hybrid-runbook-worker-install) and [agent based](/azure/automation/automation-windows-hrw-install) onboarding.
+> We support [Hybrid worker extension based](./extension-based-hybrid-runbook-worker-install.md) and [agent based](./automation-windows-hrw-install.md) onboarding.
> For agent based onboarding, ensure the Windows Hybrid Runbook worker version is 7.3.1296.0 or above. Make sure the path where the *pwsh.exe* executable is located is added to the PATH environment variable. Restart the Hybrid Runbook Worker after installation completes.
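The note above requires the directory containing *pwsh.exe* to appear on the PATH environment variable. A minimal sketch of that check as pure string handling (the Windows paths below are illustrative, not a real install):

```python
# Minimal sketch: verify a directory appears in a PATH-style variable, as
# the pwsh.exe requirement above demands. Paths are illustrative only.
import os

def dir_on_path(directory: str, path_env: str, sep: str = os.pathsep) -> bool:
    """True if the directory is one of the PATH entries."""
    return directory in path_env.split(sep)

# A hypothetical Windows PATH value; Windows uses ';' as the separator.
windows_path = r"C:\Windows\System32;C:\Program Files\PowerShell\7"
print(dir_on_path(r"C:\Program Files\PowerShell\7", windows_path, sep=";"))  # True
```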
Make sure the path where the *pwsh.exe* executable is located and is added to th
>[!NOTE] > To run PowerShell 7.x on a Linux Hybrid Runbook Worker, see [Installing PowerShell on Linux](/powershell/scripting/install/installing-powershell-on-linux).
-> We support [Hybrid worker extension based](/azure/automation/extension-based-hybrid-runbook-worker-install) and [agent based](/azure/automation/automation-linux-hrw-install) onboarding.
+> We support [Hybrid worker extension based](./extension-based-hybrid-runbook-worker-install.md) and [agent based](./automation-linux-hrw-install.md) onboarding.
> For agent based onboarding, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
The Linux Hybrid Runbook Worker executes runbooks as a special user that can be
After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. > [!NOTE]
-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](/azure/automation/extension-based-hybrid-runbook-worker-install#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker).
+> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker).
## Prerequisites
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
Azure Automation stores and manages runbooks and then delivers them to one or mo
After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. > [!NOTE]
-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2)on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](/azure/automation/extension-based-hybrid-runbook-worker-install#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker).
+> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker).
## Prerequisites
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/source-control-integration.md
Azure Automation supports three types of source control:
## Prerequisites * A source control repository (GitHub or Azure DevOps)
-* The Automation account requires either a system-assigned or user assigned [managed identity](automation-security-overview.md#managed-identities). If you haven't configured a managed identity with your Automation account, see [Enable system-assigned managed identity](enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) or [enable user-assigned managed identity](/azure/automation/add-user-assigned-identity) to create it.
+* The Automation account requires either a system-assigned or user-assigned [managed identity](automation-security-overview.md#managed-identities). If you haven't configured a managed identity with your Automation account, see [Enable system-assigned managed identity](enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) or [enable user-assigned managed identity](./add-user-assigned-identity.md) to create it.
* Assign the user-assigned or system-assigned managed identity to the [Contributor](automation-role-based-access-control.md#contributor) role in the Automation account. > [!NOTE]
Currently, you can't use the Azure portal to update the PAT in source control. W
## Next steps * For integrating source control in Azure Automation, see [Azure Automation: Source Control Integration in Azure Automation](https://azure.microsoft.com/blog/azure-automation-source-control-13/).
-* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
+* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/state-configuration/remove-node-and-configuration-package.md
The configuration files are stored in /etc/opt/omi/conf/dsc/configuration/. Remo
## Re-register a node
-You can re-register a node just as you registered the node initially, using any of the methods described in [Enable Azure Automation State Configuration](/azure/automation/automation-dsc-onboarding)
+You can re-register a node just as you registered the node initially, using any of the methods described in [Enable Azure Automation State Configuration](../automation-dsc-onboarding.md).
## Remove the DSC package from a Linux node
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-overview.md
Datacenter locations are selected by using rigorous vulnerability risk assessmen
With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
-Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM API called: checkZonePeers to compare zone mapping for resilient solutions that span across multiple subscriptions.
+Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM API, [checkZonePeers](/rest/api/resources/subscriptions/check-zone-peers), to compare zone mapping for resilient solutions that span across multiple subscriptions.
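The checkZonePeers API is a subscription-scoped POST under the Microsoft.Resources provider. A minimal sketch of building the request (nothing is sent; the subscription ID is a placeholder, and the `api-version` and body shape are assumptions to verify against the REST reference):

```python
# Minimal sketch: construct the ARM checkZonePeers request. The
# api-version and body shape are assumptions; the subscription ID
# is a placeholder. No request is actually sent.
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    "/providers/Microsoft.Resources/checkZonePeers"
    "?api-version=2022-12-01"  # assumed version; check the REST reference
)
body = {
    "location": "eastus",
    "subscriptionIds": [f"subscriptions/{subscription_id}"],
}
print(url)
```

The response maps the caller's logical zones to the peer subscription's zones, which is the comparison described above.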
You can design resilient solutions by using Azure services that use availability zones. Co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones.
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
In this article, you will apply a .yaml file to:
> Some of the data services tiers and modes are generally available and some are in preview.
> If you install GA and preview services on the same data controller, you can't upgrade in place.
> To upgrade, delete all non-GA database instances. You can find the list of generally available
-> and preview services in the [Release Notes](/azure/azure-arc/data/release-notes).
+> and preview services in the [Release Notes](./release-notes.md).
## Prerequisites
monitorstack Ready 41m
## Troubleshoot upgrade problems
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
+If you encounter any trouble while upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-bridge/overview.md
URLS:
## Next steps
-To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](/azure/azure-arc/vmware-vsphere/overview) article.
+To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the [Overview](../vmware-vsphere/overview.md) article.
azure-cache-for-redis Cache Best Practices Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-kubernetes.md
A pod running the client application can be affected by other pods running on th
## Linux-hosted client applications and TCP settings
-If your Azure Cache for Redis client application runs on a Linux-based container, we recommend updating some TCP settings as detailed in [TCP settings for Linux-hosted client applications](cache-best-practices-connection.md#tcp-settings-for-linux-hosted-client-applications).
+If your Azure Cache for Redis client application runs on a Linux-based container, we recommend updating some TCP settings. These settings are detailed in [TCP settings for Linux-hosted client applications](cache-best-practices-connection.md#tcp-settings-for-linux-hosted-client-applications).
+
+## Potential connection collision with *Istio/Envoy*
+
+Currently, Azure Cache for Redis uses ports 15000-15019 on clustered caches to expose cluster nodes to client applications. As documented in [Ports used by Istio](https://istio.io/latest/docs/ops/deployment/requirements/#ports-used-by-istio), the same ports are also used by *Envoy*, the *Istio* sidecar proxy, which can interfere with creating connections, especially on port 15006.
+
+To avoid connection interference, we recommend:
+
+- Use a non-clustered cache instead
+- Avoid configuring *Istio* sidecars on pods running Azure Cache for Redis client code
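If a client pod must run alongside an *Istio* sidecar anyway, one possible mitigation is to exclude the cache's cluster ports from sidecar interception with Istio's standard traffic annotation (a sketch; the port list mirrors the range mentioned above):

```yaml
# Illustrative pod-template fragment: keep the Envoy sidecar from
# intercepting outbound traffic to the cache's cluster node ports.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeOutboundPorts: "15000,15001,15002,15003,15004,15005,15006"
```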
## Next steps
azure-cache-for-redis Cache Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-network-isolation.md
In this article, you'll learn how to determine the best network isolation solu
## Azure Private Link
-Azure Private Link provides private connectivity from a virtual network to Azure PaaS services. It simplifies the network architecture and secures the connection between endpoints in Azure. It secures the connection by eliminating data exposure to the public internet.
+Azure Private Link provides private connectivity from a virtual network to Azure PaaS services. Private Link simplifies the network architecture and secures the connection between endpoints in Azure. Private Link also secures the connection by eliminating data exposure to the public internet.
-### Advantages
+### Advantages of Private Link
* Supported on Basic, Standard, and Premium Azure Cache for Redis instances.
* By using [Azure Private Link](../private-link/private-link-overview.md), you can connect to an Azure Cache instance from your virtual network via a private endpoint. The endpoint is assigned a private IP address in a subnet within the virtual network. With this private link, cache instances are available from both within the VNet and publicly.
-* Once a private endpoint is created, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link (cache-private-link.md).
+* Once a private endpoint is created, access to the public network can be restricted through the `publicNetworkAccess` flag. This flag is set to `Disabled` by default, which will only allow private link access. You can set the value to `Enabled` or `Disabled` with a PATCH request. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md).
* All external cache dependencies won't affect the VNet's NSG rules.
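The `publicNetworkAccess` update described above can be sketched as a management-plane PATCH (the resource path and api-version here are illustrative):

```http
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Cache/redis/{cacheName}?api-version=2021-06-01
Content-Type: application/json

{
  "properties": {
    "publicNetworkAccess": "Enabled"
  }
}
```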
-### Limitations
+### Limitations of Private Link
* Network security groups (NSG) are disabled for private endpoints. However, if there are other resources on the subnet, NSG enforcement will apply to those resources.
-* Currently, portal console support, and persistence to firewall storage accounts are not supported.
+* Currently, portal console support, and persistence to firewall storage accounts aren't supported.
* To connect to a clustered cache, `publicNetworkAccess` needs to be set to `Disabled` and there can only be one private endpoint connection.
> [!NOTE]
Azure Private Link provides private connectivity from a virtual network to Azure
VNet is the fundamental building block for your private network in Azure. VNet enables many Azure resources to securely communicate with each other, the internet, and on-premises networks. VNet is like a traditional network you would operate in your own data center. However, VNet also has the benefits of Azure infrastructure, scale, availability, and isolation.
-### Advantages
+### Advantages of VNet injection
* When an Azure Cache for Redis instance is configured with a VNet, it's not publicly addressable. It can only be accessed from virtual machines and applications within the VNet.
* When VNet is combined with restricted NSG policies, it helps reduce the risk of data exfiltration.
* VNet deployment provides enhanced security and isolation for your Azure Cache for Redis. Subnets, access control policies, and other features further restrict access.
* Geo-replication is supported.
-### Limitations
+### Limitations of VNet injection
* VNet injected caches are only available for Premium Azure Cache for Redis.
* When using a VNet injected cache, you must configure your VNet to allow access to cache dependencies such as CRLs/PKI, AKV, Azure Storage, Azure Monitor, and more.
VNet is the fundamental building block for your private network in Azure. VNet e
[Azure Firewall](../firewall/overview.md) is a managed, cloud-based network security service that protects your Azure VNet resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.
-### Advantages
+### Advantages of firewall rules
* When firewall rules are configured, only client connections from the specified IP address ranges can connect to the cache. Connections from Azure Cache for Redis monitoring systems are always permitted, even if firewall rules are configured. NSG rules that you define are also permitted.
-### Limitations
+### Limitations of firewall rules
* Firewall rules can be used with VNet injected caches, but not private endpoints currently.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
A function app must include these application settings:
| AzureWebJobsStorage | A connection string to a storage account that the Functions runtime uses for internal queueing | See [Storage account](#storage) |
| FUNCTIONS_EXTENSION_VERSION | The version of the Azure Functions runtime | `~3` |
| FUNCTIONS_WORKER_RUNTIME | The language stack to be used for functions in this app | `dotnet`, `node`, `java`, `python`, or `powershell` |
-| WEBSITE_NODE_DEFAULT_VERSION | Only needed if using the `node` language stack, specifies the [version](/azure/azure-functions/functions-reference-node#node-version) to use | `~14` |
+| WEBSITE_NODE_DEFAULT_VERSION | Only needed if using the `node` language stack, specifies the [version](./functions-reference-node.md#node-version) to use | `~14` |
These properties are specified in the `appSettings` collection in the `siteConfig` property:
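As a hedged sketch of that fragment for a Node.js app (the `storageConnectionString` parameter is a placeholder; the values mirror the table above):

```json
"siteConfig": {
  "appSettings": [
    { "name": "AzureWebJobsStorage", "value": "[parameters('storageConnectionString')]" },
    { "name": "FUNCTIONS_EXTENSION_VERSION", "value": "~3" },
    { "name": "FUNCTIONS_WORKER_RUNTIME", "value": "node" },
    { "name": "WEBSITE_NODE_DEFAULT_VERSION", "value": "~14" }
  ]
}
```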
Learn more about how to develop and configure Azure Functions.
<!-- LINKS --> [Function app on Consumption plan]: https://azure.microsoft.com/resources/templates/function-app-create-dynamic/
-[Function app on Azure App Service plan]: https://azure.microsoft.com/resources/templates/function-app-create-dedicated/
+[Function app on Azure App Service plan]: https://azure.microsoft.com/resources/templates/function-app-create-dedicated/
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-java.md
Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided a
| Java 8 | 1.8.0_302 (Adoptium) | 1.8.0_302 (Adoptium) |
| Java 11 | 11.0.12 (MSFT) | 11.0.12 (MSFT) |
-For local development or testing, you can download the [Microsoft build of OpenJDK](https://docs.microsoft.com/java/openjdk/download) or [Adoptium Temurin](https://adoptium.net/?variant=openjdk8&jvmVariant=hotspot) binaries for free. [Azure support](https://azure.microsoft.com/support/) for issues with the JDKs and function apps is available with a [qualified support plan](https://azure.microsoft.com/support/plans/).
+For local development or testing, you can download the [Microsoft build of OpenJDK](/java/openjdk/download) or [Adoptium Temurin](https://adoptium.net/?variant=openjdk8&jvmVariant=hotspot) binaries for free. [Azure support](https://azure.microsoft.com/support/) for issues with the JDKs and function apps is available with a [qualified support plan](https://azure.microsoft.com/support/plans/).
If you would like to continue using the Zulu for Azure binaries on your Function app, please [configure your app accordingly](https://github.com/Azure/azure-functions-java-worker/wiki/Customize-JVM-to-use-Zulu). You can continue to use the Azul binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you eventually remove this configuration so that your Function apps use the latest available version of Java.
For more information about Azure Functions Java development, see the following r
* Local development and debug with [Visual Studio Code](https://code.visualstudio.com/docs/java)
* [Remote Debug Java functions using Visual Studio Code](https://code.visualstudio.com/docs/java/java-serverless#_remote-debug-functions-running-in-the-cloud)
* [Maven plugin for Azure Functions](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-functions-maven-plugin/README.md)
-* Streamline function creation through the `azure-functions:add` goal, and prepare a staging directory for [ZIP file deployment](deployment-zip-push.md).
+* Streamline function creation through the `azure-functions:add` goal, and prepare a staging directory for [ZIP file deployment](deployment-zip-push.md).
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-authentication.md
Disabling local authentication doesn't take effect immediately. Allow a few minu
Shared Access Signature token authentication is in preview.
-Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by first integrating a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using one of the built-in or custom role definitions.
+Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by first integrating a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using one of the built-in or custom role definitions.
Key functional differences between SAS tokens and Azure AD access tokens:
Only one CORS rule with its list of allowed origins can be specified. Each origi
### Remove CORS policy
-You can remove CORS manually in the Azure portal, or programmatically using the Azure Maps SDK, Azure Maps management REST API or an [ARM template](/azure/azure-resource-manager/templates/overview).
+You can remove CORS manually in the Azure portal, or programmatically using the Azure Maps SDK, Azure Maps management REST API or an [ARM template](../azure-resource-manager/templates/overview.md).
> [!TIP]
> If you use the Azure Maps management REST API, use `PUT` or `PATCH` with an empty `corsRule` list in the request body.
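The empty-list body from the tip above might look like this (property names are a sketch, not the verbatim Maps management schema):

```json
{
  "properties": {
    "cors": {
      "corsRules": []
    }
  }
}
```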
To learn more about authenticating an application with Azure AD and Azure Maps,
To learn more about authenticating the Azure Maps Map Control with Azure AD, see
> [!div class="nextstepaction"]
-> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
+> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-sas-app.md
This article describes how to create an Azure Maps account with a SAS token that
This scenario assumes: - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.
-- The current user must have subscription `Owner` role permissions on the Azure subscription to create an [Azure Key Vault](/azure/key-vault/general/basic-concepts), user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.
+- The current user must have subscription `Owner` role permissions on the Azure subscription to create an [Azure Key Vault](../key-vault/general/basic-concepts.md), user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.
- Azure CLI is installed to deploy the resources. Read more on [How to install the Azure CLI](/cli/azure/install-azure-cli). - The current user is signed-in to Azure CLI with an active Azure subscription using `az login`.
Find the API usage metrics for your Azure Maps account:
Explore samples that show how to integrate Azure AD with Azure Maps:
> [!div class="nextstepaction"]
-> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps.md
Here is a list of useful technical resources for Azure Maps.
- Documentation: [https://aka.ms/AzureMapsDocs](./index.yml) - Web SDK Code Samples: [https://aka.ms/AzureMapsSamples](https://aka.ms/AzureMapsSamples) - Developer Forums: [https://aka.ms/AzureMapsForums](/answers/topics/azure-maps.html)
-- Videos: [https://aka.ms/AzureMapsVideos](https://aka.ms/AzureMapsVideos)
+- Videos: [https://aka.ms/AzureMapsVideos](/shows/)
- Blog: [https://aka.ms/AzureMapsBlog](https://aka.ms/AzureMapsBlog) - Tech Blog: [https://aka.ms/AzureMapsTechBlog](https://aka.ms/AzureMapsTechBlog) - Azure Maps Feedback (UserVoice): [https://aka.ms/AzureMapsFeedback](/answers/topics/25319/azure-maps.html)
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux.md
The default cache size is 10 MB but can be modified in the [omsagent.conf file](
- Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while installing or managing the agent.
-- Review [Agent Data Sources](https://docs.microsoft.com/azure/azure-monitor/agents/agent-data-sources) to learn about data source configuration.
+- Review [Agent Data Sources](./agent-data-sources.md) to learn about data source configuration.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information |
|:---|:---|:---|
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Forwarding Event (WEF): [Public preview](/azure/sentinel/data-connectors-reference#windows-forwarded-events-preview)</li><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
## Next steps - [Install the Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
Alert processing rules allow you to specify that logic in a single rule, instead
### Add action groups to all alert types
-Azure Monitor alert rules let you select which action groups will be triggered when their alerts are fired. However, not all Azure alert sources let you specify action groups. Some examples of such alerts include [Azure Backup alerts](/azure/backup/backup-azure-monitoring-built-in-monitor#azure-monitor-alerts-for-azure-backup-preview.md), [VM Insights guest health alerts](/azure/azure-monitor/vm/vminsights-health-alerts#configure-notifications.md), [Azure Stack Edge](/azure/databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications), and Azure Stack Hub.
+Azure Monitor alert rules let you select which action groups will be triggered when their alerts are fired. However, not all Azure alert sources let you specify action groups. Some examples of such alerts include [Azure Backup alerts](../../backup/backup-azure-monitoring-built-in-monitor.md), [VM Insights guest health alerts](../vm/vminsights-health-alerts.md), [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md), and Azure Stack Hub.
For those alert types, you can use alert processing rules to add action groups.
> [!NOTE]
-> Alert processing rules do not affect [Azure Service Health](/azure/service-health/service-health-overview) alerts.
+> Alert processing rules do not affect [Azure Service Health](../../service-health/service-health-overview.md) alerts.
## Alert processing rule properties <a name="filter-criteria"></a>
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
The following fields are the options that you can use in the Azure Resource Mana
1. `level`: Level of the activity in the activity log event that the alert should be generated on. For example: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.
1. `operationName`: The name of the operation in the activity log event. For example: `Microsoft.Resources/deployments/write`.
1. `resourceGroup`: Name of the resource group for the impacted resource in the activity log event.
-1. `resourceProvider`: For more information, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](/azure/azure-resource-manager/management/resource-providers-and-types).
+1. `resourceProvider`: For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md).
1. `status`: String describing the status of the operation in the activity event. For example: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`.
1. `subStatus`: Usually, this field is the HTTP status code of the corresponding REST call. But it can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others.
1. `resourceType`: The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`.
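In an activity log alert rule's ARM definition, these fields are matched inside the `condition.allOf` array; a minimal hedged sketch:

```json
"condition": {
  "allOf": [
    { "field": "category", "equals": "Administrative" },
    { "field": "operationName", "equals": "Microsoft.Resources/deployments/write" },
    { "field": "level", "equals": "Error" }
  ]
}
```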
You can remove activity log alert rule resources by using the Azure CLI command
- Learn about [webhook schema for activity logs](./activity-log-alerts-webhook.md). - Read an [overview of activity logs](./activity-log-alerts.md). - Learn more about [action groups](./action-groups.md).
-- Learn about [service health notifications](../../service-health/service-notifications.md).
+- Learn about [service health notifications](../../service-health/service-notifications.md).
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Try the following steps to resolve the problem:
1. Try running the query in Azure Monitor Logs, and fix any syntax issues.
2. If your query syntax is valid, check the connection to the service.
   - Flush the DNS cache on your local machine, by opening a command prompt and running the following command: `ipconfig /flushdns`, and then check again. If you still get the same error message, try the next step.
- - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](https://docs.microsoft.com/azure/azure-monitor/app/ip-addresses#application-insights--log-analytics-apis).
+ - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../app/ip-addresses.md#application-insights--log-analytics-apis).
## Next steps - Learn about [log alerts in Azure](./alerts-unified-log.md). - Learn more about [configuring log alerts](../logs/log-query-overview.md).
-- Learn more about [log queries](../logs/log-query-overview.md).
+- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/solutions.md
# Monitoring solutions in Azure Monitor
> [!CAUTION]
-> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](/azure/azure-monitor/monitor-reference#insights-and-curated-visualizations).
+> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](../monitor-reference.md#insights-and-curated-visualizations).
Monitoring solutions in Azure Monitor provide analysis of the operation of an Azure application or service. This article gives a brief overview of monitoring solutions in Azure and details on using and installing them.
Remove-AzMonitorLogAnalyticsSolution -ResourceGroupName MyResourceGroup -Name W
* Get a [list of monitoring solutions from Microsoft](../monitor-reference.md).
* Learn how to [create queries](../logs/log-query-overview.md) to analyze data that monitoring solutions have collected.
-* See all [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor).
+* See all [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor).
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
When setting up your profile for SQL monitoring, you will need one of the follo
If you have these permissions, a new Key Vault access policy will be automatically created as part of creating your SQL Monitoring profile that uses the Key Vault you specified.
> [!IMPORTANT]
-> You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](/azure/key-vault/general/access-behind-firewall) and [Configure Azure Key Vault networking settings](/azure/key-vault/general/how-to-azure-key-vault-network-security).
+> You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md) and [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md).
## Create SQL monitoring profile
Open SQL insights by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click **Create new profile**.
If you do not see data, see [Troubleshooting SQL insights](sql-insights-troubles
## Next steps
-- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
+- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
During preview of SQL Insights, you may encounter the following known issues.
## Best practices
-* **Ensure access to Key Vault from the monitoring VM**. If you use Key Vault to store SQL authentication passwords (strongly recommended), you need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](/azure/key-vault/general/access-behind-firewall) and [Configure Azure Key Vault networking settings](/azure/key-vault/general/how-to-azure-key-vault-network-security). To verify that the monitoring VM can access Key Vault, you can execute the following commands from an SSH session connected to the VM. You should be able to successfully retrieve the access token and the secret. Replace `[YOUR-KEY-VAULT-URL]`, `[YOUR-KEY-VAULT-SECRET]`, and `[YOUR-KEY-VAULT-ACCESS-TOKEN]` with actual values.
+* **Ensure access to Key Vault from the monitoring VM**. If you use Key Vault to store SQL authentication passwords (strongly recommended), you need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md) and [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md). To verify that the monitoring VM can access Key Vault, you can execute the following commands from an SSH session connected to the VM. You should be able to successfully retrieve the access token and the secret. Replace `[YOUR-KEY-VAULT-URL]`, `[YOUR-KEY-VAULT-SECRET]`, and `[YOUR-KEY-VAULT-ACCESS-TOKEN]` with actual values.
```bash # Get an access token for accessing Key Vault secrets
During preview of SQL Insights, you may encounter the following known issues.
## Next steps
-- Get details on [enabling SQL insights](sql-insights-enable.md).
+- Get details on [enabling SQL insights](sql-insights-enable.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/api/overview.md
The Log Analytics **Query API** is a REST API that lets you query the full set o
## Log Analytics API Authentication
You must authenticate to access the Log Analytics API.
-- To query your workspaces, you must use [Azure Active Directory authentication](https://azure.microsoft.com/documentation/articles/active-directory-whatis/).
+- To query your workspaces, you must use [Azure Active Directory authentication](../../../active-directory/fundamentals/active-directory-whatis.md).
- To quickly explore the API without using Azure AD authentication, you can use an API key to query sample data in a non-production environment.
### Azure AD authentication for workspace data
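As a sketch, an Azure AD-authenticated workspace query is a `POST` against the Query API (the KQL here is an arbitrary example):

```http
POST https://api.loganalytics.io/v1/workspaces/{workspaceId}/query
Content-Type: application/json
Authorization: Bearer <Azure-AD-access-token>

{
  "query": "Heartbeat | summarize count() by Computer"
}
```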
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
| Service | Resource Provider Namespace | Has Metrics | Has Logs | Insight | Notes |
|---|---|---|---|---|---|
- | [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftaaddomainservices) | | |
+ | [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftaaddomainservices) | | |
| [Azure Active Directory](../active-directory/index.yml) | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
- | [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftanalysisservicesservers) | | |
- | [API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftapimanagementservice) | | |
- | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftappconfigurationconfigurationstores) | | |
- | [Azure Spring Cloud](/azure/spring-cloud/overview) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftappplatformspring) | | |
- | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftattestationattestationproviders) | | |
- | [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftautomationautomationaccounts) | | |
- | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md#microsoftavsprivateclouds) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftavsprivateclouds) | | |
- | [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftbatchbatchaccounts) | | |
+ | [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftanalysisservicesservers) | | |
+ | [API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](./essentials/resource-logs-categories.md#microsoftapimanagementservice) | | |
+ | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappconfigurationconfigurationstores) | | |
+ | [Azure Spring Cloud](../spring-cloud/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | |
+ | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | |
+ | [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftautomationautomationaccounts) | | |
+ | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md#microsoftavsprivateclouds) | [**Yes**](./essentials/resource-logs-categories.md#microsoftavsprivateclouds) | | |
+ | [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbatchbatchaccounts) | | |
| [Azure Batch](../batch/index.yml) | Microsoft.BatchAI/workspaces | No | No | | |
| [Azure Cognitive Services - Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftbingaccounts) | No | | |
- | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md#microsoftblockchainblockchainmembers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftblockchainblockchainmembers) | | |
- | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftblockchaincordamembers) | | |
- | [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftbotservicebotservices) | | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | |
+ | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md#microsoftblockchainblockchainmembers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftblockchainblockchainmembers) | | |
+ | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftblockchaincordamembers) | | |
+ | [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
+ | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | |
| [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcdncdnwebapplicationfirewallpolicies) | | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcdnprofiles) | | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcdnprofilesendpoints) | | |
+ | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | |
+ | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
+ | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | |
| [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/domainNames/slots/roles | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
| [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) | No | | |
- | [Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftclassicnetworknetworksecuritygroups) | | |
+ | [Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftclassicnetworknetworksecuritygroups) | | |
| [Azure Storage (Classic)](../storage/index.yml) | Microsoft.ClassicStorage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Storage Blobs (Classic)](../storage/blobs/index.yml) | Microsoft.ClassicStorage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Storage Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Queues (Classic)](/azure/storage/queues/) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Storage Queues (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Storage Tables (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtesthostedpools) | No | | |
| Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtestpools) | No | | |
| [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md#microsoftclusterstornodes) | No | | |
- | [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcognitiveservicesaccounts) | | |
- | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcommunicationcommunicationservices) | | |
+ | [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcognitiveservicesaccounts) | | |
+ | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcommunicationcommunicationservices) | | |
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.|
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.|
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md#microsoftcomputedisks) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
| [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftconnectedvehicleplatformaccounts) | | |
+ | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftconnectedvehicleplatformaccounts) | | |
| [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
- | [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcontainerregistryregistries) | | |
- | [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
- | [Azure Custom Providers](../azure-resource-manager/custom-providers/index.yml) | Microsoft.CustomProviders/resourceProviders | [**Yes**](./essentials/metrics-supported.md#microsoftcustomprovidersresourceproviders) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcustomprovidersresourceproviders) | | |
- | [Microsoft Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | Microsoft.D365CustomerInsights/instances | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftd365customerinsightsinstances) | | |
+ | [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | |
+ | [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
+ | [Azure Custom Providers](../azure-resource-manager/custom-providers/index.yml) | Microsoft.CustomProviders/resourceProviders | [**Yes**](./essentials/metrics-supported.md#microsoftcustomprovidersresourceproviders) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcustomprovidersresourceproviders) | | |
+ | [Microsoft Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | Microsoft.D365CustomerInsights/instances | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftd365customerinsightsinstances) | | |
| [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) | Microsoft.DataBoxEdge/DataBoxEdgeDevices | [**Yes**](./essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | No | | |
- | [Azure Databricks](/azure/azure-databricks/) | Microsoft.Databricks/workspaces | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatabricksworkspaces) | | |
- | Project CI | Microsoft.DataCollaboration/workspaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatacollaborationworkspaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatacollaborationworkspaces) | | |
- | [Azure Data Factory](/azure/data-factory/) | Microsoft.DataFactory/dataFactories | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatafactorydatafactories) | No | | |
- | [Azure Data Factory](/azure/data-factory/) | Microsoft.DataFactory/factories | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatafactoryfactories) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatafactoryfactories) | | |
- | [Azure Data Lake Analytics](/azure/data-lake-analytics/) | Microsoft.DataLakeAnalytics/accounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatalakeanalyticsaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatalakeanalyticsaccounts) | | |
- | [Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction) | Microsoft.DataLakeStore/accounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatalakestoreaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatalakestoreaccounts) | | |
- | [Azure Data Share](/azure/data-share/) | Microsoft.DataShare/accounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdatashareaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdatashareaccounts) | | |
- | [Azure Database for MariaDB](/azure/mariadb/) | Microsoft.DBforMariaDB/servers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbformariadbservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbformariadbservers) | | |
- | [Azure Database for MySQL](/azure/mysql/) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbformysqlflexibleservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbformysqlflexibleservers) | | |
- | [Azure Database for MySQL](/azure/mysql/) | Microsoft.DBforMySQL/servers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbformysqlservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbformysqlservers) | | |
- | [Azure Database for PostgreSQL](/azure/postgresql/) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbforpostgresqlflexibleservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbforpostgresqlflexibleservers) | | |
- | [Azure Database for PostgreSQL](/azure/postgresql/) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbforpostgresqlservergroupsv2) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbforpostgresqlservergroupsv2) | | |
- | [Azure Database for PostgreSQL](/azure/postgresql/) | Microsoft.DBforPostgreSQL/servers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbforpostgresqlservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbforpostgresqlservers) | | |
- | [Azure Database for PostgreSQL](/azure/postgresql/) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdbforpostgresqlserversv2) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdbforpostgresqlserversv2) | | |
- | [Microsoft Windows Virtual Desktop](/azure/virtual-desktop/) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdesktopvirtualizationapplicationgroups) | [Windows Virtual Desktop Insights](/azure/virtual-desktop/azure-monitor) | |
- | [Microsoft Windows Virtual Desktop](/azure/virtual-desktop/) | Microsoft.DesktopVirtualization/hostpools | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdesktopvirtualizationhostpools) | [Windows Virtual Desktop Insights](/azure/virtual-desktop/azure-monitor) | |
- | [Microsoft Windows Virtual Desktop](/azure/virtual-desktop/) | Microsoft.DesktopVirtualization/workspaces | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdesktopvirtualizationworkspaces) | | |
- | [Azure IoT Hub](/azure/iot-hub/) | Microsoft.Devices/ElasticPools | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdeviceselasticpools) | No | | |
- | [Azure IoT Hub](/azure/iot-hub/) | Microsoft.Devices/ElasticPools/IotHubTenants | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdeviceselasticpoolsiothubtenants) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdeviceselasticpoolsiothubtenants) | | |
- | [Azure IoT Hub](/azure/iot-hub/) | Microsoft.Devices/IotHubs | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdevicesiothubs) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdevicesiothubs) | | |
- | [Azure IoT Hub Device Provisioning Service](/azure/iot-dps/) | Microsoft.Devices/ProvisioningServices | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdevicesprovisioningservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdevicesprovisioningservices) | | |
- | [Azure Digital Twins](/azure/digital-twins/overview) | Microsoft.DigitalTwins/digitalTwinsInstances | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdigitaltwinsdigitaltwinsinstances) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdigitaltwinsdigitaltwinsinstances) | | |
- | [Azure Cosmos DB](/azure/cosmos-db/) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftdocumentdbdatabaseaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](/azure/azure-monitor/insights/cosmosdb-insights-overview) | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/domains | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgriddomains) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventgriddomains) | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/eventSubscriptions | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgrideventsubscriptions) | No | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/extensionTopics | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgridextensiontopics) | No | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/partnerNamespaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgridpartnernamespaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventgridpartnernamespaces) | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/partnerTopics | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgridpartnertopics) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventgridpartnertopics) | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/systemTopics | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgridsystemtopics) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventgridsystemtopics) | | |
- | [Azure Event Grid](/azure/event-grid/) | Microsoft.EventGrid/topics | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventgridtopics) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventgridtopics) | | |
- | [Azure Event Hubs](/azure/event-hubs/) | Microsoft.EventHub/clusters | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventhubclusters) | No | 0 | |
- | [Azure Event Hubs](/azure/event-hubs/) | Microsoft.EventHub/namespaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofteventhubnamespaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofteventhubnamespaces) | 0 | |
- | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftexperimentationexperimentworkspaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftexperimentationexperimentworkspaces) | | |
- | [Azure HDInsight](/azure/hdinsight/) | Microsoft.HDInsight/clusters | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](/azure/hdinsight/log-analytics-migration#insights) | |
- | [Azure API for FHIR](/azure/healthcare-apis/) | Microsoft.HealthcareApis/services | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofthealthcareapisservices) | | |
- | [Azure API for FHIR](/azure/healthcare-apis/) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisworkspacesiotconnectors) | No | | |
- | [StorSimple](/azure/storsimple/) | microsoft.hybridnetwork/networkfunctions | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofthybridnetworknetworkfunctions) | No | | |
- | [StorSimple](/azure/storsimple/) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
- | [Azure Monitor](/azure/azure-monitor/) | microsoft.insights/autoscalesettings | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftinsightsautoscalesettings) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightsautoscalesettings) | | |
- | [Azure Monitor](/azure/azure-monitor/) | microsoft.insights/components | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftinsightscomponents) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightscomponents) | [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | |
- | [Azure IoT Central](/azure/iot-central/) | Microsoft.IoTCentral/IoTApps | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftiotcentraliotapps) | No | | |
- | [Azure Key Vault](/azure/key-vault/) | Microsoft.KeyVault/managedHSMs | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftkeyvaultmanagedhsms) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](/azure/azure-monitor/insights/key-vault-insights-overview) | |
- | [Azure Key Vault](/azure/key-vault/) | Microsoft.KeyVault/vaults | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftkeyvaultvaults) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](/azure/azure-monitor/insights/key-vault-insights-overview) | |
- | [Azure Kubernetes Service (AKS)](/azure/aks/) | Microsoft.Kubernetes/connectedClusters | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftkubernetesconnectedclusters) | No | | |
- | [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftkustoclusters) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftkustoclusters) | | |
- | [Azure Logic Apps](/azure/logic-apps/) | Microsoft.Logic/integrationAccounts | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftlogicintegrationaccounts) | | |
- | [Azure Logic Apps](/azure/logic-apps/) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftlogicintegrationserviceenvironments) | No | | |
- | [Azure Logic Apps](/azure/logic-apps/) | Microsoft.Logic/workflows | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftlogicworkflows) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftlogicworkflows) | | |
- | [Azure Machine Learning](/azure/machine-learning/) | Microsoft.MachineLearningServices/workspaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmachinelearningservicesworkspaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftmachinelearningservicesworkspaces) | | |
- | [Azure Maps](/azure/azure-maps/) | Microsoft.Maps/accounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmapsaccounts) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmediamediaservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftmediamediaservices) | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmediamediaservicesliveevents) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmediamediaservicesstreamingendpoints) | No | | |
- | [Azure Media Services](/azure/media-services/) | Microsoft.Media/videoAnalyzers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmediavideoanalyzers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftmediavideoanalyzers) | | |
- | [Azure Spatial Anchors](/azure/spatial-anchors/) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmixedrealityremoterenderingaccounts) | No | | |
- | [Azure Spatial Anchors](/azure/spatial-anchors/) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftmixedrealityspatialanchorsaccounts) | No | | |
- | [Azure NetApp Files](/azure/azure-netapp-files/) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetappnetappaccountscapacitypools) | No | | |
- | [Azure NetApp Files](/azure/azure-netapp-files/) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | |
- | [Application Gateway](/azure/application-gateway/) | Microsoft.Network/applicationGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkapplicationgateways) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways) | | |
- | [Azure Firewall](/azure/firewall/) | Microsoft.Network/azureFirewalls | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkazurefirewalls) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkazurefirewalls) | | |
- | [Azure Bastion](/azure/bastion/) | Microsoft.Network/bastionHosts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkbastionhosts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkbastionhosts) | | |
- | [VPN Gateway](/azure/vpn-gateway/) | Microsoft.Network/connections | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkconnections) | No | | |
- | [Azure DNS](/azure/dns/) | Microsoft.Network/dnszones | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkdnszones) | No | | |
- | [Azure ExpressRoute](/azure/expressroute/) | Microsoft.Network/expressRouteCircuits | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkexpressroutecircuits) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkexpressroutecircuits) | | |
- | [Azure ExpressRoute](/azure/expressroute/) | Microsoft.Network/expressRouteGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkexpressroutegateways) | No | | |
- | [Azure ExpressRoute](/azure/expressroute/) | Microsoft.Network/expressRoutePorts | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkexpressrouteports) | No | | |
- | [Azure Front Door](/azure/frontdoor/) | Microsoft.Network/frontdoors | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkfrontdoors) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkfrontdoors) | | |
- | [Azure Load Balancer](/azure/load-balancer/) | Microsoft.Network/loadBalancers | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkloadbalancers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkloadbalancers) | | |
- | [Azure Load Balancer](/azure/load-balancer/) | Microsoft.Network/natGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworknatgateways) | No | | |
- | [Azure Virtual Network](/azure/virtual-network/) | Microsoft.Network/networkInterfaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworknetworkinterfaces) | No | [Azure Network Insights](/azure/azure-monitor/insights/network-insights-overview) | |
- | [Azure Virtual Network](/azure/virtual-network/) | Microsoft.Network/networkSecurityGroups | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworknetworksecuritygroups) | [Azure Network Insights](/azure/azure-monitor/insights/network-insights-overview) | |
- | [Azure Network Watcher](/azure/network-watcher/network-watcher-monitoring-overview) | Microsoft.Network/networkWatchers/connectionMonitors | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworknetworkwatchersconnectionmonitors) | No | | |
- | [Azure Virtual WAN](/azure/virtual-wan/virtual-wan-about) | Microsoft.Network/p2sVpnGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkp2svpngateways) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkp2svpngateways) | | |
- | [Azure DNS Private Zones](/azure/dns/private-dns-privatednszone) | Microsoft.Network/privateDnsZones | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkprivatednszones) | No | | |
- | [Azure Private Link](/azure/private-link/private-link-overview) | Microsoft.Network/privateEndpoints | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkprivateendpoints) | No | | |
- | [Azure Private Link](/azure/private-link/private-link-overview) | Microsoft.Network/privateLinkServices | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkprivatelinkservices) | No | | |
- | [Azure Virtual Network](/azure/virtual-network/) | Microsoft.Network/publicIPAddresses | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkpublicipaddresses) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkpublicipaddresses) | [Azure Network Insights](/azure/azure-monitor/insights/network-insights-overview) | |
- | [Azure Traffic Manager](/azure/traffic-manager/traffic-manager-overview) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworktrafficmanagerprofiles) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworktrafficmanagerprofiles) | | |
- | [Azure Virtual WAN](/azure/virtual-wan/virtual-wan-about) | Microsoft.Network/virtualHubs | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkvirtualhubs) | No | | |
- | [Azure VPN Gateway](/azure/vpn-gateway/) | Microsoft.Network/virtualNetworkGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkvirtualnetworkgateways) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkvirtualnetworkgateways) | | |
- | [Azure Virtual Network](/azure/virtual-network/) | Microsoft.Network/virtualNetworks | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkvirtualnetworks) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkvirtualnetworks) | [Azure Network Insights](/azure/azure-monitor/insights/network-insights-overview) | |
- | [Azure Virtual Network](/azure/virtual-network/) | Microsoft.Network/virtualRouters | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkvirtualrouters) | No | | |
- | [Azure Virtual WAN](/azure/virtual-wan/virtual-wan-about) | Microsoft.Network/vpnGateways | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnetworkvpngateways) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkvpngateways) | | |
- | [Azure Notification Hubs](/azure/notification-hubs/) | Microsoft.NotificationHubs/namespaces/notificationHubs | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftnotificationhubsnamespacesnotificationhubs) | No | | |
- | [Azure Monitor](/azure/azure-monitor/) | Microsoft.OperationalInsights/workspaces | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftoperationalinsightsworkspaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftoperationalinsightsworkspaces) | | |
- | [Azure Peering Service](/azure/peering-service/) | Microsoft.Peering/peerings | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftpeeringpeerings) | No | | |
- | [Azure Peering Service](/azure/peering-service/) | Microsoft.Peering/peeringServices | [**Yes**](/azure/azure-monitor/essentials/metrics-supported#microsoftpeeringpeeringservices) | No | | |
- | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftpowerbitenants) | | |
- | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftpowerbitenantsworkspaces) | | |
- | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftpowerbidedicatedcapacities) | | |
- | [Azure Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftpurviewaccounts) | | |
- | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftrecoveryservicesvaults) | | |
- | [Azure Relay](/azure/azure-relay/relay-what-is-it) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftrelaynamespaces) | | |
+ | [Azure Databricks](/azure/azure-databricks/) | Microsoft.Databricks/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatabricksworkspaces) | | |
+ | Project CI | Microsoft.DataCollaboration/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftdatacollaborationworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatacollaborationworkspaces) | | |
+ | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/dataFactories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactorydatafactories) | No | | |
+ | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/factories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactoryfactories) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatafactoryfactories) | | |
+ | [Azure Data Lake Analytics](../data-lake-analytics/index.yml) | Microsoft.DataLakeAnalytics/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakeanalyticsaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakeanalyticsaccounts) | | |
+ | [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) | Microsoft.DataLakeStore/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakestoreaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakestoreaccounts) | | |
+ | [Azure Data Share](../data-share/index.yml) | Microsoft.DataShare/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatashareaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatashareaccounts) | | |
+ | [Azure Database for MariaDB](../mariadb/index.yml) | Microsoft.DBforMariaDB/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformariadbservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformariadbservers) | | |
+ | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlflexibleservers) | | |
+ | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlservers) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlflexibleservers) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservergroupsv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservergroupsv2) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservers) | | |
+ | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | |
+ | [Microsoft Windows Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationapplicationgroups) | [Windows Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
+ | [Microsoft Windows Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/hostpools | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationhostpools) | [Windows Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | |
+ | [Microsoft Windows Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationworkspaces) | | |
+ | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpools) | No | | |
+ | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools/IotHubTenants | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpoolsiothubtenants) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdeviceselasticpoolsiothubtenants) | | |
+ | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/IotHubs | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesiothubs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesiothubs) | | |
+ | [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml) | Microsoft.Devices/ProvisioningServices | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesprovisioningservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices) | | |
+ | [Azure Digital Twins](../digital-twins/overview.md) | Microsoft.DigitalTwins/digitalTwinsInstances | [**Yes**](./essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdigitaltwinsdigitaltwinsinstances) | | |
+ | [Azure Cosmos DB](../cosmos-db/index.yml) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/domains | [**Yes**](./essentials/metrics-supported.md#microsofteventgriddomains) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgriddomains) | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/eventSubscriptions | [**Yes**](./essentials/metrics-supported.md#microsofteventgrideventsubscriptions) | No | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/extensionTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridextensiontopics) | No | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerNamespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnernamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnernamespaces) | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnertopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnertopics) | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/systemTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridsystemtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridsystemtopics) | | |
+ | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/topics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridtopics) | | |
+ | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/clusters | [**Yes**](./essentials/metrics-supported.md#microsofteventhubclusters) | No | | |
+ | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/namespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventhubnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventhubnamespaces) | | |
+ | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md#microsoftexperimentationexperimentworkspaces) | | |
+ | [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | |
+ | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | |
+ | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | |
+ | [StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworknetworkfunctions) | No | | |
+ | [StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
+ | [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | |
+ | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
+ | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | |
+ | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | |
+ | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | |
+ | [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | |
+ | [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | |
+ | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | |
+ | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | |
+ | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/workflows | [**Yes**](./essentials/metrics-supported.md#microsoftlogicworkflows) | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicworkflows) | | |
+ | [Azure Machine Learning](../machine-learning/index.yml) | Microsoft.MachineLearningServices/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmachinelearningservicesworkspaces) | | |
+ | [Azure Maps](../azure-maps/index.yml) | Microsoft.Maps/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | |
+ | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediamediaservices) | | |
+ | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | |
+ | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | |
+ | [Azure Media Services](../media-services/index.yml) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md#microsoftmediavideoanalyzers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediavideoanalyzers) | | |
+ | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | |
+ | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | |
+ | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | |
+ | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | |
+ | [Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | |
+ | [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | |
+ | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkbastionhosts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkbastionhosts) | | |
+ | [VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | |
+ | [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | |
+ | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | |
+ | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) | No | | |
+ | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRoutePorts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressrouteports) | No | | |
+ | [Azure Front Door](../frontdoor/index.yml) | Microsoft.Network/frontdoors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkfrontdoors) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkfrontdoors) | | |
+ | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/loadBalancers | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkloadbalancers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkloadbalancers) | | |
+ | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/natGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknatgateways) | No | | |
+ | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkInterfaces | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkinterfaces) | No | [Azure Network Insights](./insights/network-insights-overview.md) | |
+ | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworknetworksecuritygroups) | [Azure Network Insights](./insights/network-insights-overview.md) | |
+ | [Azure Network Watcher](../network-watcher/network-watcher-monitoring-overview.md) | Microsoft.Network/networkWatchers/connectionMonitors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkwatchersconnectionmonitors) | No | | |
+ | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/p2sVpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkp2svpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkp2svpngateways) | | |
+ | [Azure DNS Private Zones](../dns/private-dns-privatednszone.md) | Microsoft.Network/privateDnsZones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatednszones) | No | | |
+ | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivateendpoints) | No | | |
+ | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateLinkServices | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) | No | | |
+ | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/publicIPAddresses | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) | [Azure Network Insights](./insights/network-insights-overview.md) | |
+ | [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](./essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworktrafficmanagerprofiles) | | |
+ | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualhubs) | No | | |
+ | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/virtualNetworkGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworkgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworkgateways) | | |
+ | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualNetworks | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworks) | [Azure Network Insights](./insights/network-insights-overview.md) | |
+ | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualRouters | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualrouters) | No | | |
+ | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/vpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvpngateways) | | |
+ | [Azure Notification Hubs](../notification-hubs/index.yml) | Microsoft.NotificationHubs/namespaces/notificationHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnotificationhubsnamespacesnotificationhubs) | No | | |
+ | [Azure Monitor](./index.yml) | Microsoft.OperationalInsights/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftoperationalinsightsworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftoperationalinsightsworkspaces) | | |
+ | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peerings | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeerings) | No | | |
+ | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peeringServices | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeeringservices) | No | | |
+ | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenants) | | |
+ | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | |
+ | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | |
+ | [Azure Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
+ | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrecoveryservicesvaults) | | |
+ | [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | |
| [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md#microsoftresourcessubscriptions) | No | | |
- | [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsearchsearchservices) | | |
- | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | |
- | [Service Fabric](/azure/service-fabric/) | Microsoft.ServiceFabric | No | No | [Service Fabric](/azure/service-fabric/) | Agent required to monitor guest operating system and workflows.|
- | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsignalrservicesignalr) | | |
- | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsignalrservicewebpubsub) | | |
- | [Azure SQL Managed Instance](../azure-sql/database/monitoring-tuning-index.yml) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsqlmanagedinstances) | [Azure SQL insights](./insights/sql-insights-overview.md) | |
+ | [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | |
+ | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | |
+ | [Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
+ | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | |
+ | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | |
+ | [Azure SQL Managed Instance](../azure-sql/database/monitoring-tuning-index.yml) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL insights](./insights/sql-insights-overview.md) | |
| [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL insights](./insights/sql-insights-overview.md) | |
| [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL insights](./insights/sql-insights-overview.md) | |
| [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Blobs](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Queue Services](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Table Services](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Storage Blobs](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Storage Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Storage Queue Services](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Table Services](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure HPC Cache](../hpc-cache/index.yml) | Microsoft.StorageCache/caches | [**Yes**](./essentials/metrics-supported.md#microsoftstoragecachecaches) | No | | |
| [Azure Storage](../storage/index.yml) | Microsoft.StorageSync/storageSyncServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Stream Analytics](../stream-analytics/index.yml) | Microsoft.StreamAnalytics/streamingjobs | [**Yes**](./essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftstreamanalyticsstreamingjobs) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspaces) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsynapseworkspaces) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/bigDataPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsynapseworkspacesbigdatapools) | | |
- | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftsynapseworkspacessqlpools) | | |
- | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofttimeseriesinsightsenvironments) | | |
- | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsofttimeseriesinsightsenvironmentseventsources) | | |
+ | [Azure Stream Analytics](../stream-analytics/index.yml) | Microsoft.StreamAnalytics/streamingjobs | [**Yes**](./essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstreamanalyticsstreamingjobs) | | |
+ | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspaces) | | |
+ | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/bigDataPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacesbigdatapools) | | |
+ | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacessqlpools) | | |
+ | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironments) | | |
+ | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironmentseventsources) | | |
| [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftvmwarecloudsimplevirtualmachines) | No | | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/connections | [**Yes**](./essentials/metrics-supported.md#microsoftwebconnections) | No | | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
+ | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/multiRolePools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/workerPools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/serverFarms | [**Yes**](./essentials/metrics-supported.md#microsoftwebserverfarms) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites | [**Yes**](./essentials/metrics-supported.md#microsoftwebsites) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftwebsites) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
- | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites/slots | [**Yes**](./essentials/metrics-supported.md#microsoftwebsitesslots) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftwebsitesslots) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
+ | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites | [**Yes**](./essentials/metrics-supported.md#microsoftwebsites) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsites) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
+ | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites/slots | [**Yes**](./essentials/metrics-supported.md#microsoftwebsitesslots) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsitesslots) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/staticSites | [**Yes**](./essentials/metrics-supported.md#microsoftwebstaticsites) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
You can add a markdown tile to your Azure dashboards to display custom, static c
![Screenshot showing entering URL](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-url.png)

> [!NOTE]
- > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](/azure/storage/common/customer-managed-keys-configure-key-vault?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
+ > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
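If such a CORS rule is needed, it can be declared on the storage account's blob service in Bicep. The following is a minimal sketch only; the storage account name (`mdstorage`) and API version are illustrative assumptions, not values from this article:

```bicep
// Sketch: assumes an existing storage account named 'mdstorage'.
resource storage 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
  name: 'mdstorage'
}

// Allow the Azure portal origin to GET the markdown blob.
resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-06-01' = {
  parent: storage
  name: 'default'
  properties: {
    cors: {
      corsRules: [
        {
          allowedOrigins: [ 'https://portal.azure.com' ]
          allowedMethods: [ 'GET' ]
          allowedHeaders: [ '*' ]
          exposedHeaders: [ '*' ]
          maxAgeInSeconds: 3600
        }
      ]
    }
  }
}
```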
1. Select **Done** to dismiss the **Edit Markdown** pane. Your content appears on the Markdown tile, which you can resize by dragging the handle in the lower right-hand corner.
You can use any combination of plain text, Markdown syntax, and HTML content on
## Next steps

- Learn more about [creating dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
-- Learn how to [share a dashboard by using Azure role-based access control](azure-portal-dashboard-share-access.md).
+- Learn how to [share a dashboard by using Azure role-based access control](azure-portal-dashboard-share-access.md).
azure-portal Networking Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/networking-quota-requests.md
Follow these instructions to create a networking quota increase request from **U
## Next steps

- Review details on [networking limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-- Learn about [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-portal Per Vm Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/per-vm-quota-requests.md
To request multiple increases together, first go to the **Usage + quotas** page
## Next steps

-- Learn more about [vCPU quotas](/azure/virtual-machines/windows/quotas).
-- Learn about [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits).
+- Learn more about [vCPU quotas](../../virtual-machines/windows/quotas.md).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-portal Spot Quota https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/spot-quota.md
From there, follow the steps as described above to complete your spot quota incr
## Next steps

- Learn more about [Azure spot virtual machines](../../virtual-machines/spot-vms.md).
-- Learn about [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits).
+- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
azure-resource-manager Linter Rule Use Protectedsettings For Commandtoexecute Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md
Use the following value in the [Bicep configuration file](bicep-config-linter.md
For custom script resources, the `commandToExecute` value should be placed under the `protectedSettings` property object instead of the `settings` property object if it includes secret data such as a password. For example, secret data could be found in secure parameters, in [`list*`](./bicep-functions-resource.md#list) functions such as listKeys, or in custom script arguments.
-Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Custom Script Extension for Windows](/azure/virtual-machines/extensions/custom-script-windows), and [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](/azure/virtual-machines/extensions/custom-script-linux).
+Don't use secret data in the `settings` object because it uses clear text. For more information, see [Microsoft.Compute virtualMachines/extensions](/azure/templates/microsoft.compute/virtualmachines/extensions), [Custom Script Extension for Windows](../../virtual-machines/extensions/custom-script-windows.md), and [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](../../virtual-machines/extensions/custom-script-linux.md).
The following example fails because `commandToExecute` is specified under `settings` and uses a secure parameter.
resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019
} } }
-```
+```
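For contrast, a passing variant can be sketched in Bicep. The VM name, extension type, and script arguments below are illustrative assumptions; the point is only that `commandToExecute` sits under `protectedSettings`:

```bicep
@secure()
param adminPassword string

// Sketch with assumed names: because commandToExecute is under
// protectedSettings, the secure parameter is not stored as clear text.
resource vmExt 'Microsoft.Compute/virtualMachines/extensions@2021-11-01' = {
  name: 'myVm/customScript'
  location: resourceGroup().location
  properties: {
    publisher: 'Microsoft.Compute'
    type: 'CustomScriptExtension'
    typeHandlerVersion: '1.10'
    protectedSettings: {
      commandToExecute: 'powershell -File setup.ps1 -Password "${adminPassword}"'
    }
  }
}
```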
azure-resource-manager Scenarios Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/scenarios-rbac.md
A role assignment's resource name must be a globally unique identifier (GUID). I
### Role definition ID
-The role you assign can be a built-in role definition or a [custom role definition](#custom-role-definitions). To use a built-in role definition, [find the appropriate role definition ID](/azure/role-based-access-control/built-in-roles). For example, the *Contributor* role has a role definition ID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
+The role you assign can be a built-in role definition or a [custom role definition](#custom-role-definitions). To use a built-in role definition, [find the appropriate role definition ID](../../role-based-access-control/built-in-roles.md). For example, the *Contributor* role has a role definition ID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
When you create the role assignment resource, you need to specify a fully qualified resource ID. Built-in role definition IDs are subscription-scoped resources. It's a good practice to use an `existing` resource to refer to the built-in role, and to access its fully qualified resource ID by using the `.id` property:
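That `existing`-resource pattern can be sketched as follows; the `principalId` parameter and the `principalType` value are assumptions for illustration:

```bicep
param principalId string

// Refer to the built-in Contributor role definition by its well-known GUID.
resource contributorRoleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
  scope: subscription()
  name: 'b24988ac-6180-42a0-ab88-20f7382dd24c'
}

// Assign the role; the name must be a deterministic, unique GUID.
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(resourceGroup().id, principalId, contributorRoleDefinition.id)
  properties: {
    roleDefinitionId: contributorRoleDefinition.id
    principalId: principalId
    principalType: 'ServicePrincipal'
  }
}
```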
Role definition resource names must be unique within the Azure Active Directory
- [Create a new role def via a subscription level deployment](https://azure.microsoft.com/resources/templates/create-role-def/)
- [Assign a role at subscription scope](https://azure.microsoft.com/resources/templates/subscription-role-assignment/)
- [Assign a role at tenant scope](https://azure.microsoft.com/resources/templates/tenant-role-assignment/)
- - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
+ - [Create a resourceGroup, apply a lock and RBAC](https://azure.microsoft.com/resources/templates/create-rg-lock-role-assignment/)
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
-| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
+| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) |
| Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) |
| Microsoft.ObjectStore | Object Store |
ResourceType : Microsoft.KeyVault/vaults
## Next steps
-For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
+For more information about resource providers, including how to register a resource provider, see [Azure resource providers and types](resource-providers-and-types.md).
azure-sql Authentication Azure Ad User Assigned Managed Identity Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity-create-server.md
Last updated 12/15/2021
> [!NOTE]
> User-assigned managed identity for Azure SQL is in **public preview**. If you're looking for a guide on Azure SQL Managed Instance, see [Create an Azure SQL Managed Instance with a user-assigned managed identity](../managed-instance/authentication-azure-ad-user-assigned-managed-identity-create-managed-instance.md).
-This how-to guide outlines the steps to create a [logical server](logical-servers.md) for Azure SQL Database with a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types). For more information on the benefits of using a user-assigned managed identity for the server identity in Azure SQL Database, see [User-assigned managed identity in Azure AD for Azure SQL](authentication-azure-ad-user-assigned-managed-identity.md).
+This how-to guide outlines the steps to create a [logical server](logical-servers.md) for Azure SQL Database with a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). For more information on the benefits of using a user-assigned managed identity for the server identity in Azure SQL Database, see [User-assigned managed identity in Azure AD for Azure SQL](authentication-azure-ad-user-assigned-managed-identity.md).
## Prerequisites

- To provision a SQL Database server with a user-assigned managed identity, the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role (or a role with greater permissions), along with an Azure RBAC role containing the following action is required:
- - Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action
- - For example, the [Managed Identity Operator](/azure/role-based-access-control/built-in-roles#managed-identity-operator) has this action.
-- Create a user-assigned managed identity and assign it the necessary permission to be a server or managed instance identity. For more information, see [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) and [user-assigned managed identity permissions for Azure SQL](authentication-azure-ad-user-assigned-managed-identity.md#permissions).
+ - Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action
+ - For example, the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) has this action.
+- Create a user-assigned managed identity and assign it the necessary permission to be a server or managed instance identity. For more information, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) and [user-assigned managed identity permissions for Azure SQL](authentication-azure-ad-user-assigned-managed-identity.md#permissions).
- [Az.Sql module 3.4](https://www.powershellgallery.com/packages/Az.Sql/3.4.0) or higher is required when using PowerShell for user-assigned managed identities.
- [The Azure CLI 2.26.0](/cli/azure/install-azure-cli) or higher is required to use the Azure CLI with user-assigned managed identities.
- For a list of limitations and known issues with using user-assigned managed identity, see [User-assigned managed identity in Azure AD for Azure SQL](authentication-azure-ad-user-assigned-managed-identity.md#limitations-and-known-issues)
azure-sql Authentication Azure Ad User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md
Last updated 12/15/2021
> [!NOTE]
> User-assigned managed identity for Azure SQL is in **public preview**.
-Azure Active Directory (AD) supports two types of managed identities: System-assigned managed identity (SMI) and user-assigned managed identity (UMI). For more information, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+Azure Active Directory (AD) supports two types of managed identities: System-assigned managed identity (SMI) and user-assigned managed identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
When using Azure AD authentication with Azure SQL Managed Instance, a managed identity must be assigned to the server identity. Previously, only a system-assigned managed identity could be assigned to the Managed Instance or SQL Database server identity. With support for user-assigned managed identity, the UMI can be assigned to Azure SQL Managed Instance or Azure SQL Database as the instance or server identity. This feature is now supported for SQL Database.
There are several benefits of using UMI as a server identity.
## Creating a user-assigned managed identity
-For information on how to create a user-assigned managed identity, see [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
+For information on how to create a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
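A minimal sketch of creating the identity with the Azure CLI (resource group and identity name are placeholders):

```azurecli
# Create the user-assigned managed identity.
az identity create --resource-group myRG --name myUMI

# Record the principal ID; later permission grants (for example, on a key vault) need it.
az identity show --resource-group myRG --name myUMI --query principalId -o tsv
```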
## Permissions
azure-sql Authentication Mfa Ssms Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-mfa-ssms-overview.md
After the database user or group is created, then the user `steve@gmail.com` can
- SSMS version 17.2 provides DacFx Wizard support for Export/Extract/Deploy Data database. Once a specific user is authenticated through the initial authentication dialog using Universal Authentication, the DacFx Wizard functions the same way it does for all other authentication methods.
- The SSMS Table Designer does not support Universal Authentication.
- There are no additional software requirements for Active Directory Universal Authentication except that you must use a supported version of SSMS.
-- See the following link for the latest Microsoft Authentication Library (MSAL) version for Universal authentication: [Overview of the Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview#languages-and-frameworks).
+- See the following link for the latest Microsoft Authentication Library (MSAL) version for Universal authentication: [Overview of the Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md#languages-and-frameworks).
## Next steps
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 1/10/2022 Last updated : 1/14/2022 # Hyperscale service tier
The log service accepts transaction log records from the primary compute replica
### Azure storage
-Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This storage is used for backup purposes, as well as for replication between Azure regions. Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast regardless of data size. Data can be restored to any point in time within the backup retention period of the database.
+Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This storage is used for backup purposes, as well as for replication between Azure regions. Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast regardless of data size. A database can be restored to any point in time within its backup retention period.
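For example, a point-in-time restore to a new database can be sketched with the Azure CLI. Names and the timestamp are placeholders; the timestamp must fall within the database's retention period:

```azurecli
az sql db restore \
    --resource-group myRG \
    --server myserver \
    --name mydb \
    --dest-name mydb-restored \
    --time "2022-01-10T13:10:00Z"
```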
## Backup and restore
azure-sql Transparent Data Encryption Byok Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-create-server.md
Last updated 12/16/2021
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This how-to guide outlines the steps to create an Azure SQL logical [server](logical-servers.md) configured with transparent data encryption (TDE) with customer-managed keys (CMK) using a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) to access [Azure Key Vault](/azure/key-vault/general/quick-create-portal).
+This how-to guide outlines the steps to create an Azure SQL logical [server](logical-servers.md) configured with transparent data encryption (TDE) with customer-managed keys (CMK) using a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) to access [Azure Key Vault](../../key-vault/general/quick-create-portal.md).
## Prerequisites
-- This how-to guide assumes that you've already created an [Azure Key Vault](/azure/key-vault/general/quick-create-portal) and imported a key into it to use as the TDE protector for Azure SQL Database. For more information, see [transparent data encryption with BYOK support](transparent-data-encryption-byok-overview.md).
-- You must have created a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and provided it the required TDE permissions (*Get, Wrap Key, Unwrap Key*) on the above key vault. For creating a user-assigned managed identity, see [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
+- This how-to guide assumes that you've already created an [Azure Key Vault](../../key-vault/general/quick-create-portal.md) and imported a key into it to use as the TDE protector for Azure SQL Database. For more information, see [transparent data encryption with BYOK support](transparent-data-encryption-byok-overview.md).
+- You must have created a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and provided it the required TDE permissions (*Get, Wrap Key, Unwrap Key*) on the above key vault. For creating a user-assigned managed identity, see [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
- You must have Azure PowerShell installed and running.
- [Recommended but optional] Create the key material for the TDE protector in a hardware security module (HSM) or local key store first, and import the key material to Azure Key Vault. Follow the [instructions for using a hardware security module (HSM) and Key Vault](../../key-vault/general/overview.md) to learn more.
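Granting the UMI the required TDE permissions on the key vault (access-policy permission model) can be sketched as follows; all names are placeholders:

```azurecli
# Look up the identity's service principal.
principal_id=$(az identity show --resource-group myRG --name myUMI --query principalId -o tsv)

# Grant the Get, Wrap Key, and Unwrap Key permissions on the vault.
az keyvault set-policy \
    --name mykeyvault \
    --object-id $principal_id \
    --key-permissions get wrapKey unwrapKey
```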
To get your user-assigned managed identity **Resource ID**, search for **Managed
## Next steps
-- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault](transparent-data-encryption-byok-configure.md).
+- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault](transparent-data-encryption-byok-configure.md).
azure-sql Transparent Data Encryption Byok Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-identity.md
Last updated 12/16/2021
> [!NOTE]
> Assigning a user-assigned managed identity for Azure SQL logical servers and Managed Instances is in **public preview**.
-Managed identities in Azure Active Directory (Azure AD) provide Azure services with an automatically managed identity in Azure AD. This identity can be used to authenticate to any service that supports Azure AD authentication, such as [Azure Key Vault](/azure/key-vault/general/overview), without any credentials in the code. For more information, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) in Azure.
+Managed identities in Azure Active Directory (Azure AD) provide Azure services with an automatically managed identity in Azure AD. This identity can be used to authenticate to any service that supports Azure AD authentication, such as [Azure Key Vault](../../key-vault/general/overview.md), without any credentials in the code. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
Managed Identities can be of two types:
For [TDE with customer-managed key (CMK)](transparent-data-encryption-byok-overv
In addition to the system-assigned managed identity that is already supported for TDE with CMK, a user-assigned managed identity (UMI) that is assigned to the server can be used to allow the server to access the key vault. A prerequisite to enable key vault access is to ensure the user-assigned managed identity has been provided the *Get*, *wrapKey* and *unwrapKey* permissions on the key vault. Since the user-assigned managed identity is a standalone resource that can be created and granted access to the key vault, [TDE with a customer-managed key can now be enabled at creation time for the server or database](transparent-data-encryption-byok-create-server.md).
> [!NOTE]
-> For assigning a user-assigned managed identity to the logical server or managed instance, a user must have the [SQL Server Contributor](/azure/role-based-access-control/built-in-roles#sql-server-contributor) or [SQL Managed Instance Contributor](/azure/role-based-access-control/built-in-roles#sql-managed-instance-contributor) Azure RBAC role along with any other Azure RBAC role containing the **Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action** action.
+> For assigning a user-assigned managed identity to the logical server or managed instance, a user must have the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) or [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) Azure RBAC role along with any other Azure RBAC role containing the **Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action** action.
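A role carrying the assign action can be granted with a standard role assignment; a minimal sketch, assuming a placeholder user and scope:

```azurecli
# Managed Identity Operator includes
# Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action.
az role assignment create \
    --assignee user@contoso.com \
    --role "Managed Identity Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myRG"
```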
## Benefits of using UMI for customer-managed TDE
In addition to the system-assigned managed identity that is already supported fo
## Next steps
> [!div class="nextstepaction"]
-> [Create Azure SQL database configured with user-assigned managed identity and customer-managed TDE](transparent-data-encryption-byok-create-server.md)
-
+> [Create Azure SQL database configured with user-assigned managed identity and customer-managed TDE](transparent-data-encryption-byok-create-server.md)
azure-sql Transparent Data Encryption Byok Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
Auditors can use Azure Monitor to review key vault AuditEvent logs, if logging i
> Both Soft-delete and Purge protection must be enabled on the key vault(s) for servers being configured with customer-managed TDE, as well as existing servers using customer-managed TDE.
- Grant the server or managed instance access to the key vault (*get*, *wrapKey*, *unwrapKey*) using its Azure Active Directory identity. The server identity can be a system-assigned managed identity or a user-assigned managed identity assigned to the server. When using the Azure portal, the Azure AD identity gets automatically created when the server is created. When using PowerShell or Azure CLI, the Azure AD identity must be explicitly created and should be verified. See [Configure TDE with BYOK](transparent-data-encryption-byok-configure.md) and [Configure TDE with BYOK for SQL Managed Instance](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md) for detailed step-by-step instructions when using PowerShell.
- - Depending on the permission model of the key vault (access policy or Azure RBAC), key vault access can be granted either by creating an access policy on the key vault, or by creating a new Azure RBAC role assignment with the role [Key Vault Crypto Service Encryption User](/azure/key-vault/general/rbac-guide#azure-built-in-roles-for-key-vault-data-plane-operations).
+ - Depending on the permission model of the key vault (access policy or Azure RBAC), key vault access can be granted either by creating an access policy on the key vault, or by creating a new Azure RBAC role assignment with the role [Key Vault Crypto Service Encryption User](../../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations).
- When using firewall with AKV, you must enable option *Allow trusted Microsoft services to bypass the firewall*.
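Under the Azure RBAC permission model, the grant and the firewall bypass might look like the following sketch (identity and vault names are placeholders):

```azurecli
kv_id=$(az keyvault show --name mykeyvault --query id -o tsv)

# Grant the server identity the key vault data-plane crypto role.
az role assignment create \
    --assignee-object-id "<server-identity-principal-id>" \
    --role "Key Vault Crypto Service Encryption User" \
    --scope $kv_id

# Let trusted Microsoft services bypass the key vault firewall.
az keyvault update --name mykeyvault --bypass AzureServices
```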
You may also want to check the following PowerShell sample scripts for the commo
- [Remove a transparent data encryption (TDE) protector for SQL Database](transparent-data-encryption-byok-remove-tde-protector.md)
-- [Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
+- [Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
azure-sql Authentication Azure Ad User Assigned Managed Identity Create Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/authentication-azure-ad-user-assigned-managed-identity-create-managed-instance.md
Last updated 12/15/2021
> [!NOTE]
> User-assigned managed identity for Azure SQL is in **public preview**. If you are looking for a guide on Azure SQL Database, see [Create an Azure SQL logical server using a user-assigned managed identity](../database/authentication-azure-ad-user-assigned-managed-identity-create-server.md).
-This how-to guide outlines the steps to create an [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md) with a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types). For more information on the benefits of using a user-assigned managed identity for the server identity in Azure SQL Database, see [User-assigned managed identity in Azure AD for Azure SQL](../database/authentication-azure-ad-user-assigned-managed-identity.md).
+This how-to guide outlines the steps to create an [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md) with a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). For more information on the benefits of using a user-assigned managed identity for the server identity in Azure SQL Database, see [User-assigned managed identity in Azure AD for Azure SQL](../database/authentication-azure-ad-user-assigned-managed-identity.md).
## Prerequisites
- To provision a Managed Instance with a user-assigned managed identity, the [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) role (or a role with greater permissions), along with an Azure RBAC role containing the following action is required:
- - Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the [Managed Identity Operator](/azure/role-based-access-control/built-in-roles#managed-identity-operator) has this action.
-- Create a user-assigned managed identity and assign it the necessary permission to be a server or managed instance identity. For more information, see [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) and [user-assigned managed identity permissions for Azure SQL](../database/authentication-azure-ad-user-assigned-managed-identity.md#permissions).
+ - Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) has this action.
+- Create a user-assigned managed identity and assign it the necessary permission to be a server or managed instance identity. For more information, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) and [user-assigned managed identity permissions for Azure SQL](../database/authentication-azure-ad-user-assigned-managed-identity.md#permissions).
- [Az.Sql module 3.4](https://www.powershellgallery.com/packages/Az.Sql/3.4.0) or higher is required when using PowerShell for user-assigned managed identities.
- [The Azure CLI 2.26.0](/cli/azure/install-azure-cli) or higher is required to use the Azure CLI with user-assigned managed identities.
- For a list of limitations and known issues with using user-assigned managed identity, see [User-assigned managed identity in Azure AD for Azure SQL](../database/authentication-azure-ad-user-assigned-managed-identity.md#limitations-and-known-issues)
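Putting the prerequisites together, a hedged sketch of creating the instance with a UMI as its identity; the identity parameters are an assumption from the preview tooling, and all other values are placeholders:

```azurecli
umi_id=$(az identity show --resource-group myRG --name myUMI --query id -o tsv)

az sql mi create \
    --resource-group myRG \
    --name mymanagedinstance \
    --subnet "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/ManagedInstance" \
    --admin-user myadmin \
    --admin-password '<password>' \
    --identity-type UserAssigned \
    --user-assigned-identity-id $umi_id \
    --primary-user-assigned-identity-id $umi_id
```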
azure-video-analyzer Pipeline Topologies List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/pipeline-topologies-list.md
Name | Description | Samples | VSCode Name
:-- | :- | :- | :-
[grpcExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/grpcExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. The gRPC extension allows you to create images at video frame rate from the camera, which are sent to the [OpenVINO™ DL Streamer - Edge AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video with Intel OpenVINO™ DL Streamer – Edge AI Extension](edge/use-intel-grpc-video-analytics-serving-tutorial.md) |
[httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtension/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to an external AI inference engine. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - HTTP](edge/analyze-live-video-use-your-model-http.md), [Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision](edge/analyze-live-video-custom-vision.md) | [Analyze video using HTTP Extension](./visual-studio-code-extension.md#analyze-video-using-http-extension)
-[httpExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to the [OpenVINO™ Model Server – AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video using OpenVINO™ Model Server – AI Extension from Intel](https://aka.ms/ava-intel-ovms-tutorial) | [Analyze video with Intel OpenVINO Model Server](./visual-studio-code-extension.md#analyze-video-with-intel-openvino-model-server)
+[httpExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to the [OpenVINO™ Model Server – AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video using OpenVINO™ Model Server – AI Extension from Intel](./edge/use-intel-openvino-tutorial.md) | [Analyze video with Intel OpenVINO Model Server](./visual-studio-code-extension.md#analyze-video-with-intel-openvino-model-server)
### Computer vision
Name | Description | Samples | VSCode Name
:-- | :- | :- | :-
[spatial-analysis/person-count-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that counts people in a designated zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person count operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-count-operation-with-computer-vision-for-spatial-analysis)
[spatial-analysis/person-line-crossing-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-line-crossing-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that tracks when a person crosses a designated line. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person crossing line operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-crossing-line-operation-with-computer-vision-for-spatial-analysis)
-[spatial-analysis/person-zone-crossing-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-zone-crossing-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that emits an event when a person enters or exists a zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | [Live Video with Computer Vision for Spatial Analysis](https://aka.ms/ava-spatial-analysis) | [Person crossing zone operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-crossing-zone-operation-with-computer-vision-for-spatial-analysis)
+[spatial-analysis/person-zone-crossing-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-zone-crossing-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that emits an event when a person enters or exits a zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | [Live Video with Computer Vision for Spatial Analysis](./edge/computer-vision-for-spatial-analysis.md) | [Person crossing zone operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-crossing-zone-operation-with-computer-vision-for-spatial-analysis)
[spatial-analysis/person-distance-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-distance-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that tracks when people violate a distance rule. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person distance operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-distance-operation-with-computer-vision-for-spatial-analysis)
[spatial-analysis/custom-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/custom-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that carries out a supported AI operation. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Custom operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#custom-operation-with-computer-vision-for-spatial-analysis)
Name | Description | Samples | VSCode Name
## Next steps
-[Understand Video Analyzer pipelines](pipeline.md).
+[Understand Video Analyzer pipelines](pipeline.md).
azure-video-analyzer Policy Definitions Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/policy-definitions-security.md
The following built-in policy definitions are available for use with Video Analy
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Video Analyzer accounts should use customer-managed keys to encrypt data at rest.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F165a4137-c3ed-4fd0-a17f-1c8a80266580) |Use customer-managed keys to manage the encryption at rest of your Video Analyzer accounts. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/videoanalyzerscmkdocs](https://aka.ms/videoanalyzerscmkdocs). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Video%20Analyzers/VideoAnalyzer_CustomerManagedKey_Audit.json) |
+|[Video Analyzer accounts should use customer-managed keys to encrypt data at rest.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F165a4137-c3ed-4fd0-a17f-1c8a80266580) |Use customer-managed keys to manage the encryption at rest of your Video Analyzer accounts. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/videoanalyzerscmkdocs](./customer-managed-keys.md). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Video%20Analyzers/VideoAnalyzer_CustomerManagedKey_Audit.json) |
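The built-in definition above can be assigned at a scope with the Azure CLI; the definition GUID comes from the portal link in the table, and the scope is a placeholder:

```azurecli
az policy assignment create \
    --name video-analyzer-cmk-audit \
    --policy 165a4137-c3ed-4fd0-a17f-1c8a80266580 \
    --scope "/subscriptions/<subscription-id>"
```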
## Create a policy assignment
To remove the assignment created, follow these steps:
## See Also
-- Read more about [customer managed keys](customer-managed-keys.md).
+- Read more about [customer managed keys](customer-managed-keys.md).
azure-video-analyzer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/deploy-with-arm-template.md
The resource will be deployed to your subscription and will create the Azure Vid
If you're new to Azure Video Analyzer for Media (formerly Video Indexer), see:
-* [Azure Video Analyzer for Media Documentation](https://aka.ms/vi-docs)
-* [Azure Video Analyzer for Media Developer Portal](https://aka.ms/vi-docs)
+* [Azure Video Analyzer for Media Documentation](/azure/azure-video-analyzer/video-analyzer-for-media-docs/)
+* [Azure Video Analyzer for Media Developer Portal](/azure/azure-video-analyzer/video-analyzer-for-media-docs/)
* After completing this tutorial, head to other Azure Video Analyzer for Media samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
If you're new to template deployment, see:
If you're new to template deployment, see:
## Next steps
-[Connect an existing classic paid Video Analyzer for Media account to ARM-based account](connect-classic-account-to-arm.md)
+[Connect an existing classic paid Video Analyzer for Media account to ARM-based account](connect-classic-account-to-arm.md)
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-security-recommendations.md
Use the following guidelines and links for general security recommendations for
| :-- | :-- |
| Review and follow VMware Security Best Practices | It's important to stay updated on Azure security practices and [VMware Security Best Practices](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-412EF981-D4F1-430B-9D09-A4679C2D04E7.html). |
| Keep up to date on VMware Security Advisories | Subscribe to VMware notifications in my.vmware.com and regularly review and remediate any [VMware Security Advisories](https://www.vmware.com/security/advisories.html). |
-| Enable Microsoft Defender for Cloud | [Microsoft Defender for Cloud](https://docs.microsoft.com/azure/defender-for-cloud/) provides unified security management and advanced threat protection across hybrid cloud workloads. |
+| Enable Microsoft Defender for Cloud | [Microsoft Defender for Cloud](../defender-for-cloud/index.yml) provides unified security management and advanced threat protection across hybrid cloud workloads. |
| Follow the Microsoft Security Response Center blog | [Microsoft Security Response Center](https://msrc-blog.microsoft.com/) |
-| Review and implement recommendations within the Azure Security Baseline for Azure VMware Solution | [Azure security baseline for VMware Solution](https://docs.microsoft.com/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
+| Review and implement recommendations within the Azure Security Baseline for Azure VMware Solution | [Azure security baseline for VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
## Network
The following are network-related security recommendations for Azure VMware Solu
| **Recommendation** | **Comments** |
| :-- | :-- |
| Only allow trusted networks | Only allow access to your environments over ExpressRoute or other secured networks. Avoid exposing your management services like vCenter, for example, on the internet. |
-| Use Azure Firewall Premium | If you must expose management services on the internet, use [Azure Firewall Premium](https://docs.microsoft.com/azure/firewall/premium-migrate/) with both IDPS Alert and Deny mode along with TLS inspection for proactive threat detection. |
-| Deploy and configure Network Security Groups on VNET | Ensure any VNET deployed has [Network Security Groups](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview/) configured to control ingress and egress to your environment. |
-| Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](https://docs.microsoft.com/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
+| Use Azure Firewall Premium | If you must expose management services on the internet, use [Azure Firewall Premium](../firewall/premium-migrate.md) with both IDPS Alert and Deny mode along with TLS inspection for proactive threat detection. |
+| Deploy and configure Network Security Groups on VNET | Ensure any VNET deployed has [Network Security Groups](../virtual-network/network-security-groups-overview.md) configured to control ingress and egress to your environment. |
+| Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
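The NSG recommendation above can be sketched with the Azure CLI (all resource names are placeholders):

```azurecli
# Create the network security group.
az network nsg create --resource-group myRG --name myNSG

# Attach it to a subnet so its rules control ingress and egress.
az network vnet subnet update \
    --resource-group myRG \
    --vnet-name myVNet \
    --name mySubnet \
    --network-security-group myNSG
```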
## HCX
See the following information for recommendations to secure your HCX deployment.
| **Recommendation** | **Comments** |
| :-- | :-- |
-| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
-
+| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-identity-source-vcenter.md
In this how-to, you learn how to:
- If you use FQDN, enable DNS resolution on your on-premises AD.
+ - Enable DNS Forwarder from Azure portal Ref: Configure DNS forwarder for Azure VMware Solution - Azure VMware Solution | Microsoft Docs
## List external identity
Now that you've learned about how to configure LDAP and LDAPS, you can learn mor
- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
-
backup About Azure Vm Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/about-azure-vm-restore.md
This article describes how the [Azure Backup service](./backup-overview.md) rest
| **Scenario** | **What is done** | **When to use** |
| --- | --- | --- |
-| [Restore to create a new virtual machine](./backup-azure-arm-restore-vms.md) | Restores the entire VM to OLR (if the source VM still exists) or ALR | <ul><li> If the source VM is lost or corrupt, then you can restore entire VM <li> You can create a copy of the VM <li> You can perform a restore drill for audit or compliance <li> If license for Marketplace Azure VM has expired, [create VM restore](/azure/backup/backup-azure-arm-restore-vms#create-a-vm) option can't be used.</ul> |
+| [Restore to create a new virtual machine](./backup-azure-arm-restore-vms.md) | Restores the entire VM to OLR (if the source VM still exists) or ALR | <ul><li> If the source VM is lost or corrupt, then you can restore entire VM <li> You can create a copy of the VM <li> You can perform a restore drill for audit or compliance <li> If license for Marketplace Azure VM has expired, [create VM restore](./backup-azure-arm-restore-vms.md#create-a-vm) option can't be used.</ul> |
| [Restore disks of the VM](./backup-azure-arm-restore-vms.md#restore-disks) | Restore disks attached to the VM | All disks: This option creates the template and restores the disk. You can edit this template with special configurations (for example, availability sets) to meet your requirements and then use both the template and restore the disk to recreate the VM. |
| [Restore specific files within the VM](./backup-azure-restore-files-from-vm.md) | Choose restore point, browse, select files, and restore them to the same (or compatible) OS as the backed-up VM. | If you know which specific files to restore, then use this option instead of restoring the entire VM. |
| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <ul><li> [Encrypted VM with Azure Active Directory](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Azure AD](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Azure AD* migrated to *without Azure AD*](../virtual-machines/windows/disk-encryption-faq.yml#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app-)</ul> |
This article describes how the [Azure Backup service](./backup-overview.md) rest
- [Frequently asked questions about VM restore](./backup-azure-vm-backup-faq.yml)
- [Supported restore methods](./backup-support-matrix-iaas.md#supported-restore-methods)
-- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
+- [Troubleshoot restore issues](./backup-azure-vms-troubleshoot.md#restore)
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
If you try to delete the vault without removing the dependencies, you'll encount
- Vault cannot be deleted as there are existing resources within the vault. Please ensure there are no backup items, protected servers, or backup management servers associated with this vault. Unregister the following containers associated with this vault before proceeding for deletion.
-- Recovery Services vault cannot be deleted as there are backup items in soft deleted state in the vault. The soft deleted items are permanently deleted after 14 days of delete operation. Please try vault deletion after the backup items are permanently deleted and there is no item in soft deleted state left in the vault. For more information, see [Soft delete for Azure Backup](/azure/backup/backup-azure-security-feature-cloud).
+- Recovery Services vault cannot be deleted as there are backup items in soft deleted state in the vault. The soft deleted items are permanently deleted after 14 days of delete operation. Please try vault deletion after the backup items are permanently deleted and there is no item in soft deleted state left in the vault. For more information, see [Soft delete for Azure Backup](./backup-azure-security-feature-cloud.md).
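The 14-day soft-delete window described in the error message above can be expressed as a quick calculation. This is a minimal illustrative sketch (not part of the original article); the helper name is hypothetical:

```python
from datetime import datetime, timedelta

# Per the error message above, soft-deleted items are purged after 14 days.
SOFT_DELETE_RETENTION = timedelta(days=14)

def earliest_vault_deletion(item_deleted_at: datetime) -> datetime:
    """Earliest time the vault can be deleted once the last backup item
    enters the soft-deleted state (hypothetical helper, for illustration)."""
    return item_deleted_at + SOFT_DELETE_RETENTION
```

For example, an item soft-deleted on 1 March 2022 blocks vault deletion until 15 March 2022.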
## Delete a Recovery Services vault
Choose a client:
>The following operation is destructive and can't be undone. All backup data and backup items associated with the protected server will be permanently deleted. Proceed with caution.
>[!Note]
->If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](/azure/backup/backup-azure-delete-vault?tabs=powershell#script-for-delete-vault). The script will delete all backup items recursively and eventually the entire vault.
+>If you're sure that all backed-up items in the vault are no longer required and want to delete them at once without reviewing, [run this PowerShell script](?tabs=powershell#script-for-delete-vault). The script will delete all backup items recursively and eventually the entire vault.
To delete a vault, follow these steps:
Alternately, go to the blades manually by following the steps below.
-- <a id="portal-mua">**Step 2**</a>: If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](/azure/backup/multi-user-authorization#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- <a id="portal-mua">**Step 2**</a>: If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
- <a id="portal-disable-soft-delete">**Step 3**</a>: Disable the soft delete and Security features
- 1. Go to **Properties** -> **Security Settings** and disable the **Soft Delete** feature if enabled. See [how to disable soft delete](/azure/backup/backup-azure-security-feature-cloud#enabling-and-disabling-soft-delete).
- 1. Go to **Properties** -> **Security Settings** and disable **Security Features**, if enabled. [Learn more](/azure/backup/backup-azure-security-feature)
+ 1. Go to **Properties** -> **Security Settings** and disable the **Soft Delete** feature if enabled. See [how to disable soft delete](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete).
+ 1. Go to **Properties** -> **Security Settings** and disable **Security Features**, if enabled. [Learn more](./backup-azure-security-feature.md)
- <a id="portal-delete-cloud-protected-items">**Step 4**</a>: Delete Cloud protected items
- 1. **Delete Items in soft-deleted state**: After disabling soft delete, check if there are any items previously remaining in the soft deleted state. If there are items in soft deleted state, then you need to *undelete* and *delete* them again. [Follow these steps](/azure/backup/backup-azure-security-feature-cloud#using-azure-portal) to find soft delete items and permanently delete them.
+ 1. **Delete Items in soft-deleted state**: After disabling soft delete, check if there are any items previously remaining in the soft deleted state. If there are items in soft deleted state, then you need to *undelete* and *delete* them again. [Follow these steps](./backup-azure-security-feature-cloud.md#using-azure-portal) to find soft delete items and permanently delete them.
:::image type="content" source="./media/backup-azure-delete-vault/delete-items-in-soft-delete-state-inline.png" alt-text="Screenshot showing the process to delete items in soft-delete state." lightbox="./media/backup-azure-delete-vault/delete-items-in-soft-delete-state-expanded.png":::
To delete a vault, follow these steps:
- **Step 8**: Delete vault
- After you've completed these steps, you can continue to [delete the vault](/azure/backup/backup-azure-delete-vault?tabs=portal#delete-the-recovery-services-vault).
+ After you've completed these steps, you can continue to [delete the vault](?tabs=portal#delete-the-recovery-services-vault).
- If you're **still unable to delete the vault** that contains no dependencies, then follow the steps listed in [**deleting vault using Azure Resource Manager client**](/azure/backup/backup-azure-delete-vault?tabs=arm#tabpanel_1_arm).
+ If you're **still unable to delete the vault** that contains no dependencies, then follow the steps listed in [**deleting vault using Azure Resource Manager client**](?tabs=arm#tabpanel_1_arm).
### Delete protected items in the cloud
First, read the **[Before you start](#before-you-start)** section to understand
>[!Note]
>- To download the PowerShell file to delete your vault, go to vault **Overview** -> **Delete** -> **Delete using PowerShell Script**, and then click **Generate and Download Script** as shown in the screenshot below. This generates a customized script specific to the vault, which requires no additional changes. You can run the script in the PowerShell console by switching to the downloaded script's directory and running the file using: _.\NameofFile.ps1_
->- Ensure PowerShell version 7 or later and the latest _Az module_ are installed. To install the same, see the [instructions here](/azure/backup/backup-azure-delete-vault?tabs=powershell#powershell-install-az-module).
+>- Ensure PowerShell version 7 or later and the latest _Az module_ are installed. To install the same, see the [instructions here](?tabs=powershell#powershell-install-az-module).
If you're sure that all the items backed up in the vault are no longer required and wish to delete them at once without reviewing, you can directly run the PowerShell script in this section. The script will delete all the backup items recursively and eventually the entire vault.
If you're sure that all the items backed up in the vault are no longer required
Follow these steps:
-- **Step 1**: Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](/azure/backup/multi-user-authorization#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
+- **Step 1**: Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)
- <a id="powershell-install-az-module">**Step 2**</a>: Install the _Az module_ and upgrade to PowerShell 7 version by performing these steps:
For more information on the ARMClient command, see [ARMClient README](https://gi
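The ARMClient approach referenced above ultimately issues a DELETE against the Azure Resource Manager endpoint. As a minimal sketch (not from the original article), this builds the URL such a call targets; the subscription, resource group, vault name, and `api-version` below are placeholders for illustration:

```python
# Sketch of the Resource Manager URL an ARMClient delete call targets.
ARM_ENDPOINT = "https://management.azure.com"

def vault_delete_url(subscription_id: str, resource_group: str,
                     vault_name: str, api_version: str = "2021-06-01") -> str:
    """Build the ARM URL for deleting a Recovery Services vault.
    The api-version default is an assumption for illustration."""
    return (
        f"{ARM_ENDPOINT}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.RecoveryServices/vaults/{vault_name}"
        f"?api-version={api_version}"
    )
```

An authenticated DELETE to this URL is what removes the vault resource itself, which is why all dependencies must be cleaned up first.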
## Next steps
- [Learn about Recovery Services vaults](backup-azure-recovery-services-vault-overview.md)
-- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md)
+- [Learn about monitoring and managing Recovery Services vaults](backup-azure-manage-windows-server.md)
backup Backup Azure Encrypted Vm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-encrypted-vm-troubleshoot.md
To resolve this issue, [restore the Key-Vault key or secret](backup-azure-restor
Error message: Backup failed in allocating storage from protection service
-Backup operation failed because Azure Key Vault does not have the required access to the Recovery Services vault. [Assign required permissions to the vault to access the encryption key](/azure/backup/encryption-at-rest-with-cmk?tabs=portal#assign-user-assigned-managed-identity-to-the-vault-in-preview) and retry the operation.
+Backup operation failed because Azure Key Vault does not have the required access to the Recovery Services vault. [Assign required permissions to the vault to access the encryption key](./encryption-at-rest-with-cmk.md?tabs=portal#assign-user-assigned-managed-identity-to-the-vault-in-preview) and retry the operation.
## Next steps
- [Step-by-step instructions to backup encrypted Azure virtual machines](backup-azure-vms-encryption.md)
-- [Step-by-step instructions to restore encrypted Azure virtual machines](restore-azure-encrypted-virtual-machines.md)
+- [Step-by-step instructions to restore encrypted Azure virtual machines](restore-azure-encrypted-virtual-machines.md)
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-monitoring-built-in-monitor.md
The following scenarios are defined by service as alertable scenarios.
- Backup succeeded with warnings for Microsoft Azure Recovery Services (MARS) agent
- Stop protection with retain data/Stop protection with delete data
- Soft-delete functionality disabled for vault
-- [Unsupported backup type for database workloads](/azure/backup/backup-sql-server-azure-troubleshoot#backup-type-unsupported)
+- [Unsupported backup type for database workloads](./backup-sql-server-azure-troubleshoot.md#backup-type-unsupported)
### Alerts from the following Azure Backup solutions are shown here
To configure notifications for Azure Monitor alerts, create an [alert processing
## Next steps
-[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
+[Monitor Azure Backup workloads using Azure Monitor](backup-azure-monitoring-use-azuremonitor.md)
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-enhanced-policy.md
# Back up an Azure VM using Enhanced policy (in preview)
-This article explains how to use _Enhanced policy_ to configure _Multiple Backups Per Day_ and back up [Trusted Launch VMs](/azure/virtual-machines/trusted-launch) with the Azure Backup service. _Enhanced policy_ for backup of VMs is in preview.
+This article explains how to use _Enhanced policy_ to configure _Multiple Backups Per Day_ and back up [Trusted Launch VMs](../virtual-machines/trusted-launch.md) with the Azure Backup service. _Enhanced policy_ for backup of VMs is in preview.
-Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](/azure/virtual-machines/trusted-launch) is supported with _Enhanced policy_ only. To enroll your subscription for backup of Trusted Launch VM, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
+Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only. To enroll your subscription for backup of Trusted Launch VM, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
>[!Important]
->The existing [default policy](/azure/backup/backup-during-vm-creation#create-a-vm-with-backup-configured) won't support protecting newer Azure offerings, such as Trusted Launch VM, UltraSSD, Shared disk, and Confidential Azure VMs.
+>The existing [default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) won't support protecting newer Azure offerings, such as Trusted Launch VM, UltraSSD, Shared disk, and Confidential Azure VMs.
You must enable backup for Trusted Launch VM through enhanced policy only. The Enhanced policy provides the following features:
Follow these steps:
6. Click **Create**.
>[!Note]
->- We support the Enhanced policy configuration through [Recovery Services vault](/azure/backup/backup-azure-arm-vms-prepare) and [VM Manage blade](/azure/backup/backup-during-vm-creation#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported.
+>- We support the Enhanced policy configuration through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm) only. Configuration through Backup center is currently not supported.
>- For hourly backups, the last backup of the day is transferred to the vault. If the backup fails, the first backup of the next day is transferred to the vault.
>- Enhanced policy can be only availed for unprotected VMs that are new to Azure Backup. Note that Azure VMs that are protected with existing policy can't be moved to Enhanced policy.
## Next steps
-- [Run a backup immediately](/azure/backup/backup-azure-vms-first-look-arm#run-a-backup-immediately)
-- [Verify Backup job status](/azure/backup/backup-azure-arm-vms-prepare#verify-backup-job-status)
-- [Restore Azure virtual machines](/azure/backup/backup-azure-arm-restore-vms#restore-disks)
-
+- [Run a backup immediately](./backup-azure-vms-first-look-arm.md#run-a-backup-immediately)
+- [Verify Backup job status](./backup-azure-arm-vms-prepare.md#verify-backup-job-status)
+- [Restore Azure virtual machines](./backup-azure-arm-restore-vms.md#restore-disks)
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
Azure Backup delivers these key benefits:
## How Azure Backup protects from ransomware?
-Azure Backup helps protect your critical business systems and backup data against a ransomware attack by implementing preventive measures and providing tools that protect your organization from every step that attackers take to infiltrate your systems. It provides security to your backup environment, both when your data is in transit and at rest. [Learn more](/azure/security/fundamentals/backup-plan-to-protect-against-ransomware)
+Azure Backup helps protect your critical business systems and backup data against a ransomware attack by implementing preventive measures and providing tools that protect your organization from every step that attackers take to infiltrate your systems. It provides security to your backup environment, both when your data is in transit and at rest. [Learn more](../security/fundamentals/backup-plan-to-protect-against-ransomware.md)
## Next steps
- [Review](backup-architecture.md) the architecture and components for different backup scenarios.
-- [Verify](backup-support-matrix.md) support requirements and limitations for backup, and for [Azure VM backup](backup-support-matrix-iaas.md).
+- [Verify](backup-support-matrix.md) support requirements and limitations for backup, and for [Azure VM backup](backup-support-matrix-iaas.md).
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
You can also use the following FQDNs to allow access to the required services fr
#### Allow connectivity for servers behind internal load balancers
-When using an internal load balancer, you need to allow the outbound connectivity from virtual machines behind the internal load balancer to perform backups. To do so, you can use a combination of internal and external standard load balancers to create an outbound connectivity. [Learn more](/azure/load-balancer/egress-only) about the configuration to create an _egress only_ setup for VMs in the backend pool of the internal load balancer.
+When using an internal load balancer, you need to allow the outbound connectivity from virtual machines behind the internal load balancer to perform backups. To do so, you can use a combination of internal and external standard load balancers to create an outbound connectivity. [Learn more](../load-balancer/egress-only.md) about the configuration to create an _egress only_ setup for VMs in the backend pool of the internal load balancer.
#### Use an HTTP proxy server to route traffic
If you need to disable auto-protection, select the instance name under **Configu
Learn how to:
* [Restore backed-up SQL Server databases](restore-sql-database-azure-vm.md)
-* [Manage backed-up SQL Server databases](manage-monitor-sql-database-backup.md)
+* [Manage backed-up SQL Server databases](manage-monitor-sql-database-backup.md)
backup Backup Support Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-automation.md
You can automate most backup related tasks using programmatic methods in Azure
| Manage | Create Backup vault | Supported <br><br> [See the examples](./backup-blobs-storage-account-ps.md#create-a-backup-vault). | Supported <br><br> [See the examples](./backup-blobs-storage-account-cli.md#create-a-backup-vault). | Supported <br><br> [See the examples](./backup-azure-dataprotection-use-rest-api-create-update-backup-vault.md). | N/A | Supported |
| Manage | Move Recovery Services vault | Supported <br><br> [See the examples](./backup-azure-move-recovery-services-vault.md#use-powershell-to-move-recovery-services-vault). | Supported <br><br> [See the examples](./backup-azure-move-recovery-services-vault.md#use-powershell-to-move-recovery-services-vault). | Supported | N/A | N/A |
| Manage | Move Backup vault | Supported | Supported | Supported | N/A | N/A |
-| Manage | Delete Recovery Services vault | Supported <br><br> [See the examples](/azure/backup/backup-azure-delete-vault?tabs=powershell#tabpanel_1_powershell). | Supported <br><br> [See the examples](/azure/backup/backup-azure-delete-vault?tabs=cli#tabpanel_1_cli). | Supported <br><br> [See the examples](/azure/backup/backup-azure-delete-vault?tabs=arm#tabpanel_1_arm). | N/A | N/A |
+| Manage | Delete Recovery Services vault | Supported <br><br> [See the examples](./backup-azure-delete-vault.md?tabs=powershell#tabpanel_1_powershell). | Supported <br><br> [See the examples](./backup-azure-delete-vault.md?tabs=cli#tabpanel_1_cli). | Supported <br><br> [See the examples](./backup-azure-delete-vault.md?tabs=arm#tabpanel_1_arm). | N/A | N/A |
| Manage | Delete Backup vault | Supported | Here | Here | N/A | N/A |
| Manage | Configure diagnostics settings | Supported | Supported | Supported | Supported <br><br> [See the examples](./azure-policy-configure-diagnostics.md). | Supported |
| Manage | Manage Azure Monitor Alerts (preview) | Supported | Supported | Supported | N/A | N/A |
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported
[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM.
Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Backup for Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup only through [Recovery Services vault](/azure/backup/backup-azure-arm-vms-prepare) and [VM Manage blade](/azure/backup/backup-during-vm-creation#start-a-backup-after-creating-the-vm). <br><br> **Feature details** <br> <ul><li> Migration of an existing [Generation 2](/azure/virtual-machines/generation-2) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](/azure/virtual-machines/trusted-launch-portal?tabs=portal#deploy-a-trusted-vm). </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Currently, you can restore as [Create VM](/azure/backup/backup-azure-arm-restore-vms#create-a-vm), or [Restore disk](/azure/backup/backup-azure-arm-restore-vms#restore-disks) only. </li><li> [vTPM state](/azure/virtual-machines/trusted-launch#vtpm) doesn't persist while you restore a VM from a recovery point. Therefore, scenarios that require vTPM persistence may not work across the backup and restore operation. </li></ul>
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Backup for Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup only through [Recovery Services vault](./backup-azure-arm-vms-prepare.md) and [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm). <br><br> **Feature details** <br> <ul><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Currently, you can restore as [Create VM](./backup-azure-arm-restore-vms.md#create-a-vm), or [Restore disk](./backup-azure-arm-restore-vms.md#restore-disks) only. </li><li> [vTPM state](../virtual-machines/trusted-launch.md#vtpm) doesn't persist while you restore a VM from a recovery point. Therefore, scenarios that require vTPM persistence may not work across the backup and restore operation. </li></ul>
## VM storage support
Restore with Managed identities | Yes, supported for managed Azure VMs, and not
Azure VM data disks | Support for backup of Azure VMs with up to 32 disks.<br><br> Support for backup of Azure VMs with unmanaged disks or classic VMs is up to 16 disks only.
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB combined for all disks in a VM.
-Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](/azure/virtual-machines/disks-redundancy#zone-redundant-storage-for-managed-disks) is supported.
+Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> To enroll your subscription for WA Disk, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Snapshots don't include WA disk snapshots for unsupported subscriptions as WA disk will be excluded. <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
On-premises/Azure VMs with MABS | ![Yes][green] | ![Yes][green]
[green]: ./media/backup-support-matrix/green.png
[yellow]: ./media/backup-support-matrix/yellow.png
-[red]: ./media/backup-support-matrix/red.png
+[red]: ./media/backup-support-matrix/red.png
backup Create Manage Azure Services Using Azure Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/create-manage-azure-services-using-azure-command-line-interface.md
The following table lists the Azure CLI document references available for suppor
Azure services | CLI document references
-- | --
-Azure Vault | [Delete a Recovery Services vault](/azure/backup/backup-azure-delete-vault?tabs=cli#tabpanel_1_cli)
+Azure Vault | [Delete a Recovery Services vault](./backup-azure-delete-vault.md?tabs=cli#tabpanel_1_cli)
Azure Virtual Machine (VM) | <li>[Backup an Azure VM](quick-backup-vm-cli.md)</li><li>[Restore an Azure VM](tutorial-restore-disk.md)</li><li>[Restores files from Azure VM backups](tutorial-restore-files.md)</li><li>[Update the existing VM backup policy](modify-vm-policy-cli.md)</li><li>[Backup and restore selective disk for Azure VMs](selective-disk-backup-restore.md#using-azure-cli)</li>
Azure file share | <li>[Back up Azure file shares](backup-afs-cli.md)</li><li>[Restore Azure file shares](restore-afs-cli.md)</li><li>[Manage Azure file share backups](manage-afs-backup-cli.md)</li>
-SAP HANA | <li>[Back up SAP HANA databases in an Azure VM](tutorial-sap-hana-backup-cli.md)</li><li>[Restore SAP HANA databases in an Azure VM](tutorial-sap-hana-restore-cli.md)</li><li>[Manage SAP HANA databases in an Azure VM](tutorial-sap-hana-manage-cli.md)</li>
+SAP HANA | <li>[Back up SAP HANA databases in an Azure VM](tutorial-sap-hana-backup-cli.md)</li><li>[Restore SAP HANA databases in an Azure VM](tutorial-sap-hana-restore-cli.md)</li><li>[Manage SAP HANA databases in an Azure VM](tutorial-sap-hana-manage-cli.md)</li>
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/guidance-best-practices.md
Azure Backup enables data protection for various workloads (on-premises and clou
* **Native workload integration**: Azure Backup provides native integration with Azure Workloads (VMs, SAP HANA, SQL in Azure VMs and even Azure Files) without requiring you to manage automation or infrastructure to deploy agents, write new scripts or provision storage.
- [Learn more](/azure/backup/backup-overview#what-can-i-back-up) about supported workloads.
+ [Learn more](./backup-overview.md#what-can-i-back-up) about supported workloads.
### Data plane
Consider the following guidelines when creating Backup Policy:
While scheduling your backup policy, consider the following points:
-- For mission-critical resources, try scheduling the most frequently available automated backups per day to have a smaller RPO. [Learn more](/azure/backup/backup-support-matrix#retention-limits)
+- For mission-critical resources, try scheduling the most frequently available automated backups per day to have a smaller RPO. [Learn more](./backup-support-matrix.md#retention-limits)
If you need to take multiple backups per day for Azure VM via the extension, see the workarounds in the [next section](#retention-considerations).
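The relationship between backup frequency and worst-case RPO noted above can be sketched with simple arithmetic. This illustrative helper (not from the original article) assumes backups are evenly spaced across the day:

```python
def worst_case_rpo_hours(backups_per_day: int) -> float:
    """Worst-case recovery point objective in hours, assuming evenly spaced
    backups and a failure just before the next scheduled backup runs."""
    if backups_per_day < 1:
        raise ValueError("at least one backup per day is required")
    return 24 / backups_per_day
```

With a single daily backup the worst-case RPO is 24 hours; four evenly spaced backups per day reduce it to six hours.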
While scheduling your backup policy, consider the following points:
* Long-term retention:
- * Planned (compliance requirements) - if you know in advance that data is required years from the current time, then use long-term retention. Azure Backup supports backup of long-term retention points in the archive tier, along with snapshots and the Standard tier. [Learn more](/azure/backup/archive-tier-support) about supported workloads for Archive tier and retention configuration.
+ * Planned (compliance requirements) - if you know in advance that data is required years from the current time, then use long-term retention. Azure Backup supports backup of long-term retention points in the archive tier, along with snapshots and the Standard tier. [Learn more](./archive-tier-support.md) about supported workloads for Archive tier and retention configuration.
* Unplanned (on-demand requirement) - if you don't know in advance, then you can use an on-demand backup with specific custom retention settings (these custom retention settings aren't impacted by policy settings). * On-demand backup with custom retention - if you need to take a backup not scheduled via backup policy, then you can use an on-demand backup. This can be useful for taking backups that don't fit your scheduled backup or for taking granular backup (for example, multiple IaaS VM backups per day since scheduled backup permits only one backup per day). It's important to note that the retention policy defined in scheduled policy doesn't apply to on-demand backups.
To help you protect your backup data and meet the security needs of your busines
- Azure role-based access control (Azure RBAC) enables fine-grained access management, segregation of duties within your team and granting only the amount of access to users necessary to perform their jobs. [Learn more here](backup-rbac-rs-vault.md). -- If you have multiple workloads to back up (such as Azure VMs, SQL databases, and PostgreSQL databases) and multiple stakeholders to manage those backups, it is important to segregate their responsibilities so that each user has access to only those resources they're responsible for. Azure role-based access control (Azure RBAC) enables granular access management, segregation of duties within your team, and granting only the types of access to users necessary to perform their jobs. [Learn more](/azure/backup/backup-rbac-rs-vault)
+- If you have multiple workloads to back up (such as Azure VMs, SQL databases, and PostgreSQL databases) and multiple stakeholders to manage those backups, it is important to segregate their responsibilities so that each user has access to only those resources they're responsible for. Azure role-based access control (Azure RBAC) enables granular access management, segregation of duties within your team, and granting only the types of access to users necessary to perform their jobs. [Learn more](./backup-rbac-rs-vault.md)
-- You can also segregate the duties by providing minimum required access to perform a particular task. For example, a person responsible for monitoring the workloads shouldn't have access to modify the backup policy or delete the backup items. Azure Backup provides three built-in roles to control backup management operations: Backup contributors, operators, and readers. Learn more here. For information about the minimum Azure role required for each backup operation for Azure VMs, SQL/SAP HANA databases, and Azure File Share, see [this guide](/azure/backup/backup-rbac-rs-vault).
+- You can also segregate the duties by providing minimum required access to perform a particular task. For example, a person responsible for monitoring the workloads shouldn't have access to modify the backup policy or delete the backup items. Azure Backup provides three built-in roles to control backup management operations: Backup contributors, operators, and readers. Learn more here. For information about the minimum Azure role required for each backup operation for Azure VMs, SQL/SAP HANA databases, and Azure File Share, see [this guide](./backup-rbac-rs-vault.md).
-- [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) also provides the flexibility to build [Custom Roles](/azure/role-based-access-control/custom-roles) based on your individual requirements. If you're unsure about the types of roles recommended for a specific operation, you can use the built-in roles provided by Azure role-based access control (Azure RBAC) to get started.
+- [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) also provides the flexibility to build [Custom Roles](../role-based-access-control/custom-roles.md) based on your individual requirements. If you're unsure about the types of roles recommended for a specific operation, you can use the built-in roles provided by Azure role-based access control (Azure RBAC) to get started.
The following diagram shows how the different Azure built-in roles work:
Encryption protects your data and helps you to meet your organizational security
### Protection of backup data from unintentional deletes with soft-delete
-You may encounter scenarios where you have mission-critical backup data in a vault, and it gets deleted accidentally or erroneously. Also, a malicious actor may delete your production backup items. It's often costly and time-intensive to rebuild those resources and can even cause crucial data loss. Azure Backup provides a safeguard against accidental and malicious deletion with the [Soft-Delete](/azure/backup/backup-azure-security-feature-cloud) feature by allowing you to recover those resources after they are deleted.
+You may encounter scenarios where you have mission-critical backup data in a vault, and it gets deleted accidentally or erroneously. Also, a malicious actor may delete your production backup items. It's often costly and time-intensive to rebuild those resources and can even cause crucial data loss. Azure Backup provides a safeguard against accidental and malicious deletion with the [Soft-Delete](./backup-azure-security-feature-cloud.md) feature by allowing you to recover those resources after they are deleted.
-With soft-delete, if a user deletes the backup (of a VM, SQL Server database, Azure file share, SAP HANA database), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention of backup data in the soft-delete state doesn't incur any cost. [Learn more](/azure/backup/backup-azure-security-feature-cloud)
+With soft-delete, if a user deletes the backup (of a VM, SQL Server database, Azure file share, SAP HANA database), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention of backup data in the soft-delete state doesn't incur any cost. [Learn more](./backup-azure-security-feature-cloud.md)
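The 14-day soft-delete window described above can be pictured with a small sketch (purely illustrative; the constant mirrors the retention period stated in the text):

```python
from datetime import date, timedelta

SOFT_DELETE_RETENTION = timedelta(days=14)  # per the soft-delete behavior above

def purge_date(deleted_on: date) -> date:
    """Last day the soft-deleted backup item can still be undeleted
    before it is permanently removed."""
    return deleted_on + SOFT_DELETE_RETENTION

# An item deleted on 3 January 2022 stays recoverable until 17 January.
print(purge_date(date(2022, 1, 3)))  # 2022-01-17
```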
### Multi-User Authorization (MUA)
With soft-delete, if a user deletes the backup (of a VM, SQL Server database, Az
Any administrator who has privileged access to your backup data has the potential to cause irreparable damage to the system. A rogue admin can delete all your business-critical data or even turn off all the security measures that may leave your system vulnerable to cyber-attacks.
-Azure Backup provides you with the [Multi-User Authorization (MUA)](/azure/backup/multi-user-authorization) feature to protect you from such rogue administrator attacks. Multi-user authorization helps protect against a rogue administrator performing destructive operations (that is, disabling soft-delete), by ensuring that every privileged/destructive operation is done only after getting approval from a security administrator.
+Azure Backup provides you with the [Multi-User Authorization (MUA)](./multi-user-authorization.md) feature to protect you from such rogue administrator attacks. Multi-user authorization helps protect against a rogue administrator performing destructive operations (that is, disabling soft-delete), by ensuring that every privileged/destructive operation is done only after getting approval from a security administrator.
### Ransomware Protection
Azure Backup provides you with the [Multi-User Authorization (MUA)](/azure/backu
You may encounter scenarios where someone tries to breach your system and maliciously turn off security mechanisms, such as disabling soft delete, or attempts to perform destructive operations, such as deleting the backup resources.
-Azure Backup provides security against such incidents by sending you critical alerts over your preferred notification channel (email, ITSM, webhook, runbook, and so on) by creating an [Action Rule](/azure/azure-monitor/alerts/alerts-action-rules) on top of the alert. [Learn more](/azure/backup/security-overview#monitoring-and-alerts-of-suspicious-activity)
+Azure Backup provides security against such incidents by sending you critical alerts over your preferred notification channel (email, ITSM, webhook, runbook, and so on) by creating an [Action Rule](../azure-monitor/alerts/alerts-action-rules.md) on top of the alert. [Learn more](./security-overview.md#monitoring-and-alerts-of-suspicious-activity)
### Security features to help protect hybrid backups
Azure Backup requires movement of data from your workload to the Recovery Servic
While protecting your critical data with Azure Backup, you wouldn't want your resources to be accessible from the public internet. Especially if you're a bank or a financial institution, you would have stringent compliance and security requirements to protect your High Business Impact (HBI) data. Even in the healthcare industry, there are strict compliance rules.
-To fulfill all these needs, use [Azure Private Endpoint](/azure/private-link/private-endpoint-overview), which is a network interface that connects you privately and securely to a service powered by Azure Private Link. We recommend that you use private endpoints for secure backup and restore without needing to allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks.
+To fulfill all these needs, use [Azure Private Endpoint](../private-link/private-endpoint-overview.md), which is a network interface that connects you privately and securely to a service powered by Azure Private Link. We recommend that you use private endpoints for secure backup and restore without needing to allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks.
-[Learn more](/azure/backup/private-endpoints#get-started-with-creating-private-endpoints-for-backup) about how to create and use private endpoints for Azure Backup inside your virtual networks.
+[Learn more](./private-endpoints.md#get-started-with-creating-private-endpoints-for-backup) about how to create and use private endpoints for Azure Backup inside your virtual networks.
* When you enable private endpoints for the vault, they're only used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backups. You can use the vault for the backup of other workloads as well (they won't require private endpoints though). In addition to the backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery in the case of Azure VM backup. [Learn more here](private-endpoints-overview.md#recommended-and-supported-scenarios).
The Azure Backup service offers the flexibility to effectively manage your costs
* **Reduce the backup storage cost by selectively backing up disks**: Exclude disk (preview feature) provides an efficient and cost-effective choice to selectively back up critical data. For example, you can back up only one disk when you don't want to back up all disks attached to a VM. This is also useful when you have multiple backup solutions. For example, to back up your databases or data with a workload backup solution (SQL Server database in Azure VM backup), use Azure VM level backup for selected disks. -- **Speed up your restores and minimize RTO using the Instant Restore feature**: Azure Backup takes snapshots of Azure VMs and stores them along with the disks to boost recovery point creation and to speed up restore operations. This is called Instant Restore. This feature allows a restore operation from these snapshots by cutting down the restore times. It reduces the time needed to transform and copy data back from the vault. Therefore, it'll incur storage costs for the snapshots taken during this period. Learn more about [Azure Backup Instant Recovery capability](/azure/backup/backup-instant-restore-capability).
+- **Speed up your restores and minimize RTO using the Instant Restore feature**: Azure Backup takes snapshots of Azure VMs and stores them along with the disks to boost recovery point creation and to speed up restore operations. This is called Instant Restore. This feature allows a restore operation from these snapshots by cutting down the restore times. It reduces the time needed to transform and copy data back from the vault. Therefore, it'll incur storage costs for the snapshots taken during this period. Learn more about [Azure Backup Instant Recovery capability](./backup-instant-restore-capability.md).
-- **Choose correct replication type**: Azure Backup vault's Storage Replication type is set to Geo-redundant (GRS), by default. This option can't be changed after you start protecting items. Geo-redundant storage (GRS) provides a higher level of data durability than Locally redundant storage (LRS), allows an opt-in to use Cross Region Restore, and costs more. Review the trade-offs between lower costs and higher data durability and choose the best option for your scenario. [Learn more](/azure/backup/backup-create-rs-vault#set-storage-redundancy)
+- **Choose correct replication type**: Azure Backup vault's Storage Replication type is set to Geo-redundant (GRS), by default. This option can't be changed after you start protecting items. Geo-redundant storage (GRS) provides a higher level of data durability than Locally redundant storage (LRS), allows an opt-in to use Cross Region Restore, and costs more. Review the trade-offs between lower costs and higher data durability and choose the best option for your scenario. [Learn more](./backup-create-rs-vault.md#set-storage-redundancy)
-- **Use Archive Tier for Long-Term Retention (LTR) and save costs**: Consider the scenario where you have older backup data that you rarely access, but that must be stored for a long period (for example, 99 years) for compliance reasons. Storing such a large amount of data in the Standard tier is costly and isn't economical. To help you optimize your storage costs, Azure Backup provides you with [Archive Tier](/azure/backup/archive-tier-support), which is an access tier especially designed for Long-Term Retention (LTR) of the backup data.
+- **Use Archive Tier for Long-Term Retention (LTR) and save costs**: Consider the scenario where you have older backup data that you rarely access, but that must be stored for a long period (for example, 99 years) for compliance reasons. Storing such a large amount of data in the Standard tier is costly and isn't economical. To help you optimize your storage costs, Azure Backup provides you with [Archive Tier](./archive-tier-support.md), which is an access tier especially designed for Long-Term Retention (LTR) of the backup data.
- If you're protecting both the workload running inside a VM and the VM itself, ensure if this dual protection is needed.
In a scenario where your backup/restore job failed due to some unknown issue. To
You can configure such critical alerts and route them to any preferred notification channel (email, ITSM, webhook, runbook, and so on). Azure Backup integrates with multiple Azure services to meet different alerting and notification requirements: -- **Azure Monitor Logs (Log Analytics)**: You can configure your [vaults to send data to a Log Analytics workspace](/azure/backup/backup-azure-monitoring-use-azuremonitor#create-alerts-by-using-log-analytics), write custom queries on the workspace, and configure alerts to be generated based on the query output. You can view the query results in tables and charts; also, export them to Power BI or Grafana. (Log Analytics is also a key component of the reporting/auditing capability described in the later sections).
+- **Azure Monitor Logs (Log Analytics)**: You can configure your [vaults to send data to a Log Analytics workspace](./backup-azure-monitoring-use-azuremonitor.md#create-alerts-by-using-log-analytics), write custom queries on the workspace, and configure alerts to be generated based on the query output. You can view the query results in tables and charts; also, export them to Power BI or Grafana. (Log Analytics is also a key component of the reporting/auditing capability described in the later sections).
- Azure Monitor Alerts: For certain default scenarios, such as backup failure, restore failure, backup data deletion, and so on, Azure Backup sends alerts by default that are surfaced using Azure Monitor, without the need for a user to set up a Log Analytics workspace.
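For the Log Analytics route above, a query for failed backup jobs might look like the sketch below; it only assembles the KQL text in Python, and the table and column names (`AddonAzureBackupJobs`, `JobOperation`, `JobStatus`) are assumptions to verify against your workspace's diagnostics schema:

```python
def failed_backup_jobs_query(days: int = 7) -> str:
    """Build an illustrative KQL query that lists failed backup jobs
    over the last `days` days (table/column names are assumed)."""
    return (
        "AddonAzureBackupJobs\n"
        f"| where TimeGenerated > ago({days}d)\n"
        '| where JobOperation == "Backup" and JobStatus == "Failed"\n'
        "| project TimeGenerated, BackupItemUniqueId, JobFailureCode"
    )

print(failed_backup_jobs_query(7))
```

You would paste the resulting text into the workspace's query editor and attach an alert rule to it, as the bullet above describes.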
Watch the following video to learn how to leverage Azure Monitor to configure va
Read the following articles as starting points for using Azure Backup: * [Azure Backup overview](backup-overview.md)
-* [Frequently Asked Questions](backup-azure-backup-faq.yml)
+* [Frequently Asked Questions](backup-azure-backup-faq.yml)
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/install-mars-agent.md
If you've already installed the agent on any machines, ensure you're running the
1. Select **Finish**. The agent is now installed, and your machine is registered to the vault. You're ready to configure and schedule your backup. >[!Note]
- >We strongly recommend you save your passphrase in an alternate secure location, such as Azure Key Vault. Microsoft can't recover the data without the passphrase. [Learn](/azure/key-vault/secrets/quick-create-portal) how to store a secret in a key vault.
+ >We strongly recommend you save your passphrase in an alternate secure location, such as Azure Key Vault. Microsoft can't recover the data without the passphrase. [Learn](../key-vault/secrets/quick-create-portal.md) how to store a secret in a key vault.
## Next steps
-Learn how to [Back up Windows machines by using the Azure Backup MARS agent](backup-windows-with-mars-agent.md)
+Learn how to [Back up Windows machines by using the Azure Backup MARS agent](backup-windows-with-mars-agent.md)
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
You can fix the policy version for all the impacted items in one click:
## Unregister a SQL Server instance
-Before you unregister the server, [disable soft delete](/azure/backup/backup-azure-security-feature-cloud#disabling-soft-delete-using-azure-portal), and then delete all backup items.
+Before you unregister the server, [disable soft delete](./backup-azure-security-feature-cloud.md#disabling-soft-delete-using-azure-portal), and then delete all backup items.
>[!NOTE]
->Deleting backup items with soft delete enabled will lead to 14 days of retention, and you will need to wait before the items are completely removed. However, if you've deleted the backup items with soft delete enabled, you can undelete them, disable soft-delete, and then delete them again for immediate removal. [Learn more](/azure/backup/backup-azure-security-feature-cloud#permanently-deleting-soft-deleted-backup-items)
+>Deleting backup items with soft delete enabled will lead to 14 days of retention, and you will need to wait before the items are completely removed. However, if you've deleted the backup items with soft delete enabled, you can undelete them, disable soft-delete, and then delete them again for immediate removal. [Learn more](./backup-azure-security-feature-cloud.md#permanently-deleting-soft-deleted-backup-items)
Unregister a SQL Server instance after you disable protection but before you delete the vault.
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/multi-user-authorization.md
Here is the flow of events in a typical scenario:
- The Resource Guard and the Recovery Services vault must be in the same Azure region. - As stated in the previous section, ensure the Backup admin does **not** have **Contributor** permissions on the Resource Guard. You can choose to have the Resource Guard in another subscription of the same directory or in another directory to ensure maximum isolation.-- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1).
+- Ensure that your subscriptions containing the Recovery Services vault as well as the Resource Guard (in different subscriptions or tenants) are registered to use the **Microsoft.RecoveryServices** provider. For more details, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1).
## Usage scenarios
backup Tutorial Sql Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-sql-backup.md
Configure backup as follows:
To optimize backup loads, Azure Backup sets the maximum number of databases in one backup job to 50. * To protect more than 50 databases, configure multiple backups.
- * To [enable](/azure/backup/backup-sql-server-database-azure-vms#enable-auto-protection) the entire instance or the Always On availability group, in the **AUTOPROTECT** drop-down list, select **ON**, and then select **OK**.
+ * To [enable](./backup-sql-server-database-azure-vms.md#enable-auto-protection) the entire instance or the Always On availability group, in the **AUTOPROTECT** drop-down list, select **ON**, and then select **OK**.
> [!NOTE]
- > The [auto-protection](/azure/backup/backup-sql-server-database-azure-vms#enable-auto-protection) feature not only enables protection on all the existing databases at once, but also automatically protects any new databases added to that instance or the availability group.
+ > The [auto-protection](./backup-sql-server-database-azure-vms.md#enable-auto-protection) feature not only enables protection on all the existing databases at once, but also automatically protects any new databases added to that instance or the availability group.
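Given the 50-database cap per backup job noted above, larger instances have to be spread across several backup configurations; a hypothetical helper (not from the article) for batching database names:

```python
MAX_DATABASES_PER_JOB = 50  # Azure Backup's per-job cap noted above

def batch_databases(databases: list[str]) -> list[list[str]]:
    """Split a database list into groups that each fit within one
    backup configuration."""
    return [
        databases[i : i + MAX_DATABASES_PER_JOB]
        for i in range(0, len(databases), MAX_DATABASES_PER_JOB)
    ]

# 120 databases would need three separate backup configurations.
dbs = [f"db{n}" for n in range(120)]
print([len(b) for b in batch_databases(dbs)])  # [50, 50, 20]
```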
1. Define the **Backup policy**. You can do one of the following:
In this tutorial, you used the Azure portal to:
Continue to the next tutorial to restore an Azure virtual machine from disk. > [!div class="nextstepaction"]
-> [Restore SQL Server databases on Azure VMs](./restore-sql-database-azure-vm.md)
+> [Restore SQL Server databases on Azure VMs](./restore-sql-database-azure-vm.md)
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
For more information, see [Archive Tier support in Azure Backup](archive-tier-su
Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
-For more information, see [how to protect Recovery Services vault and manage critical operations with MUA](/azure/backup/multi-user-authorization).
+For more information, see [how to protect Recovery Services vault and manage critical operations with MUA](./multi-user-authorization.md).
## Multiple backups per day for Azure Files (in preview)
cloud-services-extended-support Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/support-help.md
Here are suggestions for where you can get help when developing your Azure Cloud
## Self help troubleshooting
-For common issues and workarounds, see [Azure Cloud Services troubleshooting documentation](https://docs.microsoft.com/troubleshoot/azure/cloud-services/welcome-cloud-services) and [Frequently asked questions](faq.yml)
+For common issues and workarounds, see [Azure Cloud Services troubleshooting documentation](/troubleshoot/azure/cloud-services/welcome-cloud-services) and [Frequently asked questions](faq.yml)
## Post a question on Microsoft Q&A
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
To configure your Speech resource for Azure AD authentication, create a custom d
### Assign roles For Azure AD authentication with Speech resources, you need to assign either the *Cognitive Services Speech Contributor* or *Cognitive Services Speech User* role.
-You can assign roles to the user or application using the [Azure portal](/azure/role-based-access-control/role-assignments-portal) or [PowerShell](/azure/role-based-access-control/role-assignments-powershell).
+You can assign roles to the user or application using the [Azure portal](../../role-based-access-control/role-assignments-portal.md) or [PowerShell](../../role-based-access-control/role-assignments-powershell.md).
## Get an Azure AD access token ::: zone pivot="programming-language-csharp"
aadToken = ibc.get_token("https://cognitiveservices.azure.com/.default")
::: zone-end ::: zone pivot="programming-language-more"
-Find samples that get an Azure AD access token in [Microsoft identity platform code samples](/azure/active-directory/develop/sample-v2-code).
+Find samples that get an Azure AD access token in [Microsoft identity platform code samples](../../active-directory/develop/sample-v2-code.md).
-For programming languages where a Microsoft identity platform client library isn't available, you can directly [request an access token](/azure/active-directory/develop/v2-oauth-ropc).
+For programming languages where a Microsoft identity platform client library isn't available, you can directly [request an access token](../../active-directory/develop/v2-oauth-ropc.md).
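Once you have both an Azure AD access token and the Speech resource ID, they are combined into a single authorization-token string for the Speech SDK. The sketch below shows that concatenation; the `aad#<resourceId>#<token>` format follows the Speech service's Azure AD auth convention, and the sample values are placeholders:

```python
def build_speech_authorization_token(resource_id: str, aad_token: str) -> str:
    """Combine the Speech resource ID and an Azure AD access token into
    the authorization-token string the Speech SDK consumes."""
    return f"aad#{resource_id}#{aad_token}"

# Placeholder inputs; in practice the resource ID comes from the resource's
# Azure properties and the token from an identity library (see above).
token = build_speech_authorization_token(
    "/subscriptions/000/resourceGroups/rg/providers/"
    "Microsoft.CognitiveServices/accounts/my-speech",
    "eyJ0eXAi...",
)
print(token.split("#")[0])  # aad
```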
::: zone-end ## Get the Speech resource ID
The ```VoiceProfileClient``` isn't available with the Speech SDK for Python.
::: zone-end > [!NOTE]
-> The ```ConversationTranslator``` doesn't support Azure AD authentication.
+> The ```ConversationTranslator``` doesn't support Azure AD authentication.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## December 2021
-* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the Generally Available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](https://aka.ms/ta-get-started-sdk) for details.
+* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the Generally Available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
## November 2021
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## Next steps
-* [What is Azure Cognitive Service for Language?](overview.md)
+* [What is Azure Cognitive Service for Language?](overview.md)
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
The Azure platform provides role-based access (Azure RBAC) to control access to
To set up a service principal, [create a registered application from the Azure CLI](../quickstarts/identity/service-principal-from-cli.md). Then, the endpoint and credentials can be used to authenticate the SDKs. See examples of how [service principal](../quickstarts/identity/service-principal.md) is used.
-Communication Services supports Azure AD authentication but does not support managed identity for Communication Services resources. You can find more details about managed identity support in the [Azure Active Directory documentation](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities).
+Communication Services supports Azure AD authentication but does not support managed identity for Communication Services resources. You can find more details about managed identity support in the [Azure Active Directory documentation](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
### User Access Tokens
The user identity is intended to act as a primary key for logs and metrics colle
> [Create User Access Tokens](../quickstarts/access-tokens.md) For more information, see the following articles:-- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/interop/calling-chat.md
const call = callAgent.startCall([teamsCallee]);
``` **Voice and video calling events**
-[Communication Services voice and video calling events](/azure/event-grid/communication-services-voice-video-events) are raised for calls between a Communication Services user and Teams users.
+[Communication Services voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) are raised for calls between a Communication Services user and Teams users.
**Limitations and known issues** - Teams users must be in "TeamsOnly" mode. Skype for Business users can't receive 1:1 calls from Communication Services users.
While in private preview, a Communication Services user can do various actions u
## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
-Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/join-teams-meeting.md
Microsoft will indicate to you via the Azure Communication Services API that rec
- PowerPoint presentations are not rendered for Communication Services users. - Teams meetings support up to 1000 participants, but the Azure Communication Services Calling SDK currently only supports 350 participants and Chat SDK supports 250 participants. - With [Cloud Video Interop for Microsoft Teams](/microsoftteams/cloud-video-interop), some devices have seen issues when a Communication Services user shares their screen.-- [Communication Services voice and video calling events](/azure/event-grid/communication-services-voice-video-events) are not raised for Teams meeting.
+- [Communication Services voice and video calling events](../../event-grid/communication-services-voice-video-events.md) are not raised for Teams meetings.
- Features such as reactions, raised hand, together mode, and breakout rooms are only available for Teams users. - Communication Services users cannot interact with poll or Q&A apps in meetings. - Communication Services won't have access to all chat features supported by Teams. They can send and receive text messages, use typing indicators, read receipts and other features supported by Chat SDK. However features like file sharing, reply or react to a message are not supported for Communication Services users.
Microsoft will indicate to you via the Azure Communication Services API that rec
- [How-to: Join a Teams meeting](../how-tos/calling-sdk/teams-interoperability.md) - [Quickstart: Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
+- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-endpoint.md
None.
- Application admin - Cloud application admin
-Find more details in [Azure Active Directory documentation](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference).
+Find more details in [Azure Active Directory documentation](../../active-directory/roles/permissions-reference.md).
## Next steps > [!div class="nextstepaction"] > [Issue a Teams access token](../quickstarts/manage-teams-identity.md)
-Learn about [Teams interoperability](./teams-interop.md).
+Learn about [Teams interoperability](./teams-interop.md).
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
To create and deploy a confidential VM using an ARM template in the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. [Open the confidential VM ARM template](https://aka.ms/deploycvmazure).
+1. [Open the confidential VM ARM template](./quick-create-confidential-vm-portal-amd.md).
1. For **Subscription**, select an Azure subscription that meets the [prerequisites](#prerequisites).
Use this example to create a custom parameter file for a Linux-based confidentia
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Create a confidential VM on AMD in the Azure portal](quick-create-confidential-vm-portal-amd.md)
+> [Quickstart: Create a confidential VM on AMD in the Azure portal](quick-create-confidential-vm-portal-amd.md)
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-sftp-ssh.md
ms.suite: integration
Previously updated : 08/05/2021 Last updated : 01/12/2022 tags: connectors
For differences between the SFTP-SSH connector and the SFTP connector, review th
* MessageWay * OpenText Secure MFT * OpenText GXS
+ * Globalscape
+ * SFTP for Azure Blob Storage
* SFTP-SSH actions that support [chunking](../logic-apps/logic-apps-handle-large-messages.md) can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
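The dynamic chunk sizing described above can be sketched as follows. The connector's actual sizing algorithm is not public; only the bounds (5-MB start, 15-MB default, 50-MB maximum) come from the documentation, so this is a hypothetical illustration of how an adaptive size stays within that window.

```python
# Hypothetical sketch of the SFTP-SSH chunk-size bounds described above.
# Only the bounds (5 MB start, 15 MB default, 50 MB max) are documented;
# the growth/shrink policy here is illustrative, not the real algorithm.

MB = 1024 * 1024
MIN_CHUNK = 5 * MB
DEFAULT_CHUNK = 15 * MB
MAX_CHUNK = 50 * MB

def next_chunk_size(current: int, transfer_was_fast: bool) -> int:
    """Grow the chunk when the last transfer was fast, shrink otherwise,
    always staying within the documented 5 MB..50 MB window."""
    proposed = current * 2 if transfer_was_fast else current // 2
    return max(MIN_CHUNK, min(MAX_CHUNK, proposed))

size = DEFAULT_CHUNK
size = next_chunk_size(size, transfer_was_fast=True)  # grows to 30 MB
size = next_chunk_size(size, transfer_was_fast=True)  # capped at 50 MB
```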
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr-azure-resource-manager.md
In this tutorial, you deploy the same applications from the Dapr [Hello World](h
::: zone pivot="container-apps-bicep"
-* [Bicep](/azure/azure-resource-manager/bicep/install)
+* [Bicep](../azure-resource-manager/bicep/install.md)
::: zone-end
This command deletes both container apps, the storage account, the container app
## Next steps > [!div class="nextstepaction"]
-> [Application lifecycle management](application-lifecycle-management.md)
+> [Application lifecycle management](application-lifecycle-management.md)
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/availability-zones.md
az container show --name acilinuxcontainergroup --resource-group myResourceGroup
[az-container-show]: /cli/azure/container#az_container_show [az-group-create]: /cli/azure/group#az_group_create [az-deployment-group-create]: /cli/azure/deployment#az_deployment_group_create
-[availability-zone-overview]: /azure/availability-zones/az-overview
+[availability-zone-overview]: ../availability-zones/az-overview.md
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-region-availability.md
For information on troubleshooting container instance deployment, see [Troublesh
[azure-support]: https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
-[az-region-support]: /azure/availability-zones/az-overview#regions
+[az-region-support]: ../availability-zones/az-overview.md#regions
container-registry Pull Images From Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/pull-images-from-connected-registry.md
The command will return details about the newly generated token including passwo
## Update the connected registry with the client token
-Use the [az acr connected-registry update][az-acr-connected-registry-update] command to update the connected registry with the newly created client token.
+Use the az acr connected-registry update command to update the connected registry with the newly created client token.
```azurecli az acr connected-registry update \
docker pull <IP_address_or_FQDN_of_connected_registry>:<port>/hello-world
[az-acr-scope-map-create]: /cli/azure/acr/token/#az_acr_token_create [az-acr-token-create]: /cli/azure/acr/token/#az_acr_token_create [az-acr-token-credential-generate]: /cli/azure/acr/token/credential#az_acr_token_credential_generate
-[az-acr-connected-registry-update]: /azure/container-registry/quickstart-connected-registry-cli#az_acr_connected_registry_update]
+[az-acr-connected-registry-update]: ./quickstart-connected-registry-cli.md#az_acr_connected_registry_update
[container-registry-intro]: container-registry-intro.md [quickstart-connected-registry-cli]: quickstart-connected-registry-cli.md
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes:
There are many different ways to provision a dedicated gateway: - [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)-- [Use Azure Cosmos DB's REAT API](https://docs.microsoft.com/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)-- [Azure CLI](https://docs.microsoft.com/cli/azure/cosmosdb/service?view=azure-cli-latest#az_cosmosdb_service_create)-- [ARM template](https://docs.microsoft.com/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest#az_cosmosdb_service_create)
+- [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
- Note: You cannot deprovision a dedicated gateway using ARM templates ## Dedicated gateway in multi-region accounts
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Settings specific to Azure Cosmos DB are available in the **Source Options** tab
**Preferred regions:** Choose the preferred read regions for this process.
-**Change feed (Preview):** If true, you will get data from [Azure Cosmos DB change feed](/azure/cosmos-db/change-feed) which is a persistent record of changes to a container in the order they occur from last run automatically. When you set it true, do not set both **Infer drifted column types** and **Allow schema drift** as true at the same time. For more details, see [Azure Cosmos DB change feed (preview)](#azure-cosmos-db-change-feed-preview).
+**Change feed (Preview):** If true, you will get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md), a persistent record of changes to a container in the order they occur, picked up automatically from the last run. When you set this to true, do not set both **Infer drifted column types** and **Allow schema drift** to true at the same time. For more details, see [Azure Cosmos DB change feed (preview)](#azure-cosmos-db-change-feed-preview).
**Start from beginning (Preview):** If true, you will get an initial load of the full snapshot data in the first run, followed by capturing changed data in subsequent runs. If false, the initial load is skipped in the first run, and only changed data is captured in subsequent runs. The setting is aligned with the same setting name in the [Cosmos DB reference](https://github.com/Azure/azure-cosmosdb-spark/wiki/Configuration-references#reading-cosmosdb-collection-change-feed). For more details, see [Azure Cosmos DB change feed (preview)](#azure-cosmos-db-change-feed-preview).
When migrating from a relational database e.g. SQL Server to Azure Cosmos DB, co
## Azure Cosmos DB change feed (preview)
-Azure Data Factory can get data from [Azure Cosmos DB change feed](/azure/cosmos-db/change-feed) by enabling it in the mapping data flow source transformation. With this connector option, you can read change feeds and apply transformations before loading transformed data into destination datasets of your choice. You do not have to use Azure functions to read the change feed and then write custom transformations. You can use this option to move data from one container to another, prepare change feed driven material views for fit purpose or automate container backup or recovery based on change feed, and enable many more such use cases using visual drag and drop capability of Azure Data Factory.
+Azure Data Factory can get data from [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) by enabling it in the mapping data flow source transformation. With this connector option, you can read change feeds and apply transformations before loading transformed data into destination datasets of your choice. You do not have to use Azure Functions to read the change feed and then write custom transformations. You can use this option to move data from one container to another, prepare change-feed-driven materialized views fit for purpose, automate container backup or recovery based on the change feed, and enable many more such use cases using the visual drag-and-drop capability of Azure Data Factory.
Make sure you keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically pick up changed data from the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or captures changes only from that point on.
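The checkpoint rule above can be illustrated with a minimal sketch. ADF's internal checkpoint format is not public, so the store and token names here are hypothetical; the point is only that the checkpoint is keyed by pipeline and activity name, so renaming either one misses the stored entry and the run starts over.

```python
# Hypothetical illustration of the checkpoint rule described above:
# the change-feed checkpoint is recorded under the pipeline and
# activity names, so renaming either resets the starting point.

checkpoints: dict = {}

def run_activity(pipeline: str, activity: str) -> str:
    """Return where this run starts reading the change feed."""
    key = (pipeline, activity)
    start = checkpoints.get(key, "beginning")
    checkpoints[key] = "token-after-run"  # persist progress for the next run
    return start

assert run_activity("pl1", "copyChanges") == "beginning"         # first run
assert run_activity("pl1", "copyChanges") == "token-after-run"   # resumes
assert run_activity("pl1", "copyChangesRenamed") == "beginning"  # rename resets
```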
data-factory Deploy Linked Arm Templates With Vsts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/deploy-linked-arm-templates-with-vsts.md
The scenario we walk through here is to deploy VNet with a Network Security Gro
## Create an Azure Storage account
-1. Log in to the Azure portal and create an Azure Storage account following the steps documented [here](/azure/storage/common/storage-account-create?tabs=azure-portal).
+1. Log in to the Azure portal and create an Azure Storage account following the steps documented [here](../storage/common/storage-account-create.md?tabs=azure-portal).
1. Once deployment is complete, navigate to the storage account and select **Shared access signature**. Select Service, Container, and Object for the **Allowed resource types**. Then select **Generate SAS and connection string**. Copy the SAS token and keep it available since we will use it later. :::image type="content" source="media\deploy-linked-arm-templates-with-vsts\storage-account-generate-sas-token.png" alt-text="Shows an Azure Storage Account in the Azure portal with Shared access signature selected." lightbox="media\deploy-linked-arm-templates-with-vsts\storage-account-generate-sas-token.png":::
The scenario we walk through here is to deploy VNet with a Network Security Gro
1. Save the release pipeline and trigger a release. ## Next steps-- [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
+- [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
databox Data Box Customer Managed Encryption Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-customer-managed-encryption-key-portal.md
If you receive any errors related to your customer-managed key, use the followin
| SsemUserErrorKekUserIdentityNotFound | Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory. <br> Note: This error can occur when a user identity is deleted from Azure.| Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](#enable-key). | | SsemUserErrorUserAssignedIdentityAbsent | Could not fetch the passkey as the customer-managed key could not be found. | Could not access the customer-managed key. Either the User Assigned Identity (UAI) associated with the key is deleted or the UAI type has changed. | | SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key, but key access has not been granted or has been revoked, or the key vault couldn't be accessed because a firewall is enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If the key vault has a firewall enabled, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). |
-| SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](/azure/key-vault/keys/about-keys-details). |
+| SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](../key-vault/keys/about-keys-details.md). |
| SsemUserErrorSoftDeleteAndPurgeProtectionNotEnabled | Key vault does not have soft delete or purge protection enabled. | Ensure that both soft delete and purge protection are enabled on the key vault. | | SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0) in PowerShell. | | SsemUserErrorKeyVaultUrlWithInvalidScheme | Only HTTPS is supported for passing the key vault URI. | Pass the key vault URI over HTTPS. |
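The last two error rows above (invalid key vault URI, HTTPS-only scheme) amount to a simple client-side pre-flight check. A minimal sketch, using a hypothetical helper that is not part of any Azure SDK:

```python
# Hypothetical pre-flight check matching the last two error rows above:
# the key vault URI must be well-formed and must use HTTPS.
from urllib.parse import urlparse

def validate_key_vault_uri(uri: str) -> str:
    parsed = urlparse(uri)
    if not parsed.netloc:
        raise ValueError("SsemUserErrorInvalidKeyVaultUrl: malformed URI")
    if parsed.scheme != "https":
        raise ValueError("SsemUserErrorKeyVaultUrlWithInvalidScheme: HTTPS required")
    return uri

validate_key_vault_uri("https://myvault.vault.azure.net/")  # accepted
```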
If you receive any errors related to your customer-managed key, use the followin
## Next steps - [What is Azure Key Vault?](../key-vault/general/overview.md)-- [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md)
+- [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md)
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
You cannot move a virtual network to another resource group or subscription when
### Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)
-Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in Public Preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](/azure/firewall-manager/configure-ddos)
+Azure Firewall Manager is a platform to manage and protect your network resources at scale. You can associate your virtual networks with a DDoS protection plan within Azure Firewall Manager. This functionality is currently available in Public Preview. See [Configure an Azure DDoS Protection Plan using Azure Firewall Manager](../firewall-manager/configure-ddos.md).
:::image type="content" source="/azure/firewall-manager/media/configure-ddos/ddos-protection.png" alt-text="Screenshot showing virtual network with DDoS Protection Plan":::
If you want to delete a DDoS protection plan, you must first dissociate all virt
To learn how to view and configure telemetry for your DDoS protection plan, continue to the tutorials. > [!div class="nextstepaction"]
-> [View and configure DDoS protection telemetry](telemetry.md)
+> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 01/10/2022 Last updated : 01/13/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](defender-for-resource-manager-introduction.md) | Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|-||:-:|-|
-| **Azure Resource Manager operation from suspicious IP address (Preview)**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
-| **Azure Resource Manager operation from suspicious proxy IP address (Preview)**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | - | High |
-| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzureDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | - | High |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(ARM_MicroBurst.AzVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | Execution | High |
-| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(RM_MicroBurst.AzureRmVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | MicroBurst's exploitation toolkit was used to extract keys from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High |
-| **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) | MicroBurst's exploitation toolkit was used to extract keys to your storage accounts. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | Collection | High |
-| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | MicroBurst's exploitation toolkit was used to extract secrets from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High |
+|-||:--:|-|
+| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
+| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | - | High |
+| **MicroBurst exploitation toolkit used to enumerate resources in your subscriptions**<br>(ARM_MicroBurst.AzureDomainInfo) | MicroBurst's Information Gathering module was run on your subscription. This tool can be used to discover resources, permissions and network structures. This was detected by analyzing the Azure Activity logs and resource management operations in your subscription | - | High |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(ARM_MicroBurst.AzVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | Execution | High |
+| **MicroBurst exploitation toolkit used to execute code on your virtual machine**<br>(RM_MicroBurst.AzureRmVMBulkCMD) | MicroBurst's exploitation toolkit was used to execute code on your virtual machines. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | MicroBurst's exploitation toolkit was used to extract keys from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High |
+| **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) | MicroBurst's exploitation toolkit was used to extract keys to your storage accounts. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | Collection | High |
+| **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | MicroBurst's exploitation toolkit was used to extract secrets from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High |
| **Permissions granted for an RBAC role in an unusual way for your Azure environment (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) | Microsoft Defender for Resource Manager detected an RBAC role assignment that's unusual when compared with other assignments performed by the same assigner / performed for the same assignee / in your tenant due to the following anomalies: assignment time, assigner location, assigner, authentication method, assigned entities, client software used, assignment extent. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to grant permissions to an additional user account they own.|Lateral Movement, Defense Evasion|Medium|
-| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from AzureAD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | - | High |
-| **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
-| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | Users activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
+| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from AzureAD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | - | High |
+| **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
+| **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **PowerZure exploitation toolkit used to execute a Runbook in your subscription**<br>(ARM_PowerZure.StartRunbook) | PowerZure exploitation toolkit was used to execute a Runbook. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **PowerZure exploitation toolkit used to extract Runbooks content**<br>(ARM_PowerZure.AzureRunbookContent) | PowerZure exploitation toolkit was used to extract Runbook content. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High |
+| **PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses) | User activity from an IP address that has been identified as an anonymous proxy has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
+| **PREVIEW - Activity from infrequent country**<br>(ARM.MCAS_ActivityFromInfrequentCountry) | Activity from a location that wasn't recently or ever visited by any user in the organization has occurred.<br>This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
+| **PREVIEW - Azurite toolkit run detected**<br>(ARM_Azurite) | A known cloud-environment reconnaissance toolkit run has been detected in your environment. The tool [Azurite](https://github.com/mwrlabs/Azurite) can be used by an attacker (or penetration tester) to map your subscriptions' resources and identify insecure configurations. | Collection | High |
| **PREVIEW - Impossible travel activity**<br>(ARM.MCAS_ImpossibleTravelActivity) | Two user activities (in a single or multiple sessions) have occurred, originating from geographically distant locations. This occurs within a time period shorter than the time it would have taken the user to travel from the first location to the second. This indicates that a different user is using the same credentials.<br>This detection uses a machine learning algorithm that ignores obvious false positives contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The detection has an initial learning period of seven days, during which it learns a new user's activity pattern.<br>Requires an active Microsoft Defender for Cloud Apps license. | - | Medium |
-| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected a suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low|
-| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) | Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | Persistence, Credential Access | High |
-| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| | | | |
+| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
+| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
+| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
+| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
+| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
+| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
+| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
+| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
+| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
+| **Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_MicroBurst.RunCodeOnBehalf) | Usage of MicroBurst exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | Persistence, Credential Access | High |
+| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
+| | | | |
## <a name="alerts-dns"></a>Alerts for DNS
Microsoft Defender for Containers provides security alerts on the cluster level
| **Authenticated access from a Tor exit node**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial access | High/Medium |
| **Access from an unusual location to a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
| **Unusual unauthenticated access to a storage container**<br>(Storage.Blob_AnonymousAccessAnomaly) | This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s).<br>Applies to: Azure Blob Storage | Collection | Medium |
-| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-is-hash-reputation-analysis-for-malware).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Lateral Movement | High |
+| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-kind-of-alerts-does-microsoft-defender-for-storage-provide).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Lateral Movement | High |
| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Medium |
| **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Low |
| **Unusual access inspection in a storage account**<br>(Storage.Blob_AccessInspectionAnomaly<br>Storage.Files_AccessInspectionAnomaly) | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
To assess your machines for vulnerabilities, you can use one of the following so
1. Select **Apply** and **Save**.
-1. To view the findings for **all** supported vulnerability assessment solutions, see the **Vulnerabilities in your virtual machines should be remediated.** recommendation.
+1. To view the findings for **all** supported vulnerability assessment solutions, see the **Machines should have vulnerability findings resolved** recommendation.
Learn more in [View and remediate findings from vulnerability assessment solutions on your machines](remediate-vulnerability-findings-vm.md).
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-storage-exclude.md
+
+ Title: Microsoft Defender for Storage - excluding a storage account
+description: Excluding a specific storage account from a subscription with Microsoft Defender for Storage enabled.
+Last updated : 01/16/2022
+# Exclude a storage account from Microsoft Defender for Storage protections
+
+> [!CAUTION]
+> Excluding resources from advanced threat protection is not recommended and leaves your cloud workload exposed.
+
+When you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) on a subscription, all existing Azure Storage accounts will be protected and any storage resources added to that subscription in the future will also be automatically protected.
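+As a sketch (assuming the Az.Security PowerShell module and an authenticated session), enabling the plan for a whole subscription can look like this:
+
```azurepowershell
# Enable the Microsoft Defender for Storage plan on the current subscription
Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard"
```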
+
+If you need to exempt a specific Azure Storage account from this Defender plan, use the instructions on this page.
+
+> [!TIP]
+> We recommend enabling [Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md) for any accounts with unprotected Azure Storage resources. Defender for Resource Manager automatically monitors your organization's resource management operations, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients.
++
+## Exclude a specific storage account
+
+To exclude specific storage accounts from Microsoft Defender for Storage when the plan is enabled on a subscription:
+
+### [**PowerShell**](#tab/enable-storage-protection-ps)
+
+### Use PowerShell to exclude an Azure Storage account
+
+1. If you don't have the Azure Az PowerShell module installed, install it using [the instructions from the Azure PowerShell documentation](/powershell/azure/install-az-ps).
+
+1. Using an authenticated account, connect to Azure with the ``Connect-AzAccount`` cmdlet, as explained in [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+1. Define the AzDefenderPlanAutoEnable tag on the storage account with the ``Update-AzTag`` cmdlet (replace ``<resourceID>`` with the resource ID of the relevant storage account):
+
+ ```azurepowershell
+ Update-AzTag -ResourceId <resourceID> -Tag @{"AzDefenderPlanAutoEnable" = "off"} -Operation Merge
+ ```
+
    If you skip this step, your untagged resources will continue receiving daily updates from the subscription-level enablement policy. That policy will enable Defender for Storage again on the account.
+
+ > [!TIP]
    > Learn more about tags in [Use tags to organize your Azure resources and management hierarchy](/azure/azure-resource-manager/management/tag-resources).
+
+1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the ``Disable-AzSecurityAdvancedThreatProtection`` cmdlet (using the same resource ID):
+
+ ```azurepowershell
+ Disable-AzSecurityAdvancedThreatProtection -ResourceId <resourceId>
+ ```
+
+ [Learn more about this cmdlet](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection).
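+1. (Optional) Verify the change with the ``Get-AzSecurityAdvancedThreatProtection`` cmdlet (using the same resource ID); for an excluded account, the ``IsEnabled`` property of the returned setting should be ``False``:

    ```azurepowershell
    Get-AzSecurityAdvancedThreatProtection -ResourceId <resourceId>
    ```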
++
+### [**Azure CLI**](#tab/enable-storage-protection-cli)
+
+### Use Azure CLI to exclude an Azure Storage account
+
+1. If you don't have Azure CLI installed, install it using [the instructions from the Azure CLI documentation](/cli/azure/install-azure-cli).
+
+1. Using an authenticated account, connect to Azure with the ``login`` command as explained in [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli) and enter your account credentials when prompted:
+
+ ```azurecli
+ az login
+ ```
+
+1. Define the AzDefenderPlanAutoEnable tag on the storage account with the ``tag update`` command (replace ``MyResourceId`` with the resource ID of the relevant storage account):
+
+ ```azurecli
+ az tag update --resource-id MyResourceId --operation merge --tags AzDefenderPlanAutoEnable=off
+ ```
+
    If you skip this step, your untagged resources will continue receiving daily updates from the subscription-level enablement policy. That policy will enable Defender for Storage again on the account.
+
+ > [!TIP]
+ > Learn more about tags in [az tag](/cli/azure/tag).
+
+1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the ``security atp storage update`` command (replace ``MyResourceGroup`` and ``MyStorageAccount`` with the names of the relevant resource group and storage account):
+
+ ```azurecli
+ az security atp storage update --resource-group MyResourceGroup --storage-account MyStorageAccount --is-enabled false
+ ```
+
+ [Learn more about this command](/cli/azure/security/atp/storage).
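+1. (Optional) Verify the change with the ``security atp storage show`` command; for an excluded account, the ``isEnabled`` field in the output should be ``false``:

    ```azurecli
    az security atp storage show --resource-group MyResourceGroup --storage-account MyStorageAccount
    ```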
++
+### [**Azure portal**](#tab/enable-storage-protection-portal)
+
+### Use the Azure portal to exclude an Azure Storage account
+
+1. Define the AzDefenderPlanAutoEnable tag on the storage account:
+
+ 1. From the Azure portal, open the storage account and select the **Tags** page.
+ 1. Enter the tag name **AzDefenderPlanAutoEnable** and set the value to **off**.
+ 1. Select **Apply**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/define-tag-storage-account.png" alt-text="Screenshot of how to add a tag to a storage account in the Azure portal." lightbox="media/defender-for-storage-exclude/define-tag-storage-account.png":::
+
+1. Verify that the tag has been added successfully. It should look similar to this:
+
+ :::image type="content" source="media/defender-for-storage-exclude/define-tag-storage-account-success.png" alt-text="Screenshot of a tag on a storage account in the Azure portal." lightbox="media/defender-for-storage-exclude/define-tag-storage-account-success.png":::
+
+1. Disable and then enable the Microsoft Defender for Storage on the subscription:
+
+ 1. From the Azure portal, open **Microsoft Defender for Cloud**.
+ 1. Open **Environment settings** > select the relevant subscription > **Defender plans** > toggle the Defender for Storage plan off > select **Save** > turn it back on > select **Save**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/defender-plan-toggle.png" alt-text="Screenshot of disabling and enabling the Microsoft Defender for Storage plan from Microsoft Defender for Cloud." lightbox="media/defender-for-storage-exclude/defender-plan-toggle.png":::
++++
+## Exclude an Azure Databricks Storage account
+
+When Defender for Storage is enabled on a subscription, it's not currently possible to exclude a Storage account if it belongs to an Azure Databricks workspace.
+
+Instead, you can disable Defender for Storage on the subscription and enable Defender for Storage for each Azure Storage account from the **Security** page:
+++
+## Next steps
+
+- Explore the [Microsoft Defender for Storage - Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724)
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Storage.
Previously updated : 11/09/2021
Last updated : 01/16/2022
# Introduction to Microsoft Defender for Storage
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It utilizes the advanced capabilities of security AI and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to provide contextual security alerts and recommendations.
+**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+
+You can enable **Microsoft Defender for Storage** at either the subscription level (recommended) or the resource level.
+
+Defender for Storage continually analyzes the telemetry stream generated by the Azure Blob Storage and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+
+Analyzed telemetry of Azure Blob Storage includes operation types such as **Get Blob**, **Put Blob**, **Get Container ACL**, **List Blobs**, and **Get Blob Properties**. Examples of analyzed Azure Files operation types include **Get File**, **Create File**, **List Files**, **Get File Properties**, and **Put Range**.
-Security alerts are triggered when anomalous activities occur. These alerts appear in Microsoft Defender for Cloud, and are also sent via email to subscription administrators, with details of suspicious activity and recommendations for how to investigate and remediate threats.
+Defender for Storage doesn't access the Storage account data and has no impact on its performance.
## Availability
Security alerts are triggered when anomalous activities occur. These alerts appe
|-|:-|
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/)<br>[Azure Files](../storage/files/storage-files-introduction.md)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
+|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
||| ## What are the benefits of Microsoft Defender for Storage?
-Microsoft Defender for Storage provides:
+Defender for Storage provides:
-- **Azure-native security** - With 1-click enablement, Defender for Storage protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage provides centralized security across all data assets managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.
-- **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage cover the top storage threats such as anonymous access, compromised credentials, social engineering, privilege abuse, and malicious content.
+- **Azure-native security** - With 1-click enablement, Defender for Storage protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage provides centralized security across all data assets that are managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.
+- **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage cover the top storage threats such as unauthenticated access, compromised credentials, social engineering attacks, data exfiltration, privilege abuse, and malicious content.
- **Response at scale** - Defender for Cloud's automation tools make it easier to prevent and respond to identified threats. Learn more in [Automate responses to Defender for Cloud triggers](workflow-automation.md).

:::image type="content" source="media/defender-for-storage-introduction/defender-for-storage-high-level-overview.png" alt-text="High-level overview of the features of Microsoft Defender for Storage.":::
+## Security threats in cloud-based storage services
+
+Microsoft security researchers have analyzed the attack surface of storage services. Storage accounts can be subject to data corruption, exposure of sensitive content, malicious content distribution, data exfiltration, unauthorized access, and more.
+
+The potential security risks are described in the [threat matrix for cloud-based storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/) and are based on the [MITRE ATT&CK® framework](https://attack.mitre.org/techniques/enterprise/), a knowledge base for the tactics and techniques employed in cyber attacks.
+
+
## What kind of alerts does Microsoft Defender for Storage provide?
-Security alerts are triggered when there's:
+Security alerts are triggered for the following scenarios (typically 1-2 hours after the event):
-- **Suspicious access patterns** - such as successful access from a Tor exit node or from an IP considered suspicious by Microsoft Threat Intelligence
-- **Suspicious activities** - such as anomalous data extraction or unusual change of access permissions
-- **Upload of malicious content** - such as potential malware files (based on hash reputation analysis) or hosting of phishing content
+|Type of threat | Description |
+|||
+|**Unusual access to an account** | For example, access from a TOR exit node, suspicious IP addresses, unusual applications, unusual locations, and anonymous access without authentication. |
+|**Unusual behavior in an account** | Behavior that deviates from a learned baseline, such as a change of access permissions in an account, unusual access inspection, unusual data exploration, unusual deletion of blobs/files, or unusual data extraction. |
+|**Hash reputation based malware detection** | Detection of known malware based on the full blob/file hash. This can help detect ransomware, viruses, spyware, and other malware uploaded to an account, prevent it from entering the organization, and stop it from spreading to more users and resources. See also [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis). |
+|**Unusual file uploads** | Unusual cloud service packages and executable files that have been uploaded to an account. |
+| **Public visibility** | Potential break-in attempts by scanning containers and pulling potentially sensitive data from publicly accessible containers. |
+| **Phishing campaigns** | When content that's hosted on Azure Storage is identified as part of a phishing attack that's impacting Microsoft 365 users. |
-Alerts include details of the incident that triggered them, as well as recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool.
+Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel, any third-party SIEM, or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
> [!TIP]
-> It's a best practice to [configure Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md?tabs=azure-security-center) on the subscription level, but you may also [configure it on individual storage accounts](../storage/common/azure-defender-storage-configure.md?tabs=azure-portal).
+> For a comprehensive list of all Defender for Storage alerts, see the [alerts reference page](alerts-reference.md#alerts-azurestorage). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
-## What is hash reputation analysis for malware?
+### Limitations of hash reputation analysis
-To determine whether an uploaded file is suspicious, Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). The threat protection tools don't scan the uploaded files, rather they examine the storage logs and compare the hashes of newly uploaded files with those of known viruses, trojans, spyware, and ransomware.
+- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather they analyze the telemetry generated from the Blob Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
-When a file is suspected to contain malware, Defender for Cloud displays an alert and can optionally email the storage owner for approval to delete the suspicious file. To set up this automatic removal of files that hash reputation analysis indicates contain malware, deploy a [workflow automation to trigger on alerts that contain "Potential malware uploaded to a storage account"](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-respond-to-potential-malware-uploaded-to-azure-storage/ba-p/1452005).
+- **Hash reputation analysis isn't supported for all file protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file shares and blobs created with [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
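+As a minimal sketch of the hash reputation idea (an illustration only, not Defender for Storage's actual implementation; the known-bad set below is seeded with the widely published SHA-256 of the public EICAR antivirus test file):
+
+```python
+import hashlib
+
+# Illustrative known-bad hash set; the real feed comes from Microsoft Threat
+# Intelligence. The entry below is the published SHA-256 of the EICAR test file.
+KNOWN_MALWARE_SHA256 = {
+    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
+}
+
+def is_known_malware(blob_content: bytes) -> bool:
+    """Hash the full blob content and look it up in the known-bad set."""
+    return hashlib.sha256(blob_content).hexdigest() in KNOWN_MALWARE_SHA256
+```
+
+Because the comparison needs the hash of the full content, operations whose telemetry lacks that hash (such as Put Block uploads or SMB file-share writes) can't be checked this way.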
-> [!NOTE]
-> To enable Defender for Cloud's threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads.
->
-> You can enable **Microsoft Defender for Storage** at either the subscription level or resource level.
+> [!TIP]
+> When a file is suspected to contain malware, Defender for Cloud displays an alert and can optionally email the storage owner for approval to delete the suspicious file. To set up this automatic removal of files that hash reputation analysis indicates contain malware, deploy a [workflow automation to trigger on alerts that contain "Potential malware uploaded to a storage account"](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-respond-to-potential-malware-uploaded-to-azure-storage/ba-p/1452005).
++
+## Enable Defender for Storage
+
+When you enable this Defender plan on a subscription, all existing Azure Storage accounts are protected, and any storage resources added to the subscription in the future are protected automatically as well.
+
+You can enable Defender for Storage in any of several ways, described in [Set up Microsoft Defender for Cloud](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) in the Azure Storage documentation.
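+Programmatically, enabling the plan amounts to setting the `StorageAccounts` pricing tier to `Standard` on the subscription. The following sketch calls the Microsoft.Security pricings REST API with only the standard library; the subscription ID, bearer token (for example, from `az account get-access-token`), and the `2018-06-01` API version are assumptions to verify against the current REST reference:
+
+```python
+import json
+import urllib.request
+
+def storage_pricing_url(subscription: str) -> str:
+    """Build the Microsoft.Security/pricings URL for the Storage plan."""
+    return (
+        f"https://management.azure.com/subscriptions/{subscription}"
+        "/providers/Microsoft.Security/pricings/StorageAccounts"
+        "?api-version=2018-06-01"
+    )
+
+def enable_defender_for_storage(subscription: str, token: str):
+    """PUT the Standard pricing tier, enabling the plan subscription-wide."""
+    body = json.dumps({"properties": {"pricingTier": "Standard"}}).encode("utf-8")
+    req = urllib.request.Request(
+        storage_pricing_url(subscription),
+        data=body,
+        method="PUT",
+        headers={
+            "Authorization": f"Bearer {token}",
+            "Content-Type": "application/json",
+        },
+    )
+    return urllib.request.urlopen(req)  # raises on non-2xx responses
+```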
## Trigger a test alert for Microsoft Defender for Storage
To test the security alerts from Microsoft Defender for Storage in your environm
:::image type="content" source="media/defender-for-storage-introduction/tor-access-alert-storage.png" alt-text="Security alert regarding access from a Tor exit node.":::

+
+
+## FAQ - Microsoft Defender for Storage
+
+- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
+- [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
+- [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
+
+### How do I estimate charges at the account level?
+
+To optimize costs, you might want to exclude specific Storage accounts associated with high traffic from Defender for Storage protections. To get an estimate of Defender for Storage costs, use the [Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724).
+
+### Can I exclude a specific Azure Storage account from a protected subscription?
+
+To exclude a specific Storage account when Defender for Storage is enabled on a subscription, follow the instructions in [Exclude a storage account from Microsoft Defender for Storage protections](defender-for-storage-exclude.md).
+
+### How do I configure automatic responses for security alerts?
+
+Use [workflow automation](workflow-automation.md) to trigger automatic responses to Defender for Cloud security alerts.
+
+For example, you can set up automation to open tasks or tickets for specific personnel or teams in an external task management system.
+
+> [!TIP]
+> Explore the automations available from the Defender for Cloud community pages: [ServiceNow automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Create-SNOWIncfromASCAlert), [Jira automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Open-JIRA-Ticket), [Azure DevOps automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Open-DevOpsTaskAlert), [Slack automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Post-SlackMessageAlert) or build your own.
+
+Use automation for automatic responses: define your own, or use ready-made automations from the community (such as removing malicious files upon detection). For more solutions, visit the Microsoft community on GitHub.
+
+
+
## Next steps
-In this article, you learned about Microsoft Defender for Storage.
+In this article, you learned about Microsoft Defender for Storage.
-For related material, see the following articles:
+> [!div class="nextstepaction"]
+> [Enable Defender for Storage](enable-enhanced-security.md)
-- Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from a different security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
-- [How to enable Advanced Defender for Storage](../storage/common/azure-defender-storage-configure.md)
-- [The list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage)
-- [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684)
+- [The full list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage)
+- [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md)
+- [Save Storage telemetry for investigation](../azure-monitor/essentials/diagnostic-settings.md)
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Supported solutions report vulnerability data to the partner's management platfo
1. From Defender for Cloud's menu, open the **Recommendations** page.
-1. Select the recommendation **A vulnerability assessment solution should be enabled on your virtual machines**.
+1. Select the recommendation **Machines should have a vulnerability assessment solution**.
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png" alt-text="The groupings of the machines in the **A vulnerability assessment solution should be enabled on your virtual machines** recommendation page" lightbox="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png":::
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
For a quick overview of threat and vulnerability management, watch this video:
The integration with Microsoft Defender for Cloud doesn't involve any changes at the endpoint level: it takes place in the background between the two platforms.

-- **To manually onboard one or more machines** to threat and vulnerability management, use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)":
+- **To manually onboard one or more machines** to threat and vulnerability management, use the security recommendation "[Machines should have a vulnerability assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)":
:::image type="content" source="media/deploy-vulnerability-assessment-tvm/deploy-vulnerability-assessment-solutions.png" alt-text="Selecting a vulnerability assessment solution from the recommendation":::
The integration with Microsoft Defender for Cloud doesn't involve any changes at
- **To onboard via the REST API**, run PUT/DELETE using this URL: `https://management.azure.com/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/.../providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm?api-version=2015-06-01-preview`
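As a sketch of that REST call in Python (standard library only; the subscription, resource group, machine name, and bearer token are placeholders you must supply, for example via `az account get-access-token`):

```python
import urllib.request

def tvm_assessment_url(subscription: str, resource_group: str, vm_name: str) -> str:
    """Build the serverVulnerabilityAssessments URL shown above."""
    return (
        f"https://management.azure.com/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Compute/virtualMachines/{vm_name}"
        "/providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm"
        "?api-version=2015-06-01-preview"
    )

def onboard(subscription: str, resource_group: str, vm_name: str, token: str):
    """PUT onboards the machine; use method='DELETE' to offboard it."""
    req = urllib.request.Request(
        tvm_assessment_url(subscription, resource_group, vm_name),
        method="PUT",
        headers={"Authorization": f"Bearer {token}"},
    )
    return urllib.request.urlopen(req)  # raises on non-2xx responses
```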
-The findings for **all** vulnerability assessment tools are provided in a Defender for Cloud recommendation **Vulnerabilities in your virtual machines should be remediated.**. Learn about how to [View and remediate findings from vulnerability assessment solutions on your VMs](remediate-vulnerability-findings-vm.md)
+The findings for **all** vulnerability assessment tools are provided in the Defender for Cloud recommendation **Vulnerabilities in your virtual machines should be remediated**. Learn how to [view and remediate findings from vulnerability assessment solutions on your VMs](remediate-vulnerability-findings-vm.md).
## Next steps
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
1. From Defender for Cloud's menu, open the **Recommendations** page.
-1. Select the recommendation **A vulnerability assessment solution should be enabled on your virtual machines**.
+1. Select the recommendation **Machines should have a vulnerability assessment solution**.
:::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png" alt-text="The groupings of the machines in the recommendation page." lightbox="./media/deploy-vulnerability-assessment-vm/recommendation-page-machine-groupings.png":::
The vulnerability scanner extension works as follows:
> - If you haven't got a third-party vulnerability scanner configured, you won't be offered the opportunity to deploy it.
> - If your selected machines aren't protected by Microsoft Defender for servers, the Defender for Cloud integrated vulnerability scanner option won't be available.
- :::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-remediation-options-builtin.png" alt-text="The options for which type of remediation flow you want to choose when responding to the recommendation **A vulnerability assessment solution should be enabled on your virtual machines** recommendation page":::
+ :::image type="content" source="./media/deploy-vulnerability-assessment-vm/recommendation-remediation-options-builtin.png" alt-text="The options for which type of remediation flow you want to choose when responding to the **Machines should have a vulnerability assessment solution** recommendation page":::
1. Choose the recommended option, **Deploy integrated vulnerability scanner**, and **Proceed**.
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/export-to-siem.md
Another alternative for investigating Defender for Cloud alerts in Microsoft Sen
## Stream alerts with Azure Monitor
-To stream alerts into **ArcSight**, **Splunk**, **SumoLogic**, Syslog servers, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions. connect Defender for Cloud with Azure monitor via Azure Event Hubs:
+To stream alerts into **ArcSight**, **Splunk**, **QRadar**, **SumoLogic**, **Syslog servers**, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions, connect Defender for Cloud with Azure Monitor via Azure Event Hubs:
-1. Enable [continuous export](continuous-export.md) to stream Defender for Cloud alerts into a dedicated Azure Event Hub at the subscription level. To do this at the Management Group level using Azure Policy, see [Create continuous export automation configurations at scale](continuous-export.md?tabs=azure-policy#configure-continuous-export-at-scale-using-the-supplied-policies)
+> [!NOTE]
+> To stream alerts at the tenant level, use this Azure policy and set the scope at the root management group (you'll need permissions for the root management group as explained in [Defender for Cloud permissions](permissions.md)): [Deploy export to event hub for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb).
+
+1. Enable [continuous export](continuous-export.md) to stream Defender for Cloud alerts into a dedicated event hub at the subscription level. To do this at the Management Group level using Azure Policy, see [Create continuous export automation configurations at scale](continuous-export.md?tabs=azure-policy#configure-continuous-export-at-scale-using-the-supplied-policies)
-1. [Connect the Azure Event hub to your preferred solution using Azure Monitor's built-in connectors](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#partner-tools-with-azure-monitor-integration).
+1. [Connect the event hub to your preferred solution using Azure Monitor's built-in connectors](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#partner-tools-with-azure-monitor-integration).
-1. Optionally, stream the raw logs to the Azure Event Hub and connect to your preferred solution. Learn more in [Monitoring data available](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#monitoring-data-available).
+1. Optionally, stream the raw logs to the event hub and connect to your preferred solution. Learn more in [Monitoring data available](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md#monitoring-data-available).
-To view the event schemas of the exported data types, visit the [Event Hub event schemas](https://aka.ms/ASCAutomationSchemas).
+To view the event schemas of the exported data types, visit the [Event hub event schemas](https://aka.ms/ASCAutomationSchemas).
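On the consuming side, a receiver typically deserializes each event's body and routes alerts by their fields. A minimal sketch, assuming a simplified JSON payload (the field names here are illustrative; consult the event schemas linked above for the real shape):

```python
import json

def high_severity_alerts(event_body: bytes) -> list:
    """Parse one exported batch (hypothetical shape) and keep High-severity alerts."""
    alerts = json.loads(event_body)
    return [a for a in alerts if a.get("Severity") == "High"]

# Example batch with the assumed shape:
batch = json.dumps([
    {"AlertDisplayName": "Access from a Tor exit node", "Severity": "High"},
    {"AlertDisplayName": "Unusual data extraction", "Severity": "Low"},
]).encode("utf-8")
```

In a real pipeline, `event_body` would be the payload delivered by the event hub to your SIEM connector or custom receiver.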
## Other streaming options
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in January include:
+- [Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix](#microsoft-defender-for-resource-manager-updated-with-new-alerts-and-greater-emphasis-on-high-risk-operations-mapped-to-mitre-attck-matrix)
- [Recommendations to enable Microsoft Defender plans on workspaces (in preview)](#recommendations-to-enable-microsoft-defender-plans-on-workspaces-in-preview)
- [Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)](#auto-provision-log-analytics-agent-to-azure-arc-enabled-machines-preview)
- [Deprecated the recommendation to classify sensitive data in SQL databases](#deprecated-the-recommendation-to-classify-sensitive-data-in-sql-databases)
Updates in January include:
- [Renamed two recommendations](#renamed-two-recommendations)
+### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
+
+The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers, so we recommend that security operations teams monitor the resource management layer closely.
+
+Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients. Defender for Cloud runs advanced security analytics to detect threats and alerts you about suspicious activity.
+
+The plan's protections greatly enhance an organization's resiliency against attacks from threat actors and significantly increase the number of Azure resources protected by Defender for Cloud.
+
+In December 2020, we introduced the preview of Defender for Resource Manager, and in May 2021 the plan was released for general availability.
+
+With this update, we've comprehensively revised the focus of the Microsoft Defender for Resource Manager plan. The updated plan includes many **new alerts focused on identifying suspicious invocation of high-risk operations**. These new alerts provide extensive monitoring for attacks across the *complete* [MITRE ATT&CK® matrix for cloud-based techniques](https://attack.mitre.org/matrices/enterprise/cloud/).
+
+This matrix covers the following range of potential intentions of threat actors who may be targeting your organization's resources: *Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Exfiltration, and Impact*.
+
+The new alerts for this Defender plan cover these intentions as shown in the following table.
+
+> [!TIP]
+> These alerts also appear in the [alerts reference page](alerts-reference.md).
+
+| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
+|-|--|:-:|-|
+| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
+| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
+| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
+| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
+| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
+| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
+| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
+|||||
+
+In addition, these two alerts from this plan have come out of preview:
+
+| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
+|-|--|:-:|-|
+| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
+| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
+|||||
+
+
### Recommendations to enable Microsoft Defender plans on workspaces (in preview)

To benefit from all of the security features available from [Microsoft Defender for servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
defender-for-cloud Remediate Vulnerability Findings Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md
To view vulnerability assessment findings (from all of your configured scanners)
1. From Defender for Cloud's menu, open the **Recommendations** page.
-1. Select the recommendation **Vulnerabilities in your virtual machines should be remediated**.
+1. Select the recommendation **Machines should have vulnerability findings resolved**.
Defender for Cloud shows you all the findings for all VMs in the currently selected subscriptions. The findings are ordered by severity.
defender-for-iot How To Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-configure-agent-based-solution.md
- Title: Configure Microsoft Defender for IoT agent-based solution
-description: Learn how to configure data collection in Microsoft Defender for IoT agent-based solution
Previously updated : 12/20/2021---
-# Configure Microsoft Defender for IoT agent-based solution
-
-This article describes how to configure data collection in Microsoft Defender for IoT agent-based solution.
-
-## Configure data collection
-
-**To configure data collection in Microsoft Defender for IoT agent-based solution**:
-
-1. Navigate to the Azure portal, and select the IoT Hub that the Defender for IoT is attached to.
-
-1. Select **Defender for IoT > Settings > Data Collection**.
-
- :::image type="content" source="media/how-to-configure-agent-based-solution/data-collection.png" alt-text="Select data collection from the security menu settings.":::
-
-1. Under **Microsoft Defender for IoT**, ensure that **Enable Microsoft Defender for IoT** is enabled.
-
- :::image type="content" source="media/how-to-configure-agent-based-solution/enable-data-collection.png" alt-text="Screenshot showing you how to enable data collection.":::
-
-## Geolocation and IP address handling
-
-To secure your IoT solution, the IP addresses of the incoming and outgoing connections for your IoT devices, IoT Edge, and IoT Hubs are collected and stored by default. This information is essential, and is used to detect abnormal connectivity from suspicious IP address sources. For example, it can detect attempts to establish connections from an IP address source of a known botnet, or from an IP address source outside your geolocation. The Defender for IoT service offers the flexibility to enable and disable the collection of the IP address data at any time.
-
-**To enable, or disable the collection of IP address data**:
-
-1. Open your IoT Hub, and then select **Settings** from the **Security** menu.
-
-1. Select the **Data Collection** screen and modify the geolocation, and IP address handling settings to suit your needs.
-
-## Log Analytics creation
-
-Defender for IoT allows you to store security alerts, recommendations, and raw security data in your Log Analytics workspace. Log Analytics ingestion in IoT Hub is set to **off** by default in the Defender for IoT solution. It is possible to attach Defender for IoT to a Log Analytics workspace, and to store the security data there as well.
-
-There are two types of information stored by default in your Log Analytics workspace by Defender for IoT:
-
-- Security alerts.
-
-- Recommendations.
-
-You can also choose to store an additional information type, `raw events`.
-
-> [!Note]
-> Storing `raw events` in Log Analytics carries additional storage costs.
-
-**To enable Log Analytics to work with micro agent**:
-
-1. Navigate to **Workspace configuration** > **Data Collection**, and switch the toggle to **On**.
-
-1. Create a new Log Analytics workspace, or attach an existing one.
-
-1. Verify that the **Access to raw security data** option is selected.
-
- :::image type="content" source="media/how-to-configure-agent-based-solution/data-settings.png" alt-text="Ensure Access to raw security data is selected.":::
-
-1. Select a subscription from the drop down menu.
-
-1. Select a workspace from the dropdown menu.
-
-1. Select **Save**.
-
-Every month, the first 5 gigabytes of data ingested per customer to the Azure Log Analytics service is free. Every gigabyte of data ingested into your Azure Log Analytics workspace is retained at no charge for the first 31 days. For more information on pricing, see [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-**To change the workspace configuration of Log Analytics**:
-
-1. In your IoT Hub, in the **Security** menu, select **Settings**.
-
-1. Select the **Data Collection** screen, and modify the workspace configuration of Log Analytics settings to suit your needs.
-
-**To access your alerts in your Log Analytics workspace after configuration**:
-
-1. Select an alert in Defender for IoT.
-
-1. Select **Investigate alerts in Log Analytics workspace**.
-
-**To access your recommendations in your Log Analytics workspace after configuration**:
-
-1. Select a recommendation in Defender for IoT.
-
-1. Select **Investigate recommendations in Log Analytics workspace**.
-
-For more information on querying data from Log Analytics, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
-
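As a minimal sketch of querying the stored data from the command line (the workspace GUID is a placeholder, and the `SecurityAlert` table name and time window are assumptions to adjust for your environment):

```shell
# Hypothetical workspace ID -- replace with your Log Analytics workspace GUID.
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# Pull recent Defender for IoT security alerts from the workspace.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query "SecurityAlert | where TimeGenerated > ago(7d) | project TimeGenerated, AlertName, AlertSeverity" \
  --output table
```

The same query can be run interactively in the workspace's **Logs** blade.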
-## Turn off Defender for IoT
-
-**To turn the Defender for IoT service on or off for a specific IoT Hub**:
-
-1. In your IoT Hub, in the **Security** menu, select **Settings**.
-
-1. Select the **Data Collection** screen, and modify the workspace configuration of Log Analytics settings to suit your needs.
-
-## Next steps
-
-Advance to the next article to [configure your solution](quickstart-configure-your-solution.md).
defender-for-iot How To Configure Micro Agent Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-configure-micro-agent-twin.md
+
+ Title: Configure a micro agent twin
+description: Learn how to configure a micro agent twin.
+++ Last updated : 01/16/2022++
+# Configure a micro agent twin
+
+Learn how to configure a micro agent twin.
+
+## Prerequisites
+
+- An Azure account. If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+- A Defender for IoT subscription.
+
+- An existing IoT Hub with [a connected device](tutorial-standalone-agent-binary-installation.md) and [a micro agent module twin](tutorial-create-micro-agent-module-twin.md).
+
+## Micro agent configuration
+
+**To view and update the micro agent twin configuration**:
+
+1. Navigate to the [Azure portal](https://ms.portal.azure.com).
+
+1. Search for, and select **IoT Hub**.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/iot-hub.png" alt-text="Screenshot of searching for the IoT hub in the search bar.":::
+
+1. Select your IoT Hub from the list.
+
+1. Under the Device management section, select **Devices**.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/devices.png" alt-text="Screenshot of the device management section of the IoT hub.":::
+
+1. Select your device from the list.
+
+1. Select the module ID.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/module-id.png" alt-text="Screenshot of the device's module ID selection screen.":::
+
+1. In the Module Identity Details screen, select **Module Identity Twin**.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/module-identity-twin.png" alt-text="Screenshot of the Module Identity Details screen.":::
+
+1. Change the value of any field by adding the field to the `"desired"` section with the new value.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/desired.png" alt-text="Screenshot of the sample output of the module identity twin.":::
+
    If the agent successfully set the new configuration, the value of `"latest_state"` under the `"reported"` section shows `"success"`.
+
+ :::image type="content" source="media/tutorial-micro-agent-configuration/reported-success.png" alt-text="Screenshot of a successful configuration change.":::
+
    If the agent fails to set the new configuration, the value of `"latest_state"` under the `"reported"` section shows `"failed"`, and `"latest_invalid_fields"` contains a list of the invalid fields.
+
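As a sketch, a desired property can also be set from the command line with the Azure CLI IoT extension (`az extension add --name azure-iot`); the hub name, device ID, and property name below are placeholders:

```shell
# Write a new value into the "desired" section of the micro agent module twin.
# Hub, device, and configuration field names are assumptions -- substitute your own.
az iot hub module-twin update \
  --hub-name my-contoso-hub \
  --device-id my-device \
  --module-id DefenderIotMicroAgent \
  --desired '{"SomeConfigurationField": "new-value"}'
```

After the update, check `"latest_state"` in the `"reported"` section as described above.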
+## Next steps
+
+You learned how to configure a micro agent twin. Continue on to learn about other [Micro agent configurations (Preview)](concept-micro-agent-configuration.md).
defender-for-iot How To Region Move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-region-move.md
Before transitioning the resource to the new region, we recommend using [log a
## Move
-You are now ready to move your resource to your new location. Follow [these instructions](/azure/iot-hub/iot-hub-how-to-clone) to move your IoT Hub.
+You are now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub.
After transferring, and enabling the resource, you can link to the same log analytics workspace that was configured earlier.
Don't clean up until you have finished verifying that the resource has moved,
In this tutorial, you moved an Azure resource from one region to another and cleaned up the source resource.

-- Learn more about [Moving your resources to a new resource group or subscription.](/azure/azure-resource-manager/management/move-resource-group-and-subscription).
+- Learn more about [Moving your resources to a new resource group or subscription.](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
-- Learn how to [move VMs to another Azure region](/azure/site-recovery/azure-to-azure-tutorial-migrate).
+- Learn how to [move VMs to another Azure region](../../site-recovery/azure-to-azure-tutorial-migrate.md).
defender-for-iot Quickstart Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-azure-rtos-security-module.md
- Title: 'Quickstart: Configure and enable the Defender-IoT-micro-agent for Azure RTOS'
-description: In this quickstart, learn how to onboard and enable the Defender-IoT-micro-agent for Azure RTOS service in your Azure IoT Hub.
-- Previously updated : 11/09/2021----
-# Quickstart: Defender-IoT-micro-agent for Azure RTOS
-
-This article provides an explanation of the prerequisites before getting started and explains how to enable the Defender-IoT-micro-agent for Azure RTOS service on an IoT Hub. If you don't currently have an IoT Hub, see [Create an IoT hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md).
-
-## Prerequisites
-
-### Supported devices
-- ST STM32F746G Discovery Kit
-- NXP i.MX RT1060 EVK
-- Microchip SAM E54 Xplained Pro EVK
-
-Download, compile, and run one of the .zip files for the specific board and tool (IAR, semi's IDE or PC) of your choice from the [Defender-IoT-micro-agent for Azure RTOS GitHub resource](https://github.com/azure-rtos/azure-iot-preview/releases).
-
-### Azure resources
-
-The next stage for getting started is preparing your Azure resources. You'll need an IoT Hub and we suggest a Log Analytics workspace. For IoT Hub, you'll need your IoT Hub connection string to connect to your device.
-
-### IoT Hub connection
-
-An IoT Hub connection is required to get started.
-
-1. Open your **IoT Hub** in Azure portal.
-
-1. Navigate to **IoT Devices**.
-
-1. Select **Create**.
-
-1. Copy the IoT connection string to the [configuration file](how-to-azure-rtos-security-module.md).
-
-The connection credentials are taken from the user application configuration: **HOST_NAME**, **DEVICE_ID**, and **DEVICE_SYMMETRIC_KEY**.
-
-The Defender-IoT-micro-agent for Azure RTOS uses Azure IoT Middleware connections based on the **MQTT** protocol.
-
-## Next steps
-
-Advance to the next article to finish configuring and customizing your solution.
-
-> [!div class="nextstepaction"]
-> [Configure and customize Defender-IoT-micro-agent for Azure RTOS (preview)](how-to-azure-rtos-security-module.md)
defender-for-iot Quickstart Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-configure-your-solution.md
- Title: 'Quickstart: Add Azure resources to your IoT solution'
-description: In this quickstart, learn how to configure your end-to-end IoT solution using Microsoft Defender for IoT.
- Previously updated : 11/09/2021---
-# Quickstart: Configure your Microsoft Defender for IoT solution
-
-This article explains how to configure your IoT security solution using Defender for IoT for the first time.
-
-## Prerequisites
-- None
-
-## What is Defender for IoT?
-
-Defender for IoT provides comprehensive end-to-end security for Azure-based IoT solutions.
-
-With Defender for IoT, you can monitor your entire IoT solution in one dashboard, surfacing all of your IoT devices, IoT platforms, and back-end resources in Azure.
-
-Once you enable Defender for IoT on your IoT Hub, Defender for IoT will automatically identify other Azure services, and connect to related services that are affiliated with your IoT solution.
-
-You will also be able to select other Azure resource groups that are part of your IoT solution.
-
-Your selections allow you to add entire subscriptions, resource groups, or single resources.
-
-After defining all of the resource relationships, Defender for IoT will use Defender for Cloud to provide you with security recommendations and alerts for these resources.
-
-## Add Azure resources to your IoT solution
-
-**To add new resource to your IoT solution**:
-
-1. In the Azure portal, search for and select **IoT Hub**.
-
-1. Under the Security section, select **Settings** > **Monitored Resources**.
-
-1. Select **Edit**, and select the monitored resources that belong to your IoT solution.
-
-1. Select **Add**.
-
-A new resource group will now be added to your IoT solution.
-
-Defender for IoT will now monitor your newly added resource groups, and surface relevant security recommendations and alerts as part of your IoT solution.
-
-## Next steps
-
-Advance to the next article to learn how to create Defender-IoT-micro-agents...
-
-> [!div class="nextstepaction"]
-> [Create Defender-IoT-micro-agents](quickstart-create-security-twin.md)
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-create-micro-agent-module-twin.md
- Title: 'Quickstart: Create a Defender for IoT micro agent module twin (Preview)'
-description: In this quickstart, learn how to create individual DefenderIotMicroAgent module twins for new devices.
Previously updated : 11/09/2021----
-# Quickstart: Create a Defender for IoT micro agent module twin (Preview)
-
-You can create individual **DefenderIotMicroAgent** module twins for new devices. You can also batch create module twins for all devices in an IoT Hub.
-
-## Prerequisites
-
-None
-
-## Device twins
-
-For IoT solutions built in Azure, device twins play a key role in both device management and process automation.
-
-Defender for IoT has the ability to fully integrate with your existing IoT device management platform. Full integration enables you to manage your device's security status, and allows you to make use of all existing device control capabilities. Integration is achieved by making use of the IoT Hub twin mechanism.
-
-Learn more about the concept in [Understand and use device twins in IoT Hub](../../iot-hub/iot-hub-devguide-device-twins.md).
-
-## Defender-IoT-micro-agent twins
-
-Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication, to enable updates, and maintenance that requires fewer resources.
-
-## Understanding DefenderIotMicroAgent module twins
-
-Device twins play a key role in both device management and process automation for IoT solutions that are built in Azure.
-
-Defender for IoT offers the capability to fully integrate your existing IoT device management platform, enabling you to manage your device security status and make use of the existing device control capabilities. You can integrate your Defender for IoT by using the IoT Hub twin mechanism.
-
-To learn more about the general concept of module twins in Azure IoT Hub, seeΓÇ»[Understand and use module twins in IoT Hub](../../iot-hub/iot-hub-devguide-module-twins.md).
-
-Defender for IoT uses the module twin mechanism, and maintains a Defender-IoT-micro-agent twin named `DefenderIotMicroAgent` for each of your devices.
-
-To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
-
-## Create DefenderIotMicroAgent module twin
-
-**DefenderIotMicroAgent** module twins can be created by manually editing each module twin to include specific configurations for each device.
-
-To manually create a new **DefenderIotMicroAgent** module twin for a device:
-
-1. In your IoT Hub, locate and select the device on which to create a Defender-IoT-micro-agent twin.
-
-1. Select **Add module identity**.
-
-1. In the **Module Identity Name** field, enter `DefenderIotMicroAgent`.
-
-1. Select **Save**.
-
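The portal steps above can also be scripted. A minimal sketch with the Azure CLI IoT extension, where the hub and device names are placeholders:

```shell
# Create the DefenderIotMicroAgent module identity on an existing device.
# Hub and device names are assumptions -- substitute your own.
az iot hub module-identity create \
  --hub-name my-contoso-hub \
  --device-id my-device \
  --module-id DefenderIotMicroAgent
```

Repeating this command per device (for example, in a loop over `az iot hub device-identity list` output) is one way to batch create the module twins.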
-## Verify the creation of a module twin
-
-To verify if a Defender-IoT-micro-agent twin exists for a specific device:
-
-1. In your Azure IoT Hub, select **IoT devices** from the **Explorers** menu.
-
-1. Enter the device ID, or select an option in the **Query device** field and select **Query devices**. 
-
- :::image type="content" source="media/quickstart-create-micro-agent-module-twin/iot-devices.png" alt-text="Select query devices to get a list of your devices.":::
-
-1. Select the device, and open the **Device details** page.
-
-1. Select the **Module identities** menu, and confirm the existence of the **DefenderIotMicroAgent** module in the list of module identities associated with the device. 
-
- :::image type="content" source="media/quickstart-create-micro-agent-module-twin/device-details-module.png" alt-text="Select module identities from the tab.":::
-
-## Next steps
-
-> [!div class="nextstepaction"]
> [Investigate security recommendations](quickstart-investigate-security-recommendations.md)
defender-for-iot Quickstart Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-investigate-security-alerts.md
- Title: "Quickstart: Investigate security alerts"
-description: Understand, drill down, and investigate Defender for IoT security alerts on your IoT devices.
- Previously updated : 11/09/2021---
-# Quickstart: Investigate security alerts
-
-Scheduled investigation and remediation of the alerts issued by Defender for IoT is the best way to ensure compliance, and protection across your IoT solution.
-
-## Investigate new security alerts
-
-The IoT Hub security alert list displays all of the aggregated security alerts for your IoT Hub.
-
-1. In the Azure portal, open the **IoT Hub** you want to investigate for new alerts.
-
-1. From the **Security** menu, select **Alerts**. All of the security alerts for the IoT Hub are displayed; the alerts with a **New** flag are from the past 24 hours.
-
- :::image type="content" source="media/quickstart/investigate-new-security-alerts.png" alt-text="Investigate new IoT security alerts by using the new alert flag":::
-
-1. Select an alert from the list to open the alert details, and understand the alert specifics.
-
-## Security alert details
-
-Opening each aggregated alert displays the detailed alert description, remediation steps, and device ID for each device that triggered an alert. It also displays the alert severity, and provides direct investigation access using Log Analytics.
-
-1. Navigate to **IoT Hub** > **Security** > **Alerts**.
-
-1. Select any security alert from the list to open it.
-
-1. Review the alert **description**, **severity**, **source of the detection**, and **device details** of all devices that issued this alert in the aggregation period.
-
- :::image type="content" source="media/quickstart/drill-down-iot-alert-details.png" alt-text="Investigate and review the details of each device in an aggregated alert ":::
-
-1. After reviewing the alert specifics, use the **manual remediation step** instructions to help remediate, and resolve the issue that caused the alert.
-
- :::image type="content" source="media/quickstart/iot-alert-manual-remediation-steps.png" alt-text="Follow the manual remediation steps to help resolve or remediate your device security alerts":::
-
-1. If further investigation is required, **Investigate the alerts in Log Analytics** using the link.
-
- :::image type="content" source="media/quickstart/investigate-iot-alert-log-analytics.png" alt-text="To further investigate an alert, use the investigate using log analytics link provided on screen":::
-
-## Next steps
-
-Advance to the next article to learn more about security alerts types and possible customizations.
-
-> [!div class="nextstepaction"]
-> [Understanding IoT security alerts](concept-security-alerts.md)
defender-for-iot Quickstart Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-investigate-security-recommendations.md
- Title: Investigate security recommendations
-description: Investigate security recommendations with the Defender for IoT security service.
- Previously updated : 11/09/2021---
-# Quickstart: Investigate security recommendations
-
-Timely analysis and mitigation of recommendations by Defender for IoT is the best way to improve security posture and reduce attack surface across your IoT solution.
-
-In this quickstart, we'll explore the information available in each IoT security recommendation, and explain how to drill down and use the details of each recommendation and related devices, to reduce risk.
-
-Let's get started.
-
-## Investigate new recommendations
-
-The IoT Hub recommendations list displays all of the aggregated security recommendations for your IoT Hub.
-
-1. In the Azure portal, open the **IoT Hub** you want to investigate for new recommendations.
-
-1. From the **Security** menu, select **Recommendations**. All of the security recommendations for the IoT Hub are displayed; the recommendations with a **New** flag are from the past 24 hours.
-
-1. Select and open any recommendation from the list to open the recommendation details and drill down to the specifics.
-
-## Security recommendation details
-
-Open each aggregated recommendation to display the detailed recommendation description, remediation steps, and device ID for each device that triggered a recommendation. It also displays the recommendation severity and direct-investigation access using Log Analytics.
-
-1. Select and open any security recommendation from the **IoT Hub** > **Security** > **Recommendations** list.
-
-1. Review the recommendation **description**, **severity**, and **device details** of all devices that issued this recommendation in the aggregation period.
-
-1. After reviewing recommendation specifics, use the **manual remediation step** instructions to help remediate and resolve the issue that caused the recommendation.
-
- :::image type="content" source="media/quickstart/remediate-security-recommendations-inline.png" alt-text="Remediate security recommendations with Defender for IoT" lightbox="media/quickstart/remediate-security-recommendations-expanded.png":::
-
-1. Explore the recommendation details for a specific device by selecting the desired device in the drill-down page.
-
- :::image type="content" source="media/quickstart/explore-security-recommendation-detail-inline.png" alt-text="Investigate specific security recommendations for a device with Defender for IoT" lightbox="media/quickstart/explore-security-recommendation-detail-expanded.png":::
-
-1. If further investigation is required, **Investigate the recommendation in Log Analytics** using the link.
-
-## Next steps
-
-Advance to the next article to learn how to create custom alerts...
-
-> [!div class="nextstepaction"]
-> [Create custom alerts](quickstart-create-custom-alerts.md)
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
Title: 'Quickstart: Onboard Defender for IoT to an agent-based solution'
-description: In this quickstart, you will learn how to onboard and enable the Defender for IoT security service in your Azure IoT Hub.
+ Title: 'Quickstart: Enable Microsoft Defender for IoT on your Azure IoT Hub'
+description: Learn how to enable Defender for IoT in an Azure IoT hub.
Previously updated : 11/09/2021 Last updated : 01/16/2022
-# Quickstart: Onboard Defender for IoT to an agent-based solution
+# Quickstart: Enable Microsoft Defender for IoT on your Azure IoT Hub
-This article explains how to enable the Defender for IoT service on your existing IoT Hub. If you don't currently have an IoT Hub, see [Create an IoT hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md) to get started.
+This article explains how to enable Microsoft Defender for IoT on an Azure IoT hub.
-You can manage your IoT security through the IoT Hub in Defender for IoT. The management portal located in the IoT Hub allows you to do the following:
+[Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md) is a managed service that acts as a central message hub for communication between IoT applications and IoT devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT Hub. Defender for IoT integrates into Azure IoT Hub to provide real-time monitoring, recommendations, and alerts.
-- Manage IoT Hub security.
+## Prerequisites
-- Basic management of an IoT device's security without installing an agent based on the IoT Hub telemetry.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Advanced management for the security of an IoT device based on the micro agent.
+- The ability to create a standard tier IoT Hub.
> [!NOTE] > Defender for IoT currently only supports standard tier IoT Hubs.
-## Prerequisites
+## Create an IoT Hub with Microsoft Defender for IoT
-None
+You can create a hub in the Azure portal. For all new IoT hubs, Defender for IoT is set to **On** by default.
-## Onboard Defender for IoT to an IoT Hub
+**To create an IoT Hub**:
-For all new IoT hubs, Defender for IoT is set to **On** by default. You can verify that Defender for IoT is toggled to **On** during the IoT Hub creation process.
+1. Follow the steps in [this article](../../iot-hub/iot-hub-create-through-portal.md#create-an-iot-hub).
-To verify the toggle is set to **On**:
+1. Under the **Management** tab, ensure that **Defender for IoT** is set to **On**, which is the default.
-1. Navigate to the Azure portal.
+ :::image type="content" source="media/quickstart-onboard-iot-hub/management-tab.png" alt-text="Ensure the Defender for IoT toggle is set to on.":::
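The hub creation can also be scripted. A sketch with assumed resource names (Defender for IoT requires a standard tier hub, so `S1` is used here):

```shell
# Create a resource group and a standard tier (S1) IoT hub.
# Group, hub name, and region are placeholders -- substitute your own.
az group create --name my-iot-rg --location eastus
az iot hub create --name my-contoso-hub --resource-group my-iot-rg --sku S1
```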
-1. Select **IoT Hub** from the list of Azure services.
+## Enable Defender for IoT on an existing IoT Hub
-1. Select **Create**.
+You can onboard Defender for IoT to an existing IoT Hub, where you can then monitor the device identity management, device to cloud, and cloud to device communication patterns.
- :::image type="content" source="media/quickstart-onboard-iot-hub/create-iot-hub.png" alt-text="Select the create button from the top toolbar." lightbox="media/quickstart-onboard-iot-hub/create-iot-hub-expanded.png":::
+**To enable Defender for IoT on an existing IoT Hub**:
-1. Select the **Management** tab, and verify that **Defender for IoT** toggle is set to **On**.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
- :::image type="content" source="media/quickstart-onboard-iot-hub/management-tab.png" alt-text="Ensure the Defender for IoT toggle is set to on.":::
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Overview**.
-## Onboard Defender for IoT to an existing IoT Hub
+1. Select **Secure your IoT solution**, and complete the onboarding form.
-You can onboard Defender for IoT to an existing IoT Hub, where you can then monitor the device identity management, device to cloud, and cloud to device communication patterns.
+ :::image type="content" source="media/quickstart-onboard-iot-hub/secure-your-iot-solution.png" alt-text="Select the secure your IoT solution button to secure your solution." lightbox="media/quickstart-onboard-iot-hub/secure-your-iot-solution-expanded.png":::
-To onboard Defender for IoT to an existing IoT Hub:
+The **Secure your IoT solution** button will only appear if the IoT Hub has not already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
-1. Navigate to the IoT Hub.
-1. Select the IoT Hub to be onboarded.
+## Verify that Defender for IoT is enabled
-1. Select any option under the **Security** section.
+**To verify that Defender for IoT is enabled**:
-1. Click **Secure your IoT solution** and complete the onboarding form.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
- :::image type="content" source="media/quickstart-onboard-iot-hub/secure-your-iot-solution.png" alt-text="Select the secure your IoT solution button to secure your solution.":::
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Overview**.
-The **Secure your IoT solution** button will only appear if the IoT Hub has not already been onboarded, or if while onboarding you left the Defender for IoT toggle on **Off**.
+1. The Threat prevention and Threat detection screens will appear.
+ :::image type="content" source="media/quickstart-onboard-iot-hub/threat-prevention.png" alt-text="Screenshot showing that Defender for IoT is enabled." lightbox="media/quickstart-onboard-iot-hub/threat-prevention-expanded.png":::
## Next steps
-Advance to the next article to configure your solution...
+Advance to the next article to add a resource group to your solution...
> [!div class="nextstepaction"]
-> [Create a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md)
+> [Add a resource group to your IoT solution](tutorial-configure-your-solution.md)
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/quickstart-standalone-agent-binary-installation.md
- Title: 'Quickstart: Install Defender for IoT micro agent (Preview)'
-description: In this quickstart, learn how to install, and authenticate the Defender for IoT micro agent.
Previously updated : 11/09/2021----
-# Quickstart: Install Defender for IoT micro agent (Preview)
-
-This article provides an explanation of how to install, and authenticate the Defender for IoT micro agent.
-
-## Prerequisites
-
-Before you install the Defender for IoT module, you must create a module identity in the IoT Hub. For more information on how to create a module identity, see [Create a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
-
-## Install the package
-
-**To add the appropriate Microsoft package repository**:
-
-1. Download the repository configuration that matches your device operating system.
-
- - For Ubuntu 18.04
-
- ```bash
- curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
- ```
-
- - For Ubuntu 20.04
-
- ```bash
- curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > ./microsoft-prod.list
- ```
-
- - For Debian 9 (both AMD64 and ARM64)
-
- ```bash
- curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
- ```
-
-1. Copy the repository configuration to the `sources.list.d` directory.
-
- ```bash
- sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
- ```
-
-1. Install the Microsoft GPG public key:
-
- ```bash
- curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
- sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
- ```
-
-To install the Defender for IoT micro agent package on Debian and Ubuntu-based Linux distributions, refresh the package index with `sudo apt-get update`, and then use the following command:
-
-```bash
-sudo apt-get install defender-iot-micro-agent
-```
-
-## Micro agent authentication methods
-
-The two options used to authenticate the Defender for IoT micro agent are:
-- Module identity connection string.
-
-- Certificate.
-
-### Authenticate using a module identity connection string
-
-Ensure the [Prerequisites](#prerequisites) for this article are met, and that you create a module identity before starting these steps.
-
-#### Get the module identity connection string
-
-To get the module identity connection string from the IoT Hub:
-
-1. Navigate to the IoT Hub, and select your hub.
-
-1. In the left-hand menu, under the **Explorers** section, select **IoT devices**.
-
- :::image type="content" source="media/quickstart-standalone-agent-binary-installation/iot-devices.png" alt-text="Select IoT devices from the left-hand menu.":::
-
-1. Select a device from the Device ID list to view the **Device details** page.
-
-1. Select the **Module identities** tab.
-
-1. Select the **DefenderIotMicroAgent** module from the list of module identities associated with the device.
-
- :::image type="content" source="media/quickstart-standalone-agent-binary-installation/module-identities.png" alt-text="Select the module identities tab.":::
-
-1. In the **Module Identity Details** page, copy the Connection string (primary key) by selecting the **copy** button.
-
- :::image type="content" source="media/quickstart-standalone-agent-binary-installation/copy-button.png" alt-text="Select the copy button to copy the Connection string (primary key).":::
-
-#### Configure authentication using a module identity connection string
-
-To configure the agent to authenticate using a module identity connection string:
-
-1. Place a file named `connection_string.txt`, containing the connection string encoded in UTF-8, in the Defender for Cloud agent directory `/var/defender_iot_micro_agent` by entering the following command:
-
- ```bash
- sudo bash -c 'echo "<connection string>" > /var/defender_iot_micro_agent/connection_string.txt'
- ```
-
- The `connection_string.txt` file should now be located at `/var/defender_iot_micro_agent/connection_string.txt`.
-
-1. Restart the service using this command:
-
- ```bash
- sudo systemctl restart defender-iot-micro-agent.service
- ```
-
-### Authenticate using a certificate
-
-To authenticate using a certificate:
-
-1. Procure a certificate by following [these instructions](../../iot-hub/tutorial-x509-scripts.md).
-
-1. Place the PEM-encoded public part of the certificate and the private key into the Defender for Cloud agent directory, in files named `certificate_public.pem` and `certificate_private.pem`.
-
-1. Place the appropriate connection string into the `connection_string.txt` file. The connection string should look like this:
-
- `HostName=<the host name of the iot hub>;DeviceId=<the id of the device>;ModuleId=<the id of the module>;x509=true`
-
- This string tells the Defender for Cloud agent to expect a certificate to be provided for authentication.
-
-1. Restart the service using the following command:
-
- ```bash
- sudo systemctl restart defender-iot-micro-agent.service
- ```
-
-### Validate your installation
-
-To validate your installation:
-
-1. Make sure the micro agent is running properly with the following command:
-
- ```bash
- systemctl status defender-iot-micro-agent.service
- ```
-
-1. Ensure that the service is stable by making sure it is `active` and that the uptime of the process is appropriate
-
- :::image type="content" source="media/quickstart-standalone-agent-binary-installation/active-running.png" alt-text="Check to make sure your service is stable and active.":::
-
-## Testing the system end-to-end
-
-You can test the system from end to end by creating a trigger file on the device. The trigger file will cause the baseline scan in the agent to detect the file as a baseline violation.
-
-Create a file on the file system with the following command:
-
-```bash
-sudo touch /tmp/DefenderForIoTOSBaselineTrigger.txt
-```
-
-A baseline validation failure recommendation will occur in the hub, with a `CceId` of CIS-debian-9-DEFENDER_FOR_IOT_TEST_CHECKS-0.0.
-
-
-Allow up to one hour for the recommendation to appear in the hub.
-
-## Micro agent versioning
-
-To install a specific version of the Defender for IoT micro agent, run the following command:
-
-```bash
-sudo apt-get install defender-iot-micro-agent=<version>
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Create a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md)
defender-for-iot Tutorial Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-configure-agent-based-solution.md
+
+ Title: Configure Microsoft Defender for IoT agent-based solution
+description: Learn how to configure the Microsoft Defender for IoT agent-based solution
Last updated : 01/12/2022
+
+
+
+# Tutorial: Configure Microsoft Defender for IoT agent-based solution
+
+This tutorial will help you learn how to configure the Microsoft Defender for IoT agent-based solution.
+
+In this tutorial you will learn how to:
+
+> [!div class="checklist"]
+> - Enable data collection
+> - Create a log analytics workspace
+> - Enable geolocation and IP address handling
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+
+- You must have [created a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
+
+- You must have [installed the Defender for IoT micro agent (Preview)](quickstart-standalone-agent-binary-installation.md)
+
+## Enable data collection
+
+**To enable data collection**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Settings** > **Data Collection**.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/data-collection.png" alt-text="Select data collection from the security menu settings.":::
+
+1. Under **Microsoft Defender for IoT**, ensure that **Enable Microsoft Defender for IoT** is enabled.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/enable-data-collection.png" alt-text="Screenshot showing you how to enable data collection.":::
+
+1. Select **Save**.
+
+## Create a log analytics workspace
+
+Defender for IoT allows you to store security alerts, recommendations, and raw security data in your Log Analytics workspace. Log Analytics ingestion in IoT Hub is set to **off** by default in the Defender for IoT solution. You can attach Defender for IoT to a Log Analytics workspace and store the security data there as well.
+
+There are two types of information stored by default in your Log Analytics workspace by Defender for IoT:
+
+- Security alerts.
+
+- Recommendations.
+
+You can also choose to store an additional information type, `raw events`.
+
+> [!Note]
+> Storing `raw events` in Log Analytics carries additional storage costs.
+
+**To enable Log Analytics to work with micro agent**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Settings** > **Data Collection**.
+
+1. Under **Workspace configuration**, switch the Log Analytics toggle to **On**.
+
+1. Select a subscription from the drop-down menu.
+
+1. Select a workspace from the drop-down menu. If you do not already have an existing Log Analytics workspace, you can select **Create New Workspace** to create a new one.
+
+1. Verify that the **Access to raw security data** option is selected.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/data-settings.png" alt-text="Ensure Access to raw security data is selected.":::
+
+1. Select **Save**.
+
+Every month, the first 5 gigabytes of data ingested per customer into the Azure Log Analytics service is free. Every gigabyte of data ingested into your Azure Log Analytics workspace is retained at no charge for the first 31 days. For more information on pricing, see [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Enable geolocation and IP address handling
+
+To secure your IoT solution, the IP addresses of the incoming and outgoing connections for your IoT devices, IoT Edge, and IoT hubs are collected and stored by default. This information is essential for detecting abnormal connectivity from suspicious IP address sources, for example, attempts to establish connections from an IP address source of a known botnet, or from an IP address source outside your geolocation. The Defender for IoT service offers the flexibility to enable and disable the collection of IP address data at any time.
+
+**To enable the collection of IP address data**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Settings** > **Data Collection**.
+
+1. Ensure the IP data collection checkbox is selected.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/geolocation.png" alt-text="Screenshot that shows the checkbox needed to be selected to enable geolocation.":::
+
+1. Select **Save**.
+
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Investigate security recommendations](tutorial-investigate-security-recommendations.md)
defender-for-iot Tutorial Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-configure-your-solution.md
+
+ Title: Add a resource group to your IoT solution
+description: In this quickstart, learn how to configure your end-to-end IoT solution using Microsoft Defender for IoT.
 Last updated : 01/13/2022
+
+
+
+# Tutorial: Add a resource group to your IoT solution
+
+This article explains how to add a resource group to your Microsoft Defender for IoT solution. To learn more about resource groups, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).
+
+With Defender for IoT, you can monitor your entire IoT solution in one dashboard. From that dashboard, you can surface all of your IoT devices, IoT platforms, and back-end resources in Azure.
+
+Once enabled, Defender for IoT will automatically identify other Azure services and connect to related services that are affiliated with your IoT solution.
+
+You can select other Azure resource groups that are part of your IoT solution. Your selections allow you to add entire subscriptions, resource groups, or single resources.
+
+After defining all of the resource relationships, Defender for IoT will use Defender for Cloud to provide you with security recommendations and alerts for your resources.
+
+In this tutorial you'll learn how to:
+
+> [!div class="checklist"]
+> - Add a resource group to your IoT solution
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+## Add Azure resources to your IoT solution
+
+**To add a new resource to your IoT solution**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Search for, and select **IoT Hub**.
+
+1. Navigate to **Defender for IoT** > **Settings** > **Monitored Resources**.
+
+1. Select **Edit**, and select the monitored resources that belong to your IoT solution.
+
+1. In the Solution Management window, select your subscription from the drop-down menu.
+
+1. Select all applicable resource groups from the drop-down menu.
+
+1. Select **Apply**.
+
+A new resource group will now be added to your IoT solution.
+
+Defender for IoT will now monitor your newly added resource groups, and will surface relevant security recommendations and alerts as part of your IoT solution.
+
+## Next steps
+
+Advance to the next article to learn how to create Defender-IoT-micro-agent.
+
+> [!div class="nextstepaction"]
+> [Create a Defender for IoT micro agent module twin (Preview)](tutorial-create-micro-agent-module-twin.md)
defender-for-iot Tutorial Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-create-micro-agent-module-twin.md
+
 Title: Create a DefenderIotMicroAgent module twin (Preview)
+description: In this tutorial, you will learn how to create a DefenderIotMicroAgent module twin for new devices.
Last updated : 01/16/2022
+
+
+
+
+# Tutorial: Create a DefenderIotMicroAgent module twin (Preview)
+
+This tutorial will help you learn how to create an individual `DefenderIotMicroAgent` module twin for new devices.
+
+## Device twins
+
+For IoT solutions built in Azure, device twins play a key role in both device management and process automation.
+
+Defender for IoT fully integrates with your existing IoT device management platform. Full integration enables you to manage your device's security status and to make use of all existing device control capabilities. Integration is achieved by using the IoT Hub twin mechanism.
+
+Learn more about device twins in [Understand and use device twins in IoT Hub](../../iot-hub/iot-hub-devguide-device-twins.md).
+
+## Defender-IoT-micro-agent twin
+
+Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication, and to enable updates and maintenance that require fewer resources.
+
+## Understanding DefenderIotMicroAgent module twins
+
+Device twins play a key role in both device management and process automation, for IoT solutions that are built in Azure.
+
+Defender for IoT offers the capability to fully integrate with your existing IoT device management platform, enabling you to manage your device security status and make use of the existing device control capabilities. You can integrate Defender for IoT by using the IoT Hub twin mechanism.
+
+To learn more about the general concept of module twins in Azure IoT Hub, see [Understand and use module twins in IoT Hub](../../iot-hub/iot-hub-devguide-module-twins.md).
+
+Defender for IoT uses the module twin mechanism, and maintains a Defender-IoT-micro-agent twin named `DefenderIotMicroAgent` for each of your devices.
+
+To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
+
+In this tutorial you will learn how to:
+
+> [!div class="checklist"]
+> - Create a DefenderIotMicroAgent module twin
+> - Verify the creation of a module twin
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Verify you are running one of the following [operating systems](concept-agent-portfolio-overview-os-support.md#agent-portfolio-overview-and-os-support-preview).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+
+## Create a DefenderIotMicroAgent module twin
+
+A `DefenderIotMicroAgent` module twin can be created by manually editing each module twin to include specific configurations for each device.
+
+**To create a DefenderIotMicroAgent module twin for a device**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Device management** > **Devices**.
+
+1. Select your device from the list.
+
+1. Select **Add module identity**.
+
+1. In the Module Identity Name field, enter `DefenderIotMicroAgent`.
+
+1. Select **Save**.
+
+## Verify the creation of a module twin
+
+**To verify the creation of a DefenderIotMicroAgent module twin on a specific device**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Device management** > **Devices**.
+
+1. Select your device.
+
+1. Under the Module identities menu, confirm the existence of the `DefenderIotMicroAgent` module in the list of module identities associated with the device.
+
+ :::image type="content" source="media/quickstart-create-micro-agent-module-twin/device-details-module.png" alt-text="Select module identities from the tab.":::
+
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Install the Defender for IoT micro agent (Preview)](tutorial-standalone-agent-binary-installation.md)
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
+
+ Title: Investigate security alerts
+description: Learn how to investigate Defender for IoT security alerts on your IoT devices.
 Last updated : 01/13/2022
+
+
+# Tutorial: Investigate security alerts
+
+This tutorial will help you learn how to investigate and remediate the alerts issued by Defender for IoT. Remediating alerts is the best way to ensure compliance and protection across your IoT solution.
+
+In this tutorial you will learn how to:
+
+> [!div class="checklist"]
+> - Investigate security alerts
+> - Investigate security alert details
+> - Investigate alerts in Log Analytics workspace
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+
+- You must have [created a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
+
+- You must have [installed the Defender for IoT micro agent (Preview)](quickstart-standalone-agent-binary-installation.md)
+
+- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md)
+
+- You must have [learned how to investigate security recommendations](quickstart-investigate-security-recommendations.md).
+
+## Investigate security alerts
+
+The Defender for IoT security alert list displays all of the aggregated security alerts for your IoT Hub.
+
+**To investigate security alerts**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Security Alerts**.
+
+1. Select an alert from the list to open the alert's details.
+
+## Investigate security alert details
+
+Opening each aggregated alert displays the detailed alert description, remediation steps, and device ID for each device that triggered an alert. It also displays the alert severity and provides direct investigation access using Log Analytics.
+
+**To investigate security alert details**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Security Alerts**.
+
+1. Select any security alert from the list to open it.
+
+1. Review the alert **description**, **severity**, **source of the detection**, and **device details** of all devices that issued this alert in the aggregation period.
+
+ :::image type="content" source="media/quickstart/drill-down-iot-alert-details.png" alt-text="Investigate and review the details of each device in an aggregated alert." lightbox="media/quickstart/drill-down-iot-alert-details-expanded.png":::
+
+1. After reviewing the alert specifics, use the **manual remediation step** instructions to help remediate and resolve the issue that caused the alert.
+
+ :::image type="content" source="media/quickstart/iot-alert-manual-remediation-steps.png" alt-text="Follow the manual remediation steps to help resolve or remediate your device security alerts":::
+
+## Investigate alerts in Log Analytics workspace
+
+You can access your alerts and investigate them with the Log Analytics workspace.
+
+**To access your alerts in your Log Analytics workspace after configuration**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Security Alerts**.
+
+1. Select an alert.
+
+1. Select **Investigate alerts in Log Analytics workspace**.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/log-analytic.png" alt-text="Screenshot that shows where to select to investigate in the log analytics workspace.":::
+
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Learn how to [Connect your data from Defender for IoT for device builders to Microsoft Sentinel (Public preview)](how-to-configure-with-sentinel.md)
defender-for-iot Tutorial Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-investigate-security-recommendations.md
+
+ Title: Investigate security recommendations
+description: Learn how to investigate security recommendations with the Defender for IoT.
 Last updated : 01/12/2022
+
+
+# Tutorial: Investigate security recommendations
+
+This tutorial will help you learn how to explore the information available in each IoT security recommendation, and how to use the details of each recommendation and related devices to reduce risk.
+
+Timely analysis and mitigation of recommendations by Defender for IoT is the best way to improve security posture and reduce attack surface across your IoT solution.
+
+In this tutorial you will learn how to:
+
+> [!div class="checklist"]
+> - Investigate new recommendations
+> - Investigate security recommendation details
+> - Investigate recommendations in Log Analytics workspace
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+
+- You must have [created a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
+
+- You must have [installed the Defender for IoT micro agent (Preview)](quickstart-standalone-agent-binary-installation.md)
+
+- You must have [configured the Microsoft Defender for IoT agent-based solution](how-to-configure-agent-based-solution.md)
+
+## Investigate recommendations
+
+The IoT Hub recommendations list displays all of the aggregated security recommendations for your IoT Hub.
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Recommendations**.
+
+1. Select a recommendation from the list to open the recommendation's details.
+
+## Investigate security recommendation details
+
+Open each aggregated recommendation to display the detailed recommendation description, remediation steps, and the device ID for each device that triggered a recommendation. It also displays the recommendation severity and direct-investigation access using Log Analytics.
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Recommendations**.
+
+1. Review the recommendation **description**, **severity**, and **device details** of all devices that issued this recommendation in the aggregation period.
+
+1. After reviewing recommendation specifics, use the **manual remediation step** instructions to help remediate and resolve the issue that caused the recommendation.
+
+ :::image type="content" source="media/quickstart/remediate-security-recommendations-inline.png" alt-text="Remediate security recommendations with Defender for IoT" lightbox="media/quickstart/remediate-security-recommendations-expanded.png":::
+
+1. Explore the recommendation details for a specific device by selecting the desired device in the drill-down page.
+
+ :::image type="content" source="media/quickstart/explore-security-recommendation-detail-inline.png" alt-text="Investigate specific security recommendations for a device with Defender for IoT" lightbox="media/quickstart/explore-security-recommendation-detail-expanded.png":::
+
+## Investigate recommendations in Log Analytics workspace
+
+**To access your recommendations in Log Analytics workspace**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Recommendations**.
+
+1. Select a recommendation from the list.
+
+1. Select **Investigate recommendations in Log Analytics workspace**.
+
+ :::image type="content" source="media/how-to-configure-agent-based-solution/recommendation-alert.png" alt-text="Screenshot showing how to view a recommendation in the log analytics workspace.":::
+
+For more information on querying data from Log Analytics, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
+
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Investigate security alerts](tutorial-investigate-security-alerts.md)
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
+
+ Title: Install the Microsoft Defender for IoT micro agent (Preview)
+description: Learn how to install and authenticate the Defender for IoT micro agent.
Last updated : 01/13/2022
+
+
+#Customer intent: As an Azure admin I want to install the Defender for IoT agent on devices connected to an Azure IoT Hub
++
+# Tutorial: Install the Defender for IoT micro agent (Preview)
+
+This tutorial will help you learn how to install and authenticate the Defender for IoT micro agent.
+
+In this tutorial you will learn how to:
+
+> [!div class="checklist"]
+> - Download and install the micro agent
+> - Authenticate the micro agent
+> - Validate the installation
+> - Test the system
+> - Install a specific micro agent version
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md).
+
+- Verify you are running one of the following [operating systems](concept-agent-portfolio-overview-os-support.md#agent-portfolio-overview-and-os-support-preview).
+
+- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
+
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+
+- You must have [created a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
+
+## Download and install the micro agent
+
+Depending on your setup, you will need to install the appropriate Microsoft package.
+
+**To add the appropriate Microsoft package repository**:
+
+1. Download the repository configuration that matches your device operating system.
+
+ - For Ubuntu 18.04
+
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+ - For Ubuntu 20.04
+
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > ./microsoft-prod.list
+ ```
+
+ - For Debian 9 (both AMD64 and ARM64)
+
+ ```bash
+ curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+1. Use the following command to copy the repository configuration to the `sources.list.d` directory:
+
+ ```bash
+ sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+ ```
+
+1. Install the Microsoft GPG public key with the following command:
+
+ ```bash
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+ sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
+ ```
+
+1. Update the apt package lists by using the following command:
+
+ ```bash
+ sudo apt-get update
+ ```
+
+1. Use the following command to install the Defender for IoT micro agent package on Debian or Ubuntu based Linux distributions:
+
+ ```bash
+ sudo apt-get install defender-iot-micro-agent
+ ```
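The distribution-to-URL mapping in step 1 can also be scripted. The following is only a sketch — the `repo_list_url` helper is hypothetical, not part of the product — that maps the `ID` and `VERSION_ID` values from `/etc/os-release` to the matching repository configuration:

```bash
# Hypothetical helper: map an OS ID and version (as reported by
# /etc/os-release) to the repository configuration URL from step 1.
# Prints nothing for a distribution that is not listed above.
repo_list_url() {
  case "$1-$2" in
    ubuntu-18.04) echo "https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list" ;;
    ubuntu-20.04) echo "https://packages.microsoft.com/config/ubuntu/20.04/prod.list" ;;
    debian-9)     echo "https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list" ;;
  esac
}

# On the device, the values can be read directly, for example:
#   . /etc/os-release && curl "$(repo_list_url "$ID" "$VERSION_ID")" > ./microsoft-prod.list
repo_list_url ubuntu 20.04
```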
+
+## Authenticate the micro agent
+
+There are two options that can be used to authenticate the Defender for IoT micro agent:
+
+- [Module identity connection string](#authenticate-using-a-module-identity-connection-string).
+
+- [Authenticate using a certificate](#authenticate-using-a-certificate).
+
+### Authenticate using a module identity connection string
+
+You will need to copy the module identity connection string from the DefenderIotMicroAgent module identity details.
+
+**To copy the module identity's connection string**:
+
+1. Navigate to **IoT Hub** > **`Your hub`** > **Device management** > **Devices**.
+
+ :::image type="content" source="media/quickstart-standalone-agent-binary-installation/iot-devices.png" alt-text="Select IoT devices from the left-hand menu.":::
+
+1. Select a device from the Device ID list.
+
+1. Select the **Module Identities** tab.
+
+1. Select the **DefenderIotMicroAgent** module from the list of module identities associated with the device.
+
+ :::image type="content" source="media/quickstart-standalone-agent-binary-installation/module-identities.png" alt-text="Select the module identities tab.":::
+
+1. Copy the Connection string (primary key) by selecting the **copy** button.
+
+ :::image type="content" source="media/quickstart-standalone-agent-binary-installation/copy-button.png" alt-text="Select the copy button to copy the Connection string (primary key).":::
+
+### Configure authentication using the module identity connection string
+
+**To configure the agent to authenticate using a module identity connection string**:
+
+1. Create a file named `connection_string.txt`, containing the copied connection string encoded in UTF-8, in the Defender for Cloud agent directory `/var/defender_iot_micro_agent` by entering the following command:
+
+ ```bash
+ sudo bash -c 'echo "<connection string>" > /var/defender_iot_micro_agent/connection_string.txt'
+ ```
+
+ The `connection_string.txt` file will now be located at `/var/defender_iot_micro_agent/connection_string.txt`.
+
+1. Restart the service using this command:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
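As an optional sanity check before writing the file, you can verify that the copied string contains the fields a module identity connection string is expected to have. This is only a sketch; the `looks_like_module_string` helper is hypothetical, and the sample string uses placeholder values, not a real hub or key:

```bash
# Hypothetical helper: check that a string contains the HostName, DeviceId,
# and ModuleId fields, in that order, as in a module identity connection string.
looks_like_module_string() {
  case "$1" in
    HostName=*";"DeviceId=*";"ModuleId=*) return 0 ;;
    *) return 1 ;;
  esac
}

# Placeholder string -- not a real hub, device, or key.
SAMPLE="HostName=example-hub.azure-devices.net;DeviceId=device-01;ModuleId=DefenderIotMicroAgent;SharedAccessKey=<placeholder>"

if looks_like_module_string "$SAMPLE"; then
  echo "connection string format looks OK"
fi
```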
+
+### Authenticate using a certificate
+
+**To authenticate using a certificate**:
+
+1. Procure a certificate by following [these instructions](../../iot-hub/tutorial-x509-scripts.md).
+
+1. Place the PEM-encoded public part of the certificate and the private key into the Defender for Cloud agent directory, in files named `certificate_public.pem` and `certificate_private.pem`.
+
+1. Place the appropriate connection string into the `connection_string.txt` file. The connection string should look like this:
+
+ `HostName=<the host name of the iot hub>;DeviceId=<the id of the device>;ModuleId=<the id of the module>;x509=true`
+
+ This string tells the Defender for Cloud agent to expect a certificate to be provided for authentication.
+
+1. Restart the service using the following command:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
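The steps above can be sketched in shell. The values below are placeholders, not real identifiers; the assembled string follows the format shown in step 3:

```bash
# Placeholder values -- substitute your own IoT hub host name, device ID, and module ID.
HUB_HOST="example-hub.azure-devices.net"
DEVICE_ID="device-01"
MODULE_ID="DefenderIotMicroAgent"

# Assemble the certificate-based connection string (x509=true tells the agent
# to authenticate with the certificate files placed in step 2).
CONN_STR="HostName=${HUB_HOST};DeviceId=${DEVICE_ID};ModuleId=${MODULE_ID};x509=true"
echo "$CONN_STR"

# On the device, the string would then be written (with sudo) to
# /var/defender_iot_micro_agent/connection_string.txt, and the service restarted.
```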
+
+## Validate the installation
+
+**To validate your installation**:
+
+1. Use the following command to ensure the micro agent is running properly:
+
+ ```bash
+ systemctl status defender-iot-micro-agent.service
+ ```
+
+1. Ensure that the service is stable by confirming it is `active` and that the uptime of the process is appropriate.
+
+ :::image type="content" source="media/quickstart-standalone-agent-binary-installation/active-running.png" alt-text="Check to make sure your service is stable and active.":::
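
A script can check for the same `active (running)` marker that a healthy service prints. The sketch below runs against a hard-coded sample of `systemctl status` output (not a live system), purely to illustrate the check:

```shell
# Illustration only: look for the "active (running)" marker in
# systemctl status output. The output below is a hard-coded sample.
status_output='defender-iot-micro-agent.service - Microsoft Defender for IoT micro agent
   Active: active (running) since Mon 2022-01-10 12:00:00 UTC; 2h ago'

case "$status_output" in
  *'active (running)'*) service_state="healthy" ;;
  *)                    service_state="unhealthy" ;;
esac
echo "$service_state"
```

On a real device, `systemctl is-active defender-iot-micro-agent.service` gives the same signal in a single word.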
+
+## Test the system
+
+You can test the system by creating a trigger file on the device. The trigger file will cause the baseline scan in the agent to detect the file as a baseline violation.
+
+1. Create a file on the file system with the following command:
+
+ ```bash
+ sudo touch /tmp/DefenderForIoTOSBaselineTrigger.txt
+ ```
+
+1. Restart the agent using the command:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
+
+Allow up to one hour for the recommendation to appear in the hub.
+
+A baseline validation failure recommendation will appear in the hub, with a `CceId` of CIS-debian-9-DEFENDER_FOR_IOT_TEST_CHECKS-0.0.
+## Install a specific micro agent version
+
+You can install a specific version of the micro agent by specifying that version in the `apt-get install` command.
+
+**To install a specific version of the Defender for IoT micro agent**:
+
+1. Open a terminal.
+
+1. Run the following command:
+
+ ```bash
+ sudo apt-get install defender-iot-micro-agent=<version>
+ ```
+
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md)
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/concept-sentinel-integration.md
By bringing rich telemetry into Microsoft Sentinel from Microsoft Defender for I
:::image type="content" source="media/concept-sentinel-integration/chart-small.png" alt-text="Screenshot of a chart showing alert flow." lightbox="media/concept-sentinel-integration/chart-large.png":::
-To set up the integration, see [Integrate Microsoft Defender for IoT and Microsoft Sentinel](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended).
+To set up the integration, see [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
## OT Security
Microsoft Sentinel MITRE ATT&CK for ICS workbooks show the result of mapping ale
:::image type="content" source="media/concept-sentinel-integration/mitre-attack.png" alt-text="Image of the MITRE ATT&CK graph.":::
-The Workbooks are described in the [Visualize and monitor Defender for IoT data](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended)
+The Workbooks are described in the [Visualize and monitor Defender for IoT data](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
section of the integration tutorial. Workbooks are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution. ### Analytics rules Create Microsoft Sentinel incidents for relevant alerts generated by Defender for IoT, either by using out-of-the-box analytics rules provided in the IoT OT Threat Monitoring with Defender for IoT solution, configuring analytics rules manually, or by configuring your data connector to automatically create incidents for all alerts generated by Defender for IoT.
-The Analytics rules are described in the [Detect threats out-of-the-box with Defender for IoT data](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial. The rules are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
+The Analytics rules are described in the [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial. The rules are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution.
### SOAR playbooks
Use SOAR playbooks, for example to:
- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The email may be sent to OT personnel, such as a control engineer responsible for the related production line.
-The playbooks are described in the [Automate response to Defender for IoT alerts](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial.
+The playbooks are described in the [Automate response to Defender for IoT alerts](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) section of the integration tutorial.
Playbooks are deployed to your Microsoft Sentinel workspace as part of the IoT OT Threat Monitoring with Defender for IoT solution. ## Next steps -- [Integrate Microsoft Defender for IoT and Microsoft Sentinel](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [Integrate Microsoft Defender for IoT and Microsoft Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
-- [Detect threats out-of-the-box with Defender for IoT data](/azure/sentinel/detect-threats-custom)
+- [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/detect-threats-custom.md)
-- [Tutorial Use playbooks with automation rules in Microsoft Sentinel](/azure/sentinel/tutorial-respond-threats-playbook)
+- [Tutorial Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/release-notes.md
Microsoft plans to release updates for Defender for IoT no less than once per qu
The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for Iot and Sentinel](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended)
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
### Apache Log4j vulnerability
In sensor and on-premises management console Alerts, the term Manage this Event
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
devtest-labs Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/lab-services-overview.md
You can use two different Azure services to set up lab environments in the cloud
- [Azure Lab Services](https://azure.microsoft.com/services/lab-services) provides managed classroom labs.
- Lab Services does all infrastructure management, from spinning up VMs and scaling infrastructure to handling errors. After an IT administrator creates a Lab Services lab account, instructors can [create classroom labs](/azure/lab-services/how-to-manage-classroom-labs#create-a-classroom-lab) in the account. An instructor specifies the number and type of VMs they need for the class, and adds users to the class. Once users register in the class, they can access the VMs to do class exercises and homework.
+ Lab Services does all infrastructure management, from spinning up VMs and scaling infrastructure to handling errors. After an IT administrator creates a Lab Services lab account, instructors can [create classroom labs](../lab-services/how-to-manage-classroom-labs.md#create-a-classroom-lab) in the account. An instructor specifies the number and type of VMs they need for the class, and adds users to the class. Once users register in the class, they can access the VMs to do class exercises and homework.
## Key capabilities
The following table compares the two types of Azure lab environments:
See the following articles: - [About Lab Services](../lab-services/lab-services-overview.md)-- [About DevTest Labs](devtest-lab-overview.md)
+- [About DevTest Labs](devtest-lab-overview.md)
devtest Troubleshoot Expired Removed Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest/offer/troubleshoot-expired-removed-subscription.md
If your Visual Studio subscription expires or is removed, all the subscription b
> [!IMPORTANT] > You must transfer your resources to another Azure subscription before your current Azure subscription is disabled or you will lose access to your data. >
-> If you donΓÇÖt take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](/azure/cost-management-billing/manage/switch-azure-offer).
+> If you don't take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](../../cost-management-billing/manage/switch-azure-offer.md).
## Maintain a subscription to use monthly credits
-There are several ways to continue using a monthly credit for Azure. To save your Azure resources, you must [transfer your resources](/azure/azure-resource-manager/management/move-resource-group-and-subscription) to another Azure subscription, regardless of which of the following action you choose:
+There are several ways to continue using a monthly credit for Azure. To save your Azure resources, you must [transfer your resources](../../azure-resource-manager/management/move-resource-group-and-subscription.md) to another Azure subscription, regardless of which of the following actions you choose:
- **If you purchase your Visual Studio subscription directly**, purchase a new subscription or renew your subscription through Microsoft Store.
There are several ways to continue using a monthly credit for Azure. To save you
## Convert your Azure subscription to pay-as-you-go
-If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](/azure/cost-management-billing/manage/spending-limit#remove-the-spending-limit-in-azure-portal).
+If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](../../cost-management-billing/manage/spending-limit.md#remove-the-spending-limit-in-azure-portal).
expressroute Expressroute About Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-about-encryption.md
We support the following [standard ciphers](https://1.ieee802.org/security/802-1
* GCM-AES-XPN-128 * GCM-AES-XPN-256 ### Does ExpressRoute Direct MACsec support Secure Channel Identifier (SCI)?
-Yes, you can set [Secure Channel Identifier (SCI)](https://en.wikipedia.org/wiki/IEEE_802.1AE) on the ExpressRoute Direct ports. Refer to [Configure MACsec](https://docs.microsoft.com/azure/expressroute/expressroute-howto-macsec).
+Yes, you can set [Secure Channel Identifier (SCI)](https://en.wikipedia.org/wiki/IEEE_802.1AE) on the ExpressRoute Direct ports. Refer to [Configure MACsec](./expressroute-howto-macsec.md).
## End-to-end encryption by IPsec FAQ IPsec is an [IETF standard](https://tools.ietf.org/html/rfc6071). It encrypts data at the Internet Protocol (IP) level or Network Layer 3. You can use IPsec to encrypt an end-to-end connection between your on-premises network and your virtual network (VNET) on Azure. See other FAQs below. ### Can I enable IPsec in addition to MACsec on my ExpressRoute Direct ports?
If Azure VPN gateway is used, check the [performance numbers here](../vpn-gatewa
## Next steps See [Configure MACsec](expressroute-howto-macsec.md) for more information about the MACsec configuration.
-See [Configure IPsec](site-to-site-vpn-over-microsoft-peering.md) for more information about the IPsec configuration.
+See [Configure IPsec](site-to-site-vpn-over-microsoft-peering.md) for more information about the IPsec configuration.
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Test your private peering connectivity by **counting** packets arriving and leav
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/test-private-peering.png" alt-text="Screenshot of troubleshooting connectivity issues options.":::
-1. Execute the [PsPing](https://docs.microsoft.com/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address and keep it running during the connectivity test.
+1. Execute the [PsPing](/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address and keep it running during the connectivity test.
1. Fill out the fields of the form, making sure to enter the same on-premises and Azure IP addresses used in Step 5. Then select **Submit** and then wait for your results to load. Once your results are ready, review the information for interpreting them below.
For more information or help, check out the following links:
[CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md [ARP]: ./expressroute-troubleshooting-arp-resource-manager.md [HA]: ./designing-for-high-availability-with-expressroute.md
-[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md
+[DR-Pvt]: ./designing-for-disaster-recovery-with-expressroute-privatepeering.md
expressroute How To Configure Custom Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/how-to-configure-custom-bgp-communities.md
BGP communities are groupings of IP prefixes tagged with a community value. This
> The `12076:` is required before your custom community value. >
-1. Retrieve your virtual network and review its updated properties. The **RegionalCommunity** value is predefined based on the Azure region of the virtual network; to view the regional BGP community values for private peering, see [ExpressRoute routing requirements](https://docs.microsoft.com/azure/expressroute/expressroute-routing#bgp). The **VirtualNetworkCommunity** value should match your custom definition.
+1. Retrieve your virtual network and review its updated properties. The **RegionalCommunity** value is predefined based on the Azure region of the virtual network; to view the regional BGP community values for private peering, see [ExpressRoute routing requirements](./expressroute-routing.md#bgp). The **VirtualNetworkCommunity** value should match your custom definition.
```azurepowershell-interactive $virtualnetwork = @{
BGP communities are groupings of IP prefixes tagged with a community value. This
## Next steps - [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md).-- [Troubleshoot your network performance](expressroute-troubleshooting-network-performance.md)
+- [Troubleshoot your network performance](expressroute-troubleshooting-network-performance.md)
frontdoor Concept Origin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-origin.md
Azure Front Door Standard/Premium sends periodic HTTP/HTTPS probe requests to ea
>[!NOTE] >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your backends receive. For example, if the interval is set to 30 seconds with, say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
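
The arithmetic behind the note's estimate is simple: each POP probes every `interval` seconds, so probes per minute equal `pops * 60 / interval`. A quick sketch with the numbers from the example:

```shell
# Worked version of the note's estimate: 100 POPs probing every 30 seconds
# yields roughly 200 probe requests per minute per backend.
pops=100
interval_seconds=30
probes_per_minute=$(( pops * 60 / interval_seconds ))
echo "$probes_per_minute"
```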
-For more information, see [Health probes](/azure/frontdoor/front-door-health-probes).
+For more information, see [Health probes](../front-door-health-probes.md).
### Load-balancing settings
There are four traffic routing methods available in Azure Front Door Standard/Pr
* **[Priority](#priority):** You can assign priorities to your backends when you want to configure a primary backend to service all traffic. The secondary backend can be a backup in case the primary backend becomes unavailable. * **[Weighted](#weighted):** You can assign weights to your backends when you want to distribute traffic across a set of backends, whether evenly or according to the weight coefficients.
-All Azure Front Door Standard/Premium configurations include monitoring of backend health and automated instant global failover. For more information, see [Backend Monitoring](/azure/frontdoor/front-door-health-probes). Your Front Door can work based off of a single routing method. But depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
+All Azure Front Door Standard/Premium configurations include monitoring of backend health and automated instant global failover. For more information, see [Backend Monitoring](../front-door-health-probes.md). Your Front Door can work based off of a single routing method. But depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
### <a name = "latency"></a>Lowest latencies based traffic-routing
The weighted method enables some useful scenarios:
## Next steps
-Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md)
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md)
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/overview.md
Key features included with Azure Front Door Standard/Premium (Preview):
- Accelerated application performance by using **[split TCP-based](../front-door-routing-architecture.md#splittcp)** anycast protocol. -- Intelligent **[health probe](/azure/frontdoor/front-door-health-probes)** monitoring and load balancing among **[origins](concept-origin.md)**.
+- Intelligent **[health probe](../front-door-health-probes.md)** monitoring and load balancing among **[origins](concept-origin.md)**.
- Define your own **[custom domain](how-to-add-custom-domain.md)** with flexible domain validation.
Subscribe to the RSS feed and view the latest Azure Front Door feature updates o
## Next steps
-* Learn how to [create a Front Door](create-front-door-portal.md).
+* Learn how to [create a Front Door](create-front-door-portal.md).
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075-sept2016.md
The IRS 1075 September 2016 blueprint sample provides governance guardrails using [Azure Policy](../../policy/overview.md) that help you assess specific
-[IRS 1075 September 2016](/azure/governance/blueprints/samples/irs-1075-sept2016) controls. This blueprint helps
+IRS 1075 September 2016 controls. This blueprint helps
customers deploy a core set of policies for any Azure-deployed architecture that must implement controls for IRS 1075 September 2016.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Your access to the Azure API for FHIR will be maintained if the key vault hostin
The export job will be picked up from another region after 10 minutes without an update to the job status. Follow the guidance for Azure storage for recovering your storage account in the event of a regional outage. For more information, see [Disaster recovery and storage account failover](../../storage/common/storage-disaster-recovery-guidance.md).
-Ensure that you grant the same permissions to the system identity of the Azure API for FHIR. Also, if the storage account is configured with selected networks, see [How to export FHIR data](/azure/healthcare-apis/fhir/export-data).
+Ensure that you grant the same permissions to the system identity of the Azure API for FHIR. Also, if the storage account is configured with selected networks, see [How to export FHIR data](../fhir/export-data.md).
### IoMT FHIR Connector
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
+
+ Title: Move FHIR service to another subscription or resource group
+description: This article describes how to move an Azure API for FHIR service instance
++++ Last updated : 01/14/2022+++
+# Move FHIR service to another subscription or resource group
+
+In this article, you'll learn how to move an Azure API for FHIR service instance to another subscription or another resource group.
++
+Moving to a different region is not supported, though the option may be available from the list. See more information on [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
+
+> [!Note]
+> Moving an instance of Azure API for FHIR between subscriptions or resource groups is supported, as long as Private Link is NOT enabled and no IoMT connectors are created.
+
+## Move to another subscription
+
+You can move an Azure API for FHIR service instance to another subscription from the portal. However, the runtime and data for the service are not moved. The **move** operation typically takes about 15 minutes, though the actual time may vary.
+
+The **move** operation takes a few simple steps.
+
+1. Select a FHIR service instance
+
+Select the FHIR service from the source subscription and then the target subscription.
+
+ :::image type="content" source="media/move/move-source-target.png" alt-text="Screenshot of Move to another subscription with source and target." lightbox="media/move/move-source-target.png":::
+
+2. Validate the move operation
+
+This step validates whether the selected resource can be moved. It takes a few minutes and returns a status from **Pending validation** to **Succeeded** or **Failed**. If the validation failed, you can view the error details, fix the error, and restart the **move** operation.
+
+ :::image type="content" source="media/move/move-validation.png" alt-text="Screenshot of Move to another subscription with validation." lightbox="media/move/move-validation.png":::
+
+3. Review and confirm the move operation
+
+After reviewing the move operation summary, select the confirmation checkbox at the bottom of the screen and select the **Move** button to complete the operation.
+
+ :::image type="content" source="media/move/move-review.png" alt-text="Screenshot of Move to another subscription with confirmation." lightbox="media/move/move-review.png":::
+
+Optionally, you can check the activity log in the source subscription and target subscription.
+
+## Move to another resource group
+
+The process works similarly to **Move to another subscription**, except the selected FHIR service will be moved to a different resource group in the same subscription.
+
+## Next steps
+
+In this article, you've learned how to move the FHIR service. For more information about the FHIR service, see
+
+>[!div class="nextstepaction"]
+>[Supported FHIR Features](fhir-features-supported.md)
+
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
While tools such as [Postman](../use-postman.md), [cURL](../using-curl.md), and
The [FHIR Importer](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/FhirImporter) is an Azure Function or microservice, written in C#, that imports FHIR bundles in JSON or NDJSON formats as soon as they're uploaded to an Azure storage container. - Behind the scenes, the Azure Storage trigger starts the Azure Function when a new document is detected and the document is the input to the function.-- It processes multiple documents in parallel and provides a basic retry logic using [HTTP call retries](https://docs.microsoft.com/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly) when the FHIR service is too busy to handle the requests.
+- It processes multiple documents in parallel and provides a basic retry logic using [HTTP call retries](/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly) when the FHIR service is too busy to handle the requests.
The FHIR Importer works for the FHIR service in Healthcare APIs and Azure API for FHIR.
In this article, you've learned about the tools and the steps for bulk-importing
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Moving data from Azure API for FHIR to Azure Synapse Analytics](move-to-synapse.md)
+>[Moving data from Azure API for FHIR to Azure Synapse Analytics](move-to-synapse.md)
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 12/16/2021+ Last updated : 01/14/2022
After you've completed this final step, you're ready to export the data using $e
> [!Note] > Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.
+## Use Azure storage accounts behind firewalls
+
+FHIR service supports a secure export operation. Choose one of the two options below:
+
+* Allowing FHIR service as a Microsoft Trusted Service to access the Azure storage account.
+
+* Allowing specific IP addresses associated with FHIR service to access the Azure storage account.
+This option provides two different configurations, depending on whether the storage account is in the same region as the FHIR service or in a different region.
+
+### Allowing FHIR service as a Microsoft Trusted Service
+
+Select a storage account from the Azure portal, and then select the **Networking** blade. Select **Selected networks** under the **Firewalls and virtual networks** tab.
+
+ :::image type="content" source="media/export-data/storage-networking-1.png" alt-text="Screenshot of Azure Storage Networking Settings." lightbox="media/export-data/storage-networking-1.png":::
+
+Select **Microsoft.HealthcareApis/workspaces** from the **Resource type** dropdown list and your workspace from the **Instance name** dropdown list.
+
+Under the **Exceptions** section, select the box **Allow trusted Microsoft services to access this storage account** and save the setting.
++
+Next, specify the FHIR service instance in the selected workspace instance for the storage account using the PowerShell command.
+
+```powershell
+$subscription="xxx"
+$tenantId = "xxx"
+$resourceGroupName = "xxx"
+$storageaccountName = "xxx"
+$workspacename="xxx"
+$fhirname="xxx"
+$resourceId = "/subscriptions/$subscription/resourceGroups/$resourceGroupName/providers/Microsoft.HealthcareApis/workspaces/$workspacename/fhirservices/$fhirname"
+
+Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageaccountName -TenantId $tenantId -ResourceId $resourceId
+```
+
+You can see that the networking setting for the storage account shows **two selected** in the **Instance name** dropdown list. One is linked to the workspace instance and the second is linked to the FHIR service instance.
+
+ :::image type="content" source="media/export-data/storage-networking-2.png" alt-text="Screenshot of Azure Storage Networking Settings with resource type and instance names." lightbox="media/export-data/storage-networking-2.png":::
+
+Note that you'll need to install the `Az.Storage` module, which provides `Add-AzStorageAccountNetworkRule`, using an administrator account. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md)
+
+```powershell
+Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force
+```
+
+You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and is not publicly accessible. To access the exported files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account if possible.
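
For reference, the ARM resource ID passed to the PowerShell command above is built from fixed path segments. The sketch below assembles it with placeholder subscription, resource group, workspace, and service names (illustration only):

```shell
# Illustration only: the shape of the FHIR service ARM resource ID used above.
# Every segment value is a placeholder.
subscription="00000000-0000-0000-0000-000000000000"
resource_group="myrg"
workspace_name="myworkspace"
fhir_name="myfhir"

resource_id="/subscriptions/${subscription}/resourceGroups/${resource_group}/providers/Microsoft.HealthcareApis/workspaces/${workspace_name}/fhirservices/${fhir_name}"
echo "$resource_id"
```

The `/providers/Microsoft.HealthcareApis/workspaces/.../fhirservices/...` tail is what identifies the specific FHIR service instance within the workspace.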
+
+> [!IMPORTANT]
+> The user interface will be updated later to allow you to select the Resource type for FHIR service and a specific service instance.
+
+### Allowing specific IP addresses for the Azure storage account in a different region
+
+Select **Networking** of the Azure storage account from the portal.
+
+Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks. You can find the IP address in the table below for the Azure region where the FHIR service is provisioned.
+
+|**Azure Region** |**Public IP Address** |
+|:-|:-|
+| Australia East | 20.53.44.80 |
+| Canada Central | 20.48.192.84 |
+| Central US | 52.182.208.31 |
+| East US | 20.62.128.148 |
+| East US 2 | 20.49.102.228 |
+| East US 2 EUAP | 20.39.26.254 |
+| Germany North | 51.116.51.33 |
+| Germany West Central | 51.116.146.216 |
+| Japan East | 20.191.160.26 |
+| Korea Central | 20.41.69.51 |
+| North Central US | 20.49.114.188 |
+| North Europe | 52.146.131.52 |
+| South Africa North | 102.133.220.197 |
+| South Central US | 13.73.254.220 |
+| Southeast Asia | 23.98.108.42 |
+| Switzerland North | 51.107.60.95 |
+| UK South | 51.104.30.170 |
+| UK West | 51.137.164.94 |
+| West Central US | 52.150.156.44 |
+| West Europe | 20.61.98.66 |
+| West US 2 | 40.64.135.77 |
+
+> [!NOTE]
+> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
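
Automation scripts sometimes need to pick the right firewall IP for their region; a small lookup helper over the table above can do that. The sketch below covers a subset of rows, with values copied from the table (illustration only):

```shell
# Illustration only: look up the FHIR service public IP for an Azure region.
# Values are taken from the table above (subset of rows shown).
fhir_region_ip() {
  case "$1" in
    "East US")     echo "20.62.128.148" ;;
    "West Europe") echo "20.61.98.66" ;;
    "UK South")    echo "51.104.30.170" ;;
    *)             echo "unknown" ;;
  esac
}

fhir_region_ip "West Europe"
```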
+
+### Allowing specific IP addresses for the Azure storage account in the same region
+
+The configuration process is the same as above, except that a specific IP address range in Classless Inter-Domain Routing (CIDR) format, 100.64.0.0/10, is used instead.
+This range, which includes 100.64.0.0 - 100.127.255.255, must be specified because the actual IP address used by the service varies for each $export request, but will always fall within that range.
+
+> [!Note]
+> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
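
To make the 100.64.0.0/10 range concrete, the sketch below checks whether an IPv4 address falls inside it (a /10 prefix keeps the top 10 bits). This is a standalone illustration, not part of any product configuration:

```shell
# Illustration only: test membership in 100.64.0.0/10
# (100.64.0.0 - 100.127.255.255), the range described above.
in_shared_range() {
  # Split the dotted-quad address into four octets (POSIX-safe).
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  ip=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  base=$(( (100 << 24) | (64 << 16) ))   # 100.64.0.0
  mask=$(( 0xFFC00000 ))                 # /10 prefix mask (top 10 bits)
  [ $(( ip & mask )) -eq "$base" ]
}

in_shared_range 100.64.0.1 && echo "in range"
in_shared_range 10.0.2.5   || echo "outside range"
```

Note the second address, 10.0.2.5, sits in the 10.0.2.0/24 range the callout above mentions, which is why an `$export` using such an address would not match a 100.64.0.0/10 firewall rule.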
+ ## Next steps
-In this article, you learned about the three steps in configuring export settings that allows you to export data out of FHIR service account to a storage account. For more information about the Bulk Export feature that allows data to be exported from the FHIR service, see
+In this article, you learned about the three steps in configuring export settings that allow you to export data out of the FHIR service account to a storage account. For more information about the Bulk Export feature that allows data to be exported from the FHIR service, see
>[!div class="nextstepaction"]
->[How to export FHIR data](export-data.md)
+>[How to export FHIR data](export-data.md)
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/convert-data.md
Previously updated : 12/10/2021 Last updated : 01/14/2022
In the table below, you'll find the IP address for the Azure region where the FH
> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](./export-data.md#secure-export-to-azure-storage)
+> The above steps are similar to the configuration steps described in the document How to configure FHIR export settings. For more information, see [Configure export settings](./configure-export-data.md).
For private network access (that is, Private Link), you can also disable the public network access of ACR. * Select the Networking blade of the Azure storage account from the portal.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/export-data.md
Currently we support $export for ADLS Gen2 enabled storage accounts, with the fo
- User cannot take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export). - Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
+To export data to storage accounts behind firewalls, see [Configure settings for export](configure-export-data.md).
## Settings and parameters
The FHIR service supports the following query parameters. All of these parameter
> [!Note] > Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.-
-## Secure Export to Azure Storage
-
-FHIR service supports a secure export operation. Choose one of the two options below:
-
-* Allowing FHIR service as a Microsoft Trusted Service to access the Azure storage account.
-
-* Allowing specific IP addresses associated with FHIR service to access the Azure storage account.
-This option provides two different configurations depending on whether the storage account is in the same location as, or is in a different location from that of the FHIR service.
-
-### Allowing FHIR service as a Microsoft Trusted Service
-
-Select a storage account from the Azure portal, and then select the **Networking** blade. Select **Selected networks** under the **Firewalls and virtual networks** tab.
-
-> [!IMPORTANT]
-> Ensure that you've granted access permission to the storage account for FHIR service using its managed identity. For more details, see [Configure export setting and set up the storage account](./configure-export-data.md).
-
- :::image type="content" source="media/export-data/storage-networking.png" alt-text="Azure Storage Networking Settings." lightbox="media/export-data/storage-networking.png":::
-
-Under the **Exceptions** section, select the box **Allow trusted Microsoft services to access this storage account** and save the setting.
--
-You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and is not publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account for a short period of time.
-
-> [!IMPORTANT]
-> The user interface will be updated later to allow you to select the Resource type for FHIR service and a specific service instance.
-
-### Allowing specific IP addresses for the Azure storage account in a different region
-
-Select **Networking** of the Azure storage account from the
-portal.
-
-Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to
-allow access from the internet or your on-premises networks. You can
-find the IP address in the table below for the Azure region where the
-FHIR service is provisioned.
-
-|**Azure Region** |**Public IP Address** |
-|:-|:-|
-| Australia East | 20.53.44.80 |
-| Canada Central | 20.48.192.84 |
-| Central US | 52.182.208.31 |
-| East US | 20.62.128.148 |
-| East US 2 | 20.49.102.228 |
-| East US 2 EUAP | 20.39.26.254 |
-| Germany North | 51.116.51.33 |
-| Germany West Central | 51.116.146.216 |
-| Japan East | 20.191.160.26 |
-| Korea Central | 20.41.69.51 |
-| North Central US | 20.49.114.188 |
-| North Europe | 52.146.131.52 |
-| South Africa North | 102.133.220.197 |
-| South Central US | 13.73.254.220 |
-| Southeast Asia | 23.98.108.42 |
-| Switzerland North | 51.107.60.95 |
-| UK South | 51.104.30.170 |
-| UK West | 51.137.164.94 |
-| West Central US | 52.150.156.44 |
-| West Europe | 20.61.98.66 |
-| West US 2 | 40.64.135.77 |
-
-> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
-
-### Allowing specific IP addresses for the Azure storage account in the same region
-
-The configuration process is the same as above except a specific IP
-address range in CIDR format is used instead, 100.64.0.0/10. The reason why the IP address range, which includes 100.64.0.0 – 100.127.255.255, must be specified is because the actual IP address used by the service varies, but will be within the range, for each $export request.
-
-> [!Note]
-> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
## Next steps
In this article, you've learned how to export FHIR resources using the $export c
>[Export de-identified data](de-identified-export.md) >[!div class="nextstepaction"]
->[Export to Synapse](move-to-synapse.md)
+>[Export to Synapse](move-to-synapse.md)
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/healthcare-apis-faqs.md
Previously updated : 01/03/2022 Last updated : 01/14/2022
During the public preview phase, Azure Healthcare APIs is available for you to u
Please refer to the [Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page for the most current information. ### What are the subscription quota limits for the Azure Healthcare APIs?
-Please refer to [Healthcare APIs service limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-healthcare-apis) for the most current information.
+Please refer to [Healthcare APIs service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-healthcare-apis) for the most current information.
+
+### What is the backup and recovery policy for the Azure Healthcare APIs?
+Data for the managed service is automatically backed up every 12 hours, and the backups are kept for 7 days. Data can be restored by the support team. Customers can make a request to restore the data, or change the default data backup policy, through a support ticket.
## More frequently asked questions [FAQs about Azure Healthcare APIs FHIR service](./fhir/fhir-faq.md)
Please refer to [Healthcare APIs service limits](https://docs.microsoft.com/azur
[FAQs about Azure Healthcare APIs IoT connector](./iot/iot-connector-faqs.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/device-data-through-iot-hub.md
This tutorial provides the steps to connect and route device data from IoT Hub t
- An active Azure subscription - [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) - FHIR service resource with at least one IoT connector - [Deploy IoT connector using Azure portal](deploy-iot-connector-in-azure.md)-- Azure IoT Hub resource connected with real or simulated device(s) - [Create an IoT Hub using the Azure portal](/azure/iot-hub/iot-hub-create-through-portal)
+- Azure IoT Hub resource connected with real or simulated device(s) - [Create an IoT Hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md)
> [!TIP] > If you are using an Azure IoT Hub simulated device application, feel free to pick the application of your choice among the different supported languages and systems.
Below is a diagram of the IoT device message flow from IoT Hub into IoT connecto
## Create a managed identity for IoT Hub
-For this tutorial, we'll be using an IoT Hub with a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview).
+For this tutorial, we'll be using an IoT Hub with a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
-The user-assigned managed identity will be used to provide access to your IoT connector device message event hub using [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview).
+The user-assigned managed identity will be used to provide access to your IoT connector device message event hub using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-Follow these directions to create a user-assigned managed identity with your IoT Hub: [IoT Hub support for managed identities](/azure/iot-hub/iot-hub-managed-identity#user-assigned-managed-identity).
+Follow these directions to create a user-assigned managed identity with your IoT Hub: [IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md#user-assigned-managed-identity).
## Connect IoT Hub with IoT connector Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example: Event Hubs, Storage Accounts, and Service Buses). IoT connector uses this feature to allow an IoT Hub to connect and send device messages to the IoT connector device message event hub endpoint.
-Follow these directions to grant access to the IoT Hub user-assigned managed identity to your IoT connector device message event hub and set up message routing: [Configure message routing with managed identities](/azure/iot-hub/iot-hub-managed-identity#egress-connectivity-from-iot-hub-to-other-azure-resources).
+Follow these directions to grant access to the IoT Hub user-assigned managed identity to your IoT connector device message event hub and set up message routing: [Configure message routing with managed identities](../../iot-hub/iot-hub-managed-identity.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
## Send device message to IoT Hub
To learn about the different stages of data flow within IoT connector, see:
>[!div class="nextstepaction"] >[IoT connector data flow](iot-data-flow.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/iot-connector-overview.md
IoT connector transforms device data into Fast Healthcare Interoperability Resou
Below is an overview of each step the IoT connector performs once IoMT device data is received. Each step is explained further in the [IoT connector data flow](./iot-data-flow.md) article. > [!NOTE]
-> Learn more about [Azure Event Hubs](/azure/event-hubs) use cases, features and architectures.
+> Learn more about [Azure Event Hubs](../../event-hubs/index.yml) use cases, features and architectures.
:::image type="content" source="media/iot-data-flow/iot-data-flow.png" alt-text="IoMT data flows from IoT devices into an event hub. IoMT data is ingested by IoT connector as it is normalized, grouped, transformed, and persisted in the FHIR service." lightbox="media/iot-data-flow/iot-data-flow.png":::
IoT connector may also be used with the following Microsoft solutions to provide
* [Microsoft Teams](./iot-connector-teams.md) ## Secure
-IoT connector uses Azure [Resource-based Access Control](/azure/role-based-access-control/overview) and [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview) for granular security and access control of your IoT connector assets.
+IoT connector uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) for granular security and access control of your IoT connector assets.
## Next steps
For more information about deploying IoT connector, see:
>[!div class="nextstepaction"] >[Deploying IoT connector in the Azure portal](./deploy-iot-connector-in-azure.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
iot-develop Quickstart Devkit Espressif Esp32 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-espressif-esp32.md
As a next step, explore the following articles to learn more about working with
> [!div class="nextstepaction"] > [Azure RTOS embedded development quickstarts](quickstart-devkit-mxchip-az3166.md) > [!div class="nextstepaction"]
-> [Azure IoT device development documentation](/azure/iot-develop/)
+> [Azure IoT device development documentation](./index.yml)
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-send-additional-data.md
If the custom allocation policy webhook wishes to return some data to the device
This feature is available in C, C#, JAVA and Node.js client SDKs. To learn more about the Azure IoT SDKs available for IoT Hub and the IoT Hub Device Provisioning service, see [Microsoft Azure IoT SDKs]( https://github.com/Azure/azure-iot-sdks).
-[IoT Plug and Play (PnP)](/azure/iot-develop/overview-iot-plug-and-play) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
## Next steps
-* To learn how to provision devices using a custom allocation policy, see [How to use custom allocation policies](./how-to-use-custom-allocation-policies.md)
+* To learn how to provision devices using a custom allocation policy, see [How to use custom allocation policies](./how-to-use-custom-allocation-policies.md)
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/virtual-network-support.md
Common approaches to restricting connectivity include [DPS IP filter rules](./io
Devices that operate in on-premises networks can use [Virtual Private Network (VPN)](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](https://azure.microsoft.com/services/expressroute/) private peering to connect to a VNET in Azure and access DPS resources through private endpoints.
-A private endpoint is a private IP address allocated inside a customer-owned VNET by which an Azure resource is accessible. By having a private endpoint for your DPS resource, you will be able to allow devices operating inside your VNET to request provisioning by your DPS resource without allowing traffic to the public endpoint.
+A private endpoint is a private IP address allocated inside a customer-owned VNET through which an Azure resource is accessible. With a private endpoint for your DPS resource, devices operating inside your VNET can request provisioning from your DPS resource without traffic being sent to the public endpoint. Each DPS resource can support multiple private endpoints, each of which may be located in a VNET in a different region.
## Prerequisites
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
Learn more about the [Defender for IoT micro agent](../defender-for-iot/device-b
:::image type="content" source="media/how-to-connect-downstream-iot-edge-device/select-device.png" alt-text="Screenshot showing where your device is located for selection.":::
-1. Select the `DefenderIotMicroAgent` module twin that you created from [these instructions](../defender-for-iot/device-builders/quickstart-create-micro-agent-module-twin.md#create-defenderiotmicroagent-module-twin).
+1. Select the `DefenderIotMicroAgent` module twin that you created from [these instructions](../defender-for-iot/device-builders/quickstart-create-micro-agent-module-twin.md#create-a-defenderiotmicroagent-module-twin).
:::image type="content" source="media/how-to-connect-downstream-iot-edge-device/defender-micro-agent.png" alt-text="Screenshot showing the location of the DefenderIotMicroAgent.":::
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-x509-openssl.md
Last updated 02/26/2021
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to OpenSSL that I can use to generate test certificates.
+#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to OpenSSL, which I can use to generate test certificates.
# Tutorial: Using OpenSSL to create test certificates
Although you can purchase X.509 certificates from a trusted certification author
Create a directory structure for the certification authority.
-* The **certs** directory stores new certificates.
-* The **db** directory is used for the certificate database.
-* The **private** directory stores the CA private key.
+* The *certs* directory stores new certificates.
+* The *db* directory is used for the certificate database.
+* The *private* directory stores the CA private key.
```bash mkdir rootca
Create a directory structure for the certification authority.
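Assuming the conventional OpenSSL CA layout this tutorial uses, the full structure can be sketched in one pass. Creating the *db/index* and *db/serial* files here reflects the database settings that *rootca.conf* refers to in the next step; treat it as a sketch of housekeeping the excerpt doesn't show in full.

```shell
# Sketch: create the root CA layout described above.
mkdir -p rootca/certs rootca/db rootca/private
chmod 700 rootca/private                   # keep the CA private key directory locked down
touch rootca/db/index                      # empty certificate database
openssl rand -hex 16 > rootca/db/serial    # random serial for the first certificate
```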
## Step 2 - Create a root CA configuration file
-Before creating a CA, create a configuration file and save it as `rootca.conf` in the rootca directory.
+Before creating a CA, create a configuration file and save it as *rootca.conf* in the *rootca* directory.
```xml [default]
name_opt = utf8,esc_ctrl,multiline,lname,align
commonName = "Test Root CA" [ca_default]
-home = ../rootca
+home = ../rootca
database = $home/db/index serial = $home/db/serial crlnumber = $home/db/crlnumber
subjectKeyIdentifier = hash
## Step 3 - Create a root CA
-First, generate the key and the certificate signing request (CSR) in the rootca directory.
+First, generate a private key and the certificate signing request (CSR) in the *rootca* directory.
```bash openssl req -new -config rootca.conf -out rootca.csr -keyout private/rootca.key ```
-Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the ca_ext configuration file extensions on the command line. These indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
+Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the `ca_ext` configuration file extensions on the command line. These indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
```bash openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt -extensions ca_ext
Next, create a self-signed CA certificate. Self-signing is suitable for testing
## Step 4 - Create the subordinate CA directory structure
-Create a directory structure for the subordinate CA.
+Create a directory structure for the subordinate CA at the same level as the *rootca* directory.
```bash mkdir subca
Create a directory structure for the subordinate CA.
## Step 5 - Create a subordinate CA configuration file
-Create a configuration file and save it as subca.conf in the `subca` directory.
+Create a configuration file and save it as *subca.conf* in the *subca* directory.
```bash [default]
name_opt = utf8,esc_ctrl,multiline,lname,align
commonName = "Test Subordinate CA" [ca_default]
-home = .
+home = .
database = $home/db/index serial = $home/db/serial crlnumber = $home/db/crlnumber
private_key = $home/private/$name.key
RANDFILE = $home/private/random new_certs_dir = $home/certs unique_subject = no
-copy_extensions = copy
+copy_extensions = copy
default_days = 365
-default_crl_days = 90
+default_crl_days = 90
default_md = sha256 policy = policy_c_o_match
subjectKeyIdentifier = hash
## Step 6 - Create a subordinate CA
-Create a new serial number in the `rootca/db/serial` file for the subordinate CA certificate.
+From the *subca* directory, create a new serial number in the *rootca/db/serial* file for the subordinate CA certificate.
```bash openssl rand -hex 16 > ../rootca/db/serial
Create a new serial number in the `rootca/db/serial` file for the subordinate CA
>[!IMPORTANT] >You must create a new serial number for every subordinate CA certificate and every device certificate that you create. Different certificates cannot have the same serial number.
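One way to honor this uniqueness requirement is to draw a fresh 128-bit random serial immediately before each issuance, exactly as the `openssl rand` call above does; two independent draws are vanishingly unlikely to collide.

```shell
# Generate a fresh 128-bit serial before each certificate issuance.
# Two random draws virtually never collide, so every certificate gets
# a distinct serial number.
serial1=$(openssl rand -hex 16)
serial2=$(openssl rand -hex 16)
echo "$serial1"
echo "$serial2"
```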
-This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isn't strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and subordinate CAs issue client certificates.
+This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isn't strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and subordinate CAs issue client certificates.
-Use the configuration file to generate a key and a certificate signing request (CSR).
+Use the configuration file to generate a private key and a certificate signing request (CSR).
```bash openssl req -new -config subca.conf -out subca.csr -keyout private/subca.key ```
-Submit the CSR to the root CA and use the root CA to issue and sign the subordinate CA certificate. Specify sub_ca_ext for the extensions switch on the command line. The extensions indicate that the certificate is for a CA that can sign certificates and certificate revocation lists (CRLs). When prompted, sign the certificate, and commit it to the database.
+Submit the CSR to the root CA and use the root CA to issue and sign the subordinate CA certificate. Specify `sub_ca_ext` for the extensions switch on the command line. The extensions indicate that the certificate is for a CA that can sign certificates and certificate revocation lists (CRLs). When prompted, sign the certificate, and commit it to the database.
```bash openssl ca -config ../rootca/rootca.conf -in subca.csr -out subca.crt -extensions sub_ca_ext
You now have both a root CA certificate and a subordinate CA certificate. You ca
3. Enter a display name in the **Certificate Name** field, and select the PEM certificate file you created previously.
-> [!NOTE]
-> The .crt certificates created above are the same as .pem certificates. You can simply change the extension when uploading a certificate to prove possession, or you can use the following OpenSSL command:
-
-```bash
-openssl x509 -in mycert.crt -out mycert.pem -outform PEM
-```
+ > [!NOTE]
+ > The .crt certificates created above are the same as .pem certificates. You can simply change the extension when uploading a certificate to prove possession, or you can use the following OpenSSL command:
+ >
+ > ```bash
+ > openssl x509 -in mycert.crt -out mycert.pem -outform PEM
+ > ```
4. Select **Save**. Your certificate is shown in the certificate list with a status of **Unverified**. The verification process will prove that you own the certificate.
-
5. Select the certificate to view the **Certificate Details** dialog. 6. Select **Generate Verification Code**. For more information, see [Prove Possession of a CA certificate](tutorial-x509-prove-possession.md).
openssl x509 -in mycert.crt -out mycert.pem -outform PEM
8. Generate a private key.
- ```bash
- $ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
- ```
+ ```bash
+ openssl genpkey -out pop.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ ```
9. Generate a certificate signing request (CSR) from the private key. Add the verification code as the subject of your certificate.
- ```bash
- openssl req -new -key pop.key -out pop.csr
-
- --
- Country Name (2 letter code) [XX]:.
- State or Province Name (full name) []:.
- Locality Name (eg, city) [Default City]:.
- Organization Name (eg, company) [Default Company Ltd]:.
- Organizational Unit Name (eg, section) []:.
- Common Name (eg, your name or your server hostname) []:BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A
- Email Address []:
-
- Please enter the following 'extra' attributes
- to be sent with your certificate request
- A challenge password []:
- An optional company name []:
-
- ```
+ ```bash
+ openssl req -new -key pop.key -out pop.csr
+
+ --
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:.
+ Common Name (eg, your name or your server hostname) []:BB0C656E69AF75E3FB3C8D922C1760C58C1DA5B05AAA9D0A
+ Email Address []:
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:
+ An optional company name []:
+
+ ```
10. Create a certificate using the subordinate CA configuration file and the CSR for the proof of possession certificate.
- ```bash
+ ```bash
openssl ca -config subca.conf -in pop.csr -out pop.crt -extensions client_ext
+ ```
- ```
-
-11. Select the new certificate in the **Certificate Details** view. To find the PEM file, navigate to the certs folder.
+11. Select the new certificate in the **Certificate Details** view. To find the PEM file, navigate to the *certs* folder.
12. After the certificate uploads, select **Verify**. The CA certificate status should change to **Verified**.
Navigate to your IoT Hub in the Azure portal and create a new IoT device identit
## Step 9 - Create a client device certificate
-To generate a client certificate, you must first generate a private key. The following command shows how to use OpenSSL to create a private key. Create the key in the subca directory.
+To generate a client certificate, you must first generate a private key. The following command shows how to use OpenSSL to create a private key. Create the key in the *subca* directory.
```bash openssl genpkey -out device.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048 ```
-Create a certificate signing request (CSR) for the key. You do not need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field. You can also enter your own values for the other parameters such as **Country**, **Organization Name**, and so on.
+Create a certificate signing request (CSR) for the key. You do not need to enter a challenge password or an optional company name. You must, however, enter the device ID in the common name field. You can also enter your own values for the other parameters such as **Country Name**, **Organization Name**, and so on.
```bash openssl req -new -key device.key -out device.csr
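If you prefer to skip the interactive prompts, the same CSR can be produced non-interactively with the `-subj` switch; `my-device-id` below is a placeholder for your actual IoT Hub device ID.

```shell
# Sketch: generate the device key and a CSR in one non-interactive pass.
# The CN must match the IoT Hub device ID ("my-device-id" is a placeholder).
openssl genpkey -out device.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
openssl req -new -key device.key -out device.csr -subj "/CN=my-device-id"
# Sanity-check the request before submitting it to the subordinate CA.
openssl req -noout -verify -in device.csr
```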
Go to [Testing Certificate Authentication](tutorial-x509-test-certificate.md) to
```bash openssl pkcs12 -export -in device.crt -inkey device.key -out device.pfx
-```
+```
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-node.md
The code samples below will show you how to create a client, set a certificate,
## Integrating with App Configuration
-The Azure SDK provides a helper method, [parseKeyVaultCertificateIdentifier](/javascript/api/@azure/keyvault-certificates#parseKeyVaultCertificateIdentifier_string_), to parse the given Key Vault certificate ID. This is necessary if you use [App Configuration](/azure/azure-app-configuration/) references to Key Vault. App Config stores the Key Vault certificate ID. You need the _parseKeyVaultCertificateIdentifier_ method to parse that ID to get the certificate name. Once you have the certificate name, you can get the current certificate using code from this quickstart.
+The Azure SDK provides a helper method, [parseKeyVaultCertificateIdentifier](/javascript/api/@azure/keyvault-certificates#parseKeyVaultCertificateIdentifier_string_), to parse the given Key Vault certificate ID. This is necessary if you use [App Configuration](../../azure-app-configuration/index.yml) references to Key Vault. App Config stores the Key Vault certificate ID. You need the _parseKeyVaultCertificateIdentifier_ method to parse that ID to get the certificate name. Once you have the certificate name, you can get the current certificate using code from this quickstart.
## Next steps
In this quickstart, you created a key vault, stored a certificate, and retrieved
- See an [Access Key Vault from App Service Application Tutorial](../general/tutorial-net-create-vault-azure-web-app.md) - See an [Access Key Vault from Virtual Machine Tutorial](../general/tutorial-net-virtual-machine.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/logging.md
You can use the Key Vault solution in Azure Monitor logs to review Key Vault `Au
For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../../azure-monitor/insights/key-vault-insights-overview.md).
-For understanding how to analyze logs, see [Sample kusto log queries](https://docs.microsoft.com/azure/key-vault/general/monitor-key-vault#analyzing-logs)
+To learn how to analyze logs, see [Sample Kusto log queries](./monitor-key-vault.md#analyzing-logs)
## Next steps
For understanding how to analyze logs, see [Sample kusto log queries](https://do
- [Azure monitor](../../azure-monitor/index.yml) - For a tutorial that uses Azure Key Vault in a .NET web application, see [Use Azure Key Vault from a web application](tutorial-net-create-vault-azure-web-app.md). - For programming references, see [the Azure Key Vault developer's guide](developers-guide.md).-- For a list of Azure PowerShell 1.0 cmdlets for Azure Key Vault, see [Azure Key Vault cmdlets](/powershell/module/az.keyvault/#key_vault).
+- For a list of Azure PowerShell 1.0 cmdlets for Azure Key Vault, see [Azure Key Vault cmdlets](/powershell/module/az.keyvault/#key_vault).
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/whats-new.md
For more information, see [Configure key auto-rotation in Key Vault](../keys/how
Integration of Azure Key Vault with Azure Policy has reached general availability and is now ready for production use. This capability is a step towards our commitment to simplifying secure secrets management in Azure, while also enhancing the policy enforcements that you can define on Key Vault, keys, secrets, and certificates. Azure Policy provides the ability to place guardrails on Key Vault and its objects to ensure they're compliant with your organization's security recommendations and compliance regulations. It allows you to perform real-time policy-based enforcement and on-demand compliance assessment of existing secrets in your Azure environment. The results of audits performed by policy will be available to you in a compliance dashboard, where you can drill down into which resources and components are compliant and which are not. Azure Policy for Key Vault provides a full suite of built-in policies offering governance of your keys, secrets, and certificates.
-You can learn more about how to [Integrate Azure Key Vault with Azure Policy](https://docs.microsoft.com/azure/key-vault/general/azure-policy?tabs=certificates) and assign a new policy. Announcement is linked [here](https://azure.microsoft.com/updates/gaazurepolicyforkeyvault).
+You can learn more about how to [Integrate Azure Key Vault with Azure Policy](./azure-policy.md?tabs=certificates) and assign a new policy. Announcement is linked [here](https://azure.microsoft.com/updates/gaazurepolicyforkeyvault).
## June 2021
First preview version (version 2014-12-08-preview) was announced on January 8, 2
## Next steps
-If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
+If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Configuration of expiry notification for event grid key near expiry event. You c
:::image type="content" source="../media/keys/key-rotation/key-rotation-5.png" alt-text="Configure Notification"::: For more information about event grid notifications in Key Vault, see
-[Azure Key Vault as Event Grid source](https://docs.microsoft.com/azure/event-grid/event-schema-key-vault?tabs=event-grid-event-schema)
+[Azure Key Vault as Event Grid source](../../event-grid/event-schema-key-vault.md?tabs=event-grid-event-schema)
## Configure key rotation with ARM template
Key rotation policy can also be configured using ARM templates.
- [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md) - [Use an Azure RBAC to control access to keys, certificates and secrets](../general/rbac-guide.md)-- [Azure Data Encryption At Rest](https://docs.microsoft.com/azure/security/fundamentals/encryption-atrest)-- [Azure Storage Encryption](https://docs.microsoft.com/azure/storage/common/storage-service-encryption)-- [Azure Disk Encryption](https://docs.microsoft.com/azure/virtual-machines/disk-encryption)
+- [Azure Data Encryption At Rest](../../security/fundamentals/encryption-atrest.md)
+- [Azure Storage Encryption](../../storage/common/storage-service-encryption.md)
+- [Azure Disk Encryption](../../virtual-machines/disk-encryption.md)
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-node.md
The code sample below will show you how to create a client, set a key, retrieve
## Integrating with App Configuration
-The Azure SDK provides a helper method, [parseKeyVaultKeyIdentifier](/javascript/api/@azure/keyvault-keys#functions), to parse the given Key Vault Key ID. This is necessary if you use [App Configuration](/azure/azure-app-configuration/) references to Key Vault. App Config stores the Key Vault Key ID. You need the _parseKeyVaultKeyIdentifier_ method to parse that ID to get the key name. Once you have the key name, you can get the current key value using code from this quickstart.
+The Azure SDK provides a helper method, [parseKeyVaultKeyIdentifier](/javascript/api/@azure/keyvault-keys#functions), to parse the given Key Vault Key ID. This is necessary if you use [App Configuration](../../azure-app-configuration/index.yml) references to Key Vault. App Config stores the Key Vault Key ID. You need the _parseKeyVaultKeyIdentifier_ method to parse that ID to get the key name. Once you have the key name, you can get the current key value using code from this quickstart.
## Next steps
In this quickstart, you created a key vault, stored a key, and retrieved that ke
- Read an [Overview of Azure Key Vault Keys](about-keys.md) - How to [Secure access to a key vault](../general/security-features.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-node.md
The code samples below will show you how to create a client, set a secret, retri
## Integrating with App Configuration
-The Azure SDK provides a helper method, [parseKeyVaultSecretIdentifier](/javascript/api/@azure/keyvault-secrets/#parseKeyVaultSecretIdentifier_string_), to parse the given Key Vault Secret ID. This is necessary if you use [App Configuration](/azure/azure-app-configuration/) references to Key Vault. App Config stores the Key Vault Secret ID. You need the _parseKeyVaultSecretIdentifier_ method to parse that ID to get the secret name. Once you have the secret name, you can get the current secret value using code from this quickstart.
+The Azure SDK provides a helper method, [parseKeyVaultSecretIdentifier](/javascript/api/@azure/keyvault-secrets/#parseKeyVaultSecretIdentifier_string_), to parse the given Key Vault Secret ID. This is necessary if you use [App Configuration](../../azure-app-configuration/index.yml) references to Key Vault. App Config stores the Key Vault Secret ID. You need the _parseKeyVaultSecretIdentifier_ method to parse that ID to get the secret name. Once you have the secret name, you can get the current secret value using code from this quickstart.
## Next steps
In this quickstart, you created a key vault, stored a secret, and retrieved that
- Read an [Overview of Azure Key Vault Secrets](about-secrets.md) - How to [Secure access to a key vault](../general/security-features.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
load-balancer Load Balancer Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-common-deployment-errors.md
This article describes some common Azure Load Balancer deployment errors and pro
|MarketplacePurchaseEligibilityFailed | Switch to the correct Administrative account to enable purchases, because the subscription is an EA Subscription. You can read more [here](../marketplace/marketplace-faq-publisher-guide.yml#what-could-block-a-customer-from-completing-a-purchase-). | |ResourceDeploymentFailure| If your load balancer is in a failed state, follow these steps to bring it back from the failed state:<ol><li>Go to https://resources.azure.com, and sign in with your Azure portal credentials.</li><li>Select **Read/Write**.</li><li>On the left, expand **Subscriptions**, and then expand the subscription with the Load Balancer to update.</li><li>Expand **ResourceGroups**, and then expand the resource group with the Load Balancer to update.</li><li>Select **Microsoft.Network** > **LoadBalancers**, and then select the Load Balancer to update, **LoadBalancer_1**.</li><li>On the display page for **LoadBalancer_1**, select **GET** > **Edit**.</li><li>Update the **ProvisioningState** value from **Failed** to **Succeeded**.</li><li>Select **PUT**.</li></ol>| |LoadBalancerWithoutFrontendIPCantHaveChildResources | A Load Balancer resource that has no frontend IP configurations cannot have child resources or components associated with it. To mitigate this error, add a frontend IP configuration, and then add the resources you are trying to add. |
-| LoadBalancerRuleCountLimitReachedForNic | A backend pool member's network interface (virtual machine, virtual machine scale set) cannot be associated to more than 300 rules. Reduce the number of rules or leverage another Load Balancer. This limit is documented on the [Load Balancer limits page](https://aka.ms/lblimits).
+| LoadBalancerRuleCountLimitReachedForNic | A backend pool member's network interface (virtual machine, virtual machine scale set) cannot be associated with more than 300 rules. Reduce the number of rules or use another Load Balancer. This limit is documented on the [Load Balancer limits page](/azure/azure-resource-manager/management/azure-subscription-service-limits#load-balancer).
| LoadBalancerInUseByVirtualMachineScaleSet | The Load Balancer resource is in use by a virtual machine scale set and cannot be deleted. Use the ARM ID provided in the error message to search for the virtual machine scale set in order to delete it. | ## Next steps * Look through the Azure Load Balancer [SKU comparison table](./skus.md)
-* Learn about [Azure Load Balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)
+* Learn about [Azure Load Balancer limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer)
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
When you use the virtual machine scale set in the back-end pool of the load bala
## Virtual Machine Scale Set Instance-level IPs
-When virtual machine scale sets with [public IPs per instance](https://docs.microsoft.com/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant).
+When virtual machine scale sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (that is, Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant).
## Outbound rules
Use the following methods to deploy a virtual machine scale set with an existing
* [Configure a virtual machine scale set with an existing instance of Azure Load Balancer using the Azure portal](./configure-vm-scale-set-portal.md) * [Configure a virtual machine scale set with an existing instance of Azure Load Balancer using Azure PowerShell](./configure-vm-scale-set-powershell.md) * [Configure a virtual machine scale set with an existing instance of Azure Load Balancer using the Azure CLI](./configure-vm-scale-set-cli.md)
-* [Update or delete an existing instance of Azure Load Balancer used by a virtual machine scale set](./update-load-balancer-with-vm-scale-set.md)
+* [Update or delete an existing instance of Azure Load Balancer used by a virtual machine scale set](./update-load-balancer-with-vm-scale-set.md)
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-appservice-insights.md
In this article, you'll learn how to gain more insights from Azure App Service workloads by using Azure Load Testing Preview and Azure App Service diagnostics.
-[App Service diagnostics](/azure/app-service/overview-diagnostics/) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
+[App Service diagnostics](../app-service/overview-diagnostics.md) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
You can take advantage of App Service diagnostics when you run load tests on applications that run on App Service.
You can take advantage of App Service diagnostics when you run load tests on app
## Get more insights when you test an App Service workload
-In this section, you use [App Service diagnostics](/azure/app-service/overview-diagnostics/) to get more insights from load testing an Azure App Service workload.
+In this section, you use [App Service diagnostics](../app-service/overview-diagnostics.md) to get more insights from load testing an Azure App Service workload.
1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
In this section, you use [App Service diagnostics](/azure/app-service/overview-d
- Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md) with secrets. -- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-parameterize-load-tests.md
When you create a load test in the Azure portal, or you use a [YAML test configu
> [!NOTE] > If you run a load test as part of your CI/CD process, you might also use the related secret store. Skip to [Use the CI/CD secret store](#cicd_secrets).
-1. [Add the secret to your key vault](/azure/key-vault/secrets/quick-create-portal#add-a-secret-to-key-vault), if you haven't already done so.
+1. [Add the secret to your key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault), if you haven't already done so.
1. Retrieve the key vault secret identifier for your secret. You'll use this secret identifier to configure your load test.
The values of the parameters aren't stored when they're passed from the CI/CD wo
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-use-a-managed-identity.md
This article shows how you can create a managed identity for an Azure Load Testing Preview resource and how to use it to read secrets from your Azure key vault.
-A managed identity in Azure Active Directory (Azure AD) allows your resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
+A managed identity in Azure Active Directory (Azure AD) allows your resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
Azure Load Testing supports only system-assigned identities. A system-assigned identity is associated with your Azure Load Testing resource and is removed when your resource is deleted. A resource can have only one system-assigned identity.
The `tenantId` property identifies which Azure AD tenant the identity belongs to
A managed identity allows the Azure Load testing resource to access other Azure resources. In this section, you grant the Azure Load Testing service access to read secret values from your key vault.
-If you don't already have a key vault, follow the instructions in [Azure Key Vault quickstart](/azure/key-vault/secrets/quick-create-cli) to create it.
+If you don't already have a key vault, follow the instructions in [Azure Key Vault quickstart](../key-vault/secrets/quick-create-cli.md) to create it.
1. In the Azure portal, go to your Azure Key Vault resource.
You've now granted access to your Azure Load Testing resource to read the secret
## Next steps
-To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
+To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-examples-and-scenarios.md
Azure Logic Apps integrates with many services, such as Azure Functions, Azure A
* [Tutorial: Call or trigger logic apps by using Azure Functions and Azure Service Bus](../logic-apps/logic-apps-scenario-function-sb-trigger.md) * [Tutorial: Create a streaming customer insights dashboard with Azure Logic Apps and Azure Functions](../logic-apps/logic-apps-scenario-social-serverless.md) * [Tutorial: Create a function that integrates with Azure Logic Apps and Azure Cognitive Services to analyze Twitter post sentiment](../azure-functions/functions-twitter-email.md)
-* [Tutorial: Build an AI-powered social dashboard by using Power BI and Azure Logic Apps](https://aka.ms/logicappsdemo)
+* [Tutorial: Build an AI-powered social dashboard by using Power BI and Azure Logic Apps](/shows/)
* [Tutorial: Monitor virtual machine changes by using Azure Event Grid and Logic Apps](../event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md) * [Tutorial: IoT remote monitoring and notifications with Azure Logic Apps connecting your IoT hub and mailbox](../iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md) * [Blog: Call SOAP services by using Azure Logic Apps](/archive/blogs/logicapps/using-soap-services-with-logic-apps)
logic-apps Logic Apps Scenario Social Serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-scenario-social-serverless.md
The workflow that you create monitors a hashtag on Twitter.
You can [build the entire solution in Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) and [deploy the solution with Azure Resource Manager template](../logic-apps/logic-apps-deploy-azure-resource-manager-templates.md). For a video walkthrough that shows how to create this solution,
-[watch this Channel 9 video](https://aka.ms/logicappsdemo).
+[watch this Channel 9 video](/shows/).
## Trigger on customer data
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
ms.suite: integration Previously updated : 01/06/2022 Last updated : 01/15/2022 # As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and VNet integration.
For more information, review [Create single-tenant logic app workflows in Azure
To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up VNet integration.
+> [!IMPORTANT]
+> You can't change the subnet size after assignment, so use a subnet that's large enough to accommodate
+> the scale that your app might reach. To avoid any issues with subnet capacity, use a `/26` subnet with 64 addresses.
+> If you create the subnet for virtual network integration with the Azure portal, you must use `/27` as the minimum subnet size.
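The address arithmetic behind those subnet sizes can be checked with Python's standard `ipaddress` module (raw counts only; Azure reserves a few addresses in every subnet, so usable capacity is slightly lower):

```python
import ipaddress

# Raw address counts for the subnet sizes mentioned above
for prefix in ("/27", "/26"):
    net = ipaddress.ip_network("10.0.0.0" + prefix)
    print(prefix, net.num_addresses)
# A /27 carries 32 addresses, a /26 carries 64
```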
++ ### Create and test the workflow 1. If you haven't already, in the [Azure portal](https://portal.azure.com), create a single-tenant based logic app, and a blank workflow.
To secure outbound traffic from your logic app, you can integrate your logic app
> [!IMPORTANT] > For the Azure Logic Apps runtime to work, you need to have an uninterrupted connection to the backend storage. > For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service.
+> With VNet integration, you need to make sure no firewall or network security policy is blocking these connections.
-### Considerations for outbound traffic through private endpoints
+### Considerations for outbound traffic through VNet integration
Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound).
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/single-tenant-overview-compare.md
With the **Logic App (Standard)** resource type, you can create these workflow t
> unavailable, or unsupported triggers, actions, and connectors, see > [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
+### Summary differences between stateful and stateless workflows
+
+<center>
+
+| Stateless | Stateful |
+|--|-|
+| Doesn't store run history, inputs, or outputs by default | Stores run history, inputs, and outputs |
+| Managed connector triggers are unavailable or not allowed | Managed connector triggers are available and allowed |
+| No support for chunking | Supports chunking |
+| No support for asynchronous operations | Supports asynchronous operations |
+| Best for workflows with max duration under 5 minutes | Edit default max run duration in host configuration |
+| Best for handling small message sizes (under 64K) | Handles large messages |
+|||
+
+</center>
+ <a name="nested-behavior"></a> ### Nested behavior differences between stateful and stateless workflows
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-endpoints.md
However [managed online endpoints](#managed-online-endpoints-vs-kubernetes-onlin
### Autoscaling
-Autoscale automatically runs the right amount of resources to handle the load on your application. Managed endpoints support autoscaling through integration with the [Azure monitor autoscale](/azure/azure-monitor/autoscale/autoscale-overview) feature. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination.
+Autoscale automatically runs the right amount of resources to handle the load on your application. Managed endpoints support autoscaling through integration with the [Azure monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md) feature. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination.
:::image type="content" source="media/concept-endpoints/concept-autoscale.png" alt-text="Screenshot showing that autoscale flexibly provides between min and max instances, depending on rules":::
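The two rule types compose: a metric condition (such as CPU utilization > 70%) adjusts capacity in response to load, a schedule window can enforce a floor during peak hours, and the result is clamped to your instance bounds. A toy sketch of that decision logic (illustrative only, with made-up thresholds and names; real rules are configured through Azure Monitor autoscale settings, not code like this):

```python
def desired_capacity(cpu_percent, hour, minimum=2, maximum=10, current=2):
    """Toy blend of a metric rule and a schedule rule (not the Azure engine)."""
    target = current
    if cpu_percent > 70:      # metric-based rule: scale out under load
        target = current + 1
    elif cpu_percent < 30:    # metric-based rule: scale in when idle
        target = current - 1
    if 9 <= hour < 17:        # schedule-based rule: capacity floor in business hours
        target = max(target, 4)
    return max(minimum, min(maximum, target))

print(desired_capacity(cpu_percent=85, hour=11, current=3))  # 4
```

Quiet traffic outside the schedule window (for example, `cpu_percent=20, hour=2`) scales back in toward the configured minimum.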
Specify the storage output location to any datastore and path. By default, batch
- [Deploy models with REST (preview)](how-to-deploy-with-rest.md) - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md) - [How to view managed online endpoint costs](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview)
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-vulnerability-management.md
While Azure Machine Learning patches base images with each release, whether you
By default, dependencies are layered on top of base images provided by Azure ML when building environments. You can also use your own base images when using environments in Azure Machine Learning. Once you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
-Associated to your Azure Machine Learning workspace is an Azure Container Registry instance that's used as a cache for container images. Any image materialized, is pushed to the container registry, and used if experimentation or deployment is triggered for the corresponding environment. Azure Machine Learning doesn't delete any image from your container registry, and it's your responsibility to evaluate the need of an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](/azure/defender-for-cloud/defender-for-container-registries-usage) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](/azure/defender-for-cloud/workflow-automation).
+Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that's used as a cache for container images. Any image materialized is pushed to the container registry and used if experimentation or deployment is triggered for the corresponding environment. Azure Machine Learning doesn't delete any image from your container registry, and it's your responsibility to evaluate the need of an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
## Vulnerability management on compute hosts
For code-based training experiences, you control which Azure Machine Learning en
* Automated ML jobs run on environments that layer on top of Azure ML [base docker images](https://github.com/Azure/AzureML-Containers).
- * Designer jobs are compartmentalized into [Components](concept-designer.md#component). Each component has its own environment that layers on top of the Azure ML base docker images. For more information on components, see the [Component reference](/azure/machine-learning/component-reference/component-reference).
+ * Designer jobs are compartmentalized into [Components](concept-designer.md#component). Each component has its own environment that layers on top of the Azure ML base docker images. For more information on components, see the [Component reference](./component-reference/component-reference.md).
## Next steps * [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers)
-* [Data Science Virtual Machine release notes](/azure/machine-learning/data-science-virtual-machine/release-notes)
-* [AzureML Python SDK Release Notes](/azure/machine-learning/azure-machine-learning-release-notes)
-* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
+* [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md)
+* [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
+* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning Reference Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/reference-known-issues.md
might not be pre-installed in your image yet.
### Virtual Machine Generation 2 (Gen 2) not working When you try to create a Data Science VM based on Virtual Machine Generation 2 (Gen 2), it fails.
-Currently, we maintain and provide images for Data Science VM based on Windows 2019 Server only for Generation 1 virtual machines. [Gen 2](/azure/virtual-machines/generation-2) are not yet supported and we plan to support them in near future.
+Currently, we maintain and provide Data Science VM images based on Windows Server 2019 only for Generation 1 virtual machines. [Gen 2](../../virtual-machines/generation-2.md) virtual machines are not yet supported, and we plan to support them in the near future.
### Accessing SQL Server
Your final screen should look like this:
-![Enable Hyper-V](./media/workaround/hyperv-enable-dsvm.png)
+![Enable Hyper-V](./media/workaround/hyperv-enable-dsvm.png)
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-autoscale-endpoints.md
Last updated 11/03/2021
Autoscale automatically runs the right amount of resources to handle the load on your application. [Managed endpoints](concept-endpoints.md) supports autoscaling through integration with the Azure Monitor autoscale feature.
-Azure Monitor autoscaling supports a rich set of rules. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination. For more information, see [Overview of autoscale in Microsoft Azure](/azure/azure-monitor/autoscale/autoscale-overview).
+Azure Monitor autoscaling supports a rich set of rules. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination. For more information, see [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md).
:::image type="content" source="media/how-to-autoscale-endpoints/concept-autoscale.png" alt-text="Diagram for autoscale adding/removing instance as needed":::
If you are not going to use your deployments, delete them:
To learn more about autoscale with Azure Monitor, see the following articles:
-- [Understand autoscale settings](/azure/azure-monitor/autoscale/autoscale-understanding-settings)
-- [Overview of common autoscale patterns](/azure/azure-monitor/autoscale/autoscale-common-scale-patterns)
-- [Best practices for autoscale](/azure/azure-monitor/autoscale/autoscale-best-practices)
-- [Troubleshooting Azure autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot)
+- [Understand autoscale settings](../azure-monitor/autoscale/autoscale-understanding-settings.md)
+- [Overview of common autoscale patterns](../azure-monitor/autoscale/autoscale-common-scale-patterns.md)
+- [Best practices for autoscale](../azure-monitor/autoscale/autoscale-best-practices.md)
+- [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md)
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-create-secure-workspace-template.md
Templates provide a convenient way to create reproducible service deployments. The template defines what will be created, with some information provided by you when you use the template. For example, specifying a unique name for the Azure Machine Learning workspace.
-In this tutorial, you learn how to use a [Microsoft Bicep](/azure/azure-resource-manager/bicep/overview) and [Hashicorp Terraform](https://www.terraform.io/) template to create the following Azure resources:
+In this tutorial, you learn how to use a [Microsoft Bicep](../azure-resource-manager/bicep/overview.md) and [Hashicorp Terraform](https://www.terraform.io/) template to create the following Azure resources:
* Azure Virtual Network. The following resources are secured behind this VNet:
    * Azure Machine Learning workspace
You must also have either a Bash or Azure PowerShell command line.
# [Bicep](#tab/bicep)
-1. To install the command-line tools, see [Set up Bicep development and deployment environments](/azure/azure-resource-manager/bicep/install).
+1. To install the command-line tools, see [Set up Bicep development and deployment environments](../azure-resource-manager/bicep/install.md).
1. The Bicep template used in this article is located at [https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure). Use the following commands to clone the GitHub repo to your development environment:
The template consists of multiple files. The following table describes what each
> [!IMPORTANT]
-> The DSVM and Azure Bastion is used as an easy way to connect to the secured workspace for this tutorial. In a production environment, we recommend using an [Azure VPN gateway](/azure/vpn-gateway/vpn-gateway-about-vpngateways) or [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) to access the resources inside the VNet directly from your on-premises network.
+> The DSVM and Azure Bastion is used as an easy way to connect to the secured workspace for this tutorial. In a production environment, we recommend using an [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Azure ExpressRoute](../expressroute/expressroute-introduction.md) to access the resources inside the VNet directly from your on-premises network.
## Configure the template
After the template completes, use the following steps to connect to the DSVM:
> [!IMPORTANT]
> The Data Science Virtual Machine (DSVM) and any compute instance resources bill you for every hour that they are running. To avoid excess charges, you should stop these resources when they are not in use. For more information, see the following articles:
>
-> * [Create/manage VMs (Linux)](/azure/virtual-machines/linux/tutorial-manage-vm).
-> * [Create/manage VMs (Windows)](/azure/virtual-machines/windows/tutorial-manage-vm).
+> * [Create/manage VMs (Linux)](../virtual-machines/linux/tutorial-manage-vm.md).
+> * [Create/manage VMs (Windows)](../virtual-machines/windows/tutorial-manage-vm.md).
> * [Create/manage compute instance](how-to-create-manage-compute-instance.md).

To continue learning how to use the secured workspace from the DSVM, see [Tutorial: Get started with a Python script in Azure Machine Learning](tutorial-1st-experiment-hello-world.md).
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-private-plan-troubleshooting.md
Here's some common customer-blocking issues and information on how to resolve t
### How do I control my costs, and understand how much I am spending on marketplace?
- Involve your Microsoft account team for a detailed analysis of your particular environment, Azure subscription hierarchy, and EA setup.
-- For more information, see [Cost Management Billing Overview](/azure/cost-management-billing/cost-management-billing-overview).
+- For more information, see [Cost Management Billing Overview](../cost-management-billing/cost-management-billing-overview.md).
### Azure Administrator
- The Azure Administrator is responsible for controlling users' Role Based Access Control. They have the ability to grant Marketplace purchase rights, and determine how these rights can be exercised, and into which Azure Subscriptions the user has access control.
- Involve your Microsoft account team for a detailed analysis of your particular environment, Azure subscription hierarchy, and EA setup.
- Microsoft recommends that at least two users carry the Azure Administrator role. Refer to the appropriate documentation
-- For more information, see the documentation on [Roles and Security Planning](/azure/active-directory/roles/security-planning).
+- For more information, see the documentation on [Roles and Security Planning](../active-directory/roles/security-planning.md).
### Marketplace purchases succeeded, but the deployment fails. The error message typically refers to contracts terms and conditions, what else can be going on?
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist
-- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your tenant ID - Azure Active Directory | Microsoft Docs](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant). For VMs use the [Azure Subscription ID. (video guide)](/azure/media-services/latest/setup-azure-subscription-how-to?tabs=portal)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your tenant ID - Azure Active Directory | Microsoft Docs](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](../media-services/latest/setup-azure-subscription-how-to.md?tabs=portal)
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription.
- Customer to ensure customer is logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above)
- ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private offers in Azure Marketplace - Microsoft marketplace | Microsoft Docs](/marketplace/private-plans)
-- Customer to ensure marketplace is enabled - [Azure Marketplace | Microsoft Docs](/azure/cost-management-billing/manage/ea-azure-marketplace) – if it is not, the user has to contact their Azure Administrator to enable marketplace, for more information regarding Azure Marketplace, see [Azure Marketplace | Microsoft Docs](/azure/cost-management-billing/manage/ea-azure-marketplace).
+- Customer to ensure marketplace is enabled - [Azure Marketplace | Microsoft Docs](../cost-management-billing/manage/ea-azure-marketplace.md) – if it is not, the user has to contact their Azure Administrator to enable marketplace, for more information regarding Azure Marketplace, see [Azure Marketplace | Microsoft Docs](../cost-management-billing/manage/ea-azure-marketplace.md).
- (Customer) If the offer is still not visible, it's possible that the customer has Private Marketplace enabled
- Customer to Ask the Azure Administrator to enable the specific Private Plan in Private Marketplace: [Create and manage Private Azure Marketplace in the Azure portal - Microsoft marketplace | Microsoft Docs](/marketplace/create-manage-private-azure-marketplace-new)
- If the Private Plan is visible, and the deployment fails, the troubleshooting moves to ensuring the customer allows for Marketplace billing:
- - (Customer) The Azure Administrator must follow the instructions in [Azure EA portal administration | Microsoft Docs](/azure/cost-management-billing/manage/ea-portal-administration), and discuss with their Microsoft Representative the steps to enable billing for Marketplace
- - (customer) [This documentation](/azure/cost-management-billing/manage/ea-portal-administration) explains the details to enable Marketplace billing for customers with an Azure Enterprise Agreement.
+ - (Customer) The Azure Administrator must follow the instructions in [Azure EA portal administration | Microsoft Docs](../cost-management-billing/manage/ea-portal-administration.md), and discuss with their Microsoft Representative the steps to enable billing for Marketplace
+ - (customer) [This documentation](../cost-management-billing/manage/ea-portal-administration.md) explains the details to enable Marketplace billing for customers with an Azure Enterprise Agreement.
### If all else fails, open a ticket and create a HAR file
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Next steps
-- [Create an Azure Support Request](/azure/azure-portal/supportability/how-to-create-azure-support-request)
+- [Create an Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md)
marketplace Azure Vm Test Drive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-test-drive.md
Enter values between 0-99 in the boxes to indicate how many of Hot, Warm, or Col
The ARM template for your test drive is a coded container of all the Azure resources that comprise your solution. To create the ARM deployment template you'll need for your test drive, see [Azure Resource Manager test drive](azure-resource-manager-test-drive.md#write-the-test-drive-template). Once your template is complete, return here to learn how to upload your ARM template and complete the configuration.
-To publish successfully, it is important to validate the formatting of the ARM template. Two ways to do this are by using an [online API tool](/rest/api/resources/deployments/validate) or with a [test deployment](/azure/azure-resource-manager/templates/deploy-portal). Once you are ready to upload your template, drag .zip file into the area indicated, or **Browse** for the file.
+To publish successfully, it is important to validate the formatting of the ARM template. Two ways to do this are by using an [online API tool](/rest/api/resources/deployments/validate) or with a [test deployment](../azure-resource-manager/templates/deploy-portal.md). Once you are ready to upload your template, drag .zip file into the area indicated, or **Browse** for the file.
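Before uploading, a quick local sanity check can catch the most common formatting problems. The following Python sketch is an assumption-level helper, not a substitute for the Resource Manager validate API or a test deployment; it only checks that the file parses as JSON and carries the required top-level ARM template keys.

```python
import json

# Required top-level keys per the ARM deployment template structure.
REQUIRED_KEYS = {"$schema", "contentVersion", "resources"}

def check_arm_template(text: str) -> list:
    """Return a list of problems found; an empty list means it looks plausible."""
    problems = []
    try:
        template = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    missing = REQUIRED_KEYS - template.keys()
    problems += [f"missing key: {k}" for k in sorted(missing)]
    if not isinstance(template.get("resources", []), list):
        problems.append("'resources' must be a list")
    return problems
```

A passing check here does not mean the template will deploy; resource-level errors only surface in the service-side validation or an actual test deployment.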
Enter a **Test drive duration**, in hours. This is the number of hours the test drive will stay active. The test drive terminates automatically after this time period ends.
Enter a **Test drive duration**, in hours. This is the number of hours the test
For Microsoft to deploy the test drive on your behalf, connect to your Azure Subscription and Azure Active Directory (AAD) by completing the steps below, then select **Save draft**.
-1. **Azure subscription ID** – This grants access to Azure services and the Azure portal. The subscription is where resource usage is reported and services are billed. Consider creating a [separate Azure subscription](/azure/cost-management-billing/manage/create-subscription) to use for test drives if you don't have one already. You can find your Azure subscription ID by signing into the Azure portal and searching *Subscriptions* in the search bar.
-2. **Azure AD tenant ID** – Enter your Azure Active Directory (AD) tenant ID by going to **Azure Active Directory** > **Properties** > **Directory ID** within the Azure portal. If you don't have a tenant ID, create a new one in Azure Active Directory. For help with setting up a tenant, see [Quickstart: Set up a tenant](/azure/active-directory/develop/quickstart-create-new-tenant).
+1. **Azure subscription ID** – This grants access to Azure services and the Azure portal. The subscription is where resource usage is reported and services are billed. Consider creating a [separate Azure subscription](../cost-management-billing/manage/create-subscription.md) to use for test drives if you don't have one already. You can find your Azure subscription ID by signing into the Azure portal and searching *Subscriptions* in the search bar.
+2. **Azure AD tenant ID** – Enter your Azure Active Directory (AD) tenant ID by going to **Azure Active Directory** > **Properties** > **Directory ID** within the Azure portal. If you don't have a tenant ID, create a new one in Azure Active Directory. For help with setting up a tenant, see [Quickstart: Set up a tenant](../active-directory/develop/quickstart-create-new-tenant.md).
3. Before proceeding with the other fields, provision the Microsoft Test-Drive application to your tenant. We will use this application to perform operations on your test drive resources.
    1. If you don't have it yet, install the [Azure Az PowerShell module](/powershell/azure/install-az-ps).
    2. Add the Service Principal for Microsoft Test-Drive application.
- 1. Run `Connect-AzAccount` and provide credentials to sign in to your Azure account, which requires the Azure active directory **Global Administrator** [built-in role](/azure/active-directory/roles/permissions-reference).
+ 1. Run `Connect-AzAccount` and provide credentials to sign in to your Azure account, which requires the Azure active directory **Global Administrator** [built-in role](../active-directory/roles/permissions-reference.md).
    2. Create a new service principal: `New-AzADServicePrincipal -ApplicationId d7e39695-0b24-441c-a140-047800a05ede -DisplayName 'Microsoft TestDrive'`.
    3. Ensure the service principal has been created: `Get-AzADServicePrincipal -DisplayName 'Microsoft TestDrive'`.
:::image type="content" source="media/test-drive/commands-to-verify-service-principal.png" alt-text="Shows how to ensure the principal has been created.":::
Select **Save draft** before continuing with **Next steps** below.
## Next steps
-- [Review and publish your offer](review-publish-offer.md)
+- [Review and publish your offer](review-publish-offer.md)
media-services Analyze Video Audio Files Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyze-video-audio-files-concept.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Media Services lets you extract insights from your video and audio files using the audio and video analyzer presets. This article describes the analyzer presets used to extract insights. If you want more detailed insights from your videos, use the [Azure Video Analyzer for Media service](/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview). To understand when to use Video Analyzer for Media vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
+Media Services lets you extract insights from your video and audio files using the audio and video analyzer presets. This article describes the analyzer presets used to extract insights. If you want more detailed insights from your videos, use the [Azure Video Analyzer for Media service](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md). To understand when to use Video Analyzer for Media vs. Media Services analyzer presets, check out the [comparison document](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md).
There are two modes for the Audio Analyzer preset, basic and standard. See the description of the differences in the table below.
Videos that are found to contain adult or racy content might be available for pr
```
## Next steps
-[Tutorial: Analyze videos with Azure Media Services](analyze-videos-tutorial.md)
+[Tutorial: Analyze videos with Azure Media Services](analyze-videos-tutorial.md)
media-services Encode Basic Encoding Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-basic-encoding-python-quickstart.md
This quickstart shows you how to do basic encoding with Python and Azure Media S
## Prerequisites
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- [Create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups) to use with this quickstart.
+- [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) to use with this quickstart.
- [Create a Media Services v3 account](account-create-how-to.md).
-- [Get your storage account key](/azure/storage/common/storage-account-keys-manage#view-account-access-keys).
-- [Create a service principal and key](/azure/purview/create-service-principal-azure).
+- [Get your storage account key](../../storage/common/storage-account-keys-manage.md#view-account-access-keys).
+- [Create a service principal and key](../../purview/create-service-principal-azure.md).
## Get the sample
Get familiar with the [Media Services Python SDK](/python/api/azure-mgmt-media/)
- Learn about the [Azure Python SDKs](/azure/developer/python)
- Learn more about [usage patterns for Azure Python SDKs](/azure/developer/python/azure-sdk-library-usage-patterns)
- Find more Azure Python SDKs in the [Azure Python SDK index](/azure/developer/python/azure-sdk-library-package-index)
-- [Azure Storage Blob Python SDK reference](/python/api/azure-storage-blob/)
+- [Azure Storage Blob Python SDK reference](/python/api/azure-storage-blob/)
media-services Security Private Link Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/security-private-link-concept.md
Private Link allows Media Services to be accessed from private networks. When us
## Azure Private Endpoint and Azure Private Link
-An [Azure Private Endpoint](/azure/private-link/private-endpoint-overview) is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service via Azure Private Link.
+An [Azure Private Endpoint](../../private-link/private-endpoint-overview.md) is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service via Azure Private Link.
Media Services endpoints may be accessed from a virtual network using private endpoints. Private endpoints may also be accessed from peered virtual networks or other networks connected to the virtual network using Express Route or VPN.
-[Azure Private Links](/azure/private-link/) allow access to Media Services private endpoints in your virtual network without exposing them to the public Internet. It routes traffic over the Microsoft backbone network.
+[Azure Private Links](../../private-link/index.yml) allow access to Media Services private endpoints in your virtual network without exposing them to the public Internet. It routes traffic over the Microsoft backbone network.
## Restricting access
Internet access to the endpoints in the Media Services account can be restricted
| Service | Media Services integration | Private link documentation |
| - | -- | -- |
-| Azure Storage | Used to store media | [Use private endpoints for Azure Storage](/azure/storage/common/storage-private-endpoints) |
-| Azure Key Vault | Used to store [customer managed keys](security-customer-managed-keys-portal-tutorial.md) | [Configure Azure Key Vault networking settings](/azure/key-vault/general/how-to-azure-key-vault-network-security) |
-| Azure Resource Manager | Provides access to Media Services APIs | [Use REST API to create private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-rest) |
-| Event Grid | Provides [notifications of Media Services events](./monitoring/job-state-events-cli-how-to.md) | [Configure private endpoints for Azure Event Grid topics or domains](/azure/event-grid/configure-private-endpoints) |
+| Azure Storage | Used to store media | [Use private endpoints for Azure Storage](../../storage/common/storage-private-endpoints.md) |
+| Azure Key Vault | Used to store [customer managed keys](security-customer-managed-keys-portal-tutorial.md) | [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md) |
+| Azure Resource Manager | Provides access to Media Services APIs | [Use REST API to create private link for managing Azure resources](../../azure-resource-manager/management/create-private-link-access-rest.md) |
+| Event Grid | Provides [notifications of Media Services events](./monitoring/job-state-events-cli-how-to.md) | [Configure private endpoints for Azure Event Grid topics or domains](../../event-grid/configure-private-endpoints.md) |
## Private endpoints are created on the Media Services account
For pricing details, see [Azure Private Link Pricing](https://azure.microsoft.co
## Private Link how-tos and FAQs
- [Create a Media Services and Storage account with a Private Link using an Azure Resource Management template](security-private-link-arm-how-to.md)
-- [Create a Private Link for a Streaming Endpoint](security-private-link-streaming-endpoint-how-to.md)
+- [Create a Private Link for a Streaming Endpoint](security-private-link-streaming-endpoint-how-to.md)
media-services Security Private Link Connect Private Endpoint Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/security-private-link-connect-private-endpoint-concept.md
Clients on a VNet using the private endpoint should use the same DNS name to con
> [!IMPORTANT]
> Use the same DNS names to the Media Services endpoints when using private endpoints as you'd otherwise use. Please don't connect to the Media Services endpoints using its privatelink subdomain URL.
-Media Services creates a [private DNS zone](/azure/dns/private-dns-overview) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on DNS changes below describes the updates required for private endpoints.
+Media Services creates a [private DNS zone](../../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on DNS changes below describes the updates required for private endpoints.
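The effect of the private DNS zone can be pictured as split-horizon resolution: clients inside the VNet keep using the normal endpoint hostname, and the zone overrides the public record with the private endpoint IP. A conceptual Python sketch, where the hostname and both IP addresses are hypothetical examples:

```python
# Hypothetical hostname and IPs, for illustration only.
PUBLIC_DNS = {"myaccount-uswe2.streaming.media.azure.net": "20.50.0.10"}
PRIVATE_ZONE = {"myaccount-uswe2.streaming.media.azure.net": "10.0.1.5"}

def resolve(hostname: str, inside_vnet: bool) -> str:
    # Inside the VNet, the private DNS zone takes precedence, so the
    # same public hostname resolves to the private endpoint IP.
    if inside_vnet and hostname in PRIVATE_ZONE:
        return PRIVATE_ZONE[hostname]
    return PUBLIC_DNS[hostname]
```

This is why connecting via the privatelink subdomain directly is discouraged: the intended flow is that the unchanged hostname resolves differently depending on where the client sits.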
## DNS changes for private endpoints
The recommended DNS zone names for private endpoints for storage services, and t
For more information about configuring your own DNS server to support private endpoints, refer to the following articles:
-- [Name resolution for resources in Azure virtual networks](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#name-resolution-that-uses-your-own-dns-server)
-- [DNS configuration for private endpoints](/azure/private-link/private-endpoint-overview#dns-configuration)
+- [Name resolution for resources in Azure virtual networks](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+- [DNS configuration for private endpoints](../../private-link/private-endpoint-overview.md#dns-configuration)
## Public network access flag
The `publicNetworkAccess` flag on the Media Services account can be used to allo
## Service level IP allowlists
-When `publicNetworkAccess` is enabled, requests from the public internet are allowed, subject to service level IP allowlists. If `publicNetworkAccess` is disabled, requests from the public internet are blocked, regardless of the IP allowlist settings. IP allowlists only apply to requests from the public internet; requests to private endpoints are not filtered by the IP allowlists.
+When `publicNetworkAccess` is enabled, requests from the public internet are allowed, subject to service level IP allowlists. If `publicNetworkAccess` is disabled, requests from the public internet are blocked, regardless of the IP allowlist settings. IP allowlists only apply to requests from the public internet; requests to private endpoints are not filtered by the IP allowlists.
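The decision logic described above can be sketched as follows. This is illustrative only, not service code; in particular, whether an empty allowlist permits all public traffic is an assumption made here for the sketch.

```python
def request_allowed(public_network_access: bool, via_private_endpoint: bool,
                    source_ip: str, ip_allowlist: set) -> bool:
    if via_private_endpoint:
        return True            # private-endpoint traffic is not filtered by allowlists
    if not public_network_access:
        return False           # all public-internet traffic is blocked
    # Public access enabled: subject to the service-level IP allowlist.
    # Assumption in this sketch: an empty allowlist means no IP filtering.
    return not ip_allowlist or source_ip in ip_allowlist
```

Note the ordering: the `publicNetworkAccess` flag is checked before the allowlist, which is why allowlist entries have no effect once public access is disabled.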
media-services Security Private Link Streaming Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/security-private-link-streaming-endpoint-how-to.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article shows you how to use a private link with a Streaming Endpoint. It's assumed that you already know how to create an [Azure resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal), a [Media Services account](account-create-how-to.md), and an [Azure virtual network](/azure/virtual-network/quick-create-portal).
+This article shows you how to use a private link with a Streaming Endpoint. It's assumed that you already know how to create an [Azure resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md), a [Media Services account](account-create-how-to.md), and an [Azure virtual network](../../virtual-network/quick-create-portal.md).
You'll be creating a private endpoint resource which is a link between a virtual network and a streaming endpoint. This deployment creates a network interface IP address inside the virtual network. The private link allows you to connect the network interface in the private network to the streaming endpoint in the Media Services account. You'll also be creating DNS zones which pass the private IP addresses.
To use the streaming endpoint inside your virtual network, create private DNS zo
1. Review your settings and make sure they're correct.
1. Select **Create**. The private endpoint deployment screen appears.
-While the deployment is in progress, it's also creating an [Azure Resource Manager (ARM) template](/azure/azure-resource-manager/templates/overview). You can use ARM templates to automate deployment. To see the template, select **Template** from the menu.
+While the deployment is in progress, it's also creating an [Azure Resource Manager (ARM) template](../../azure-resource-manager/templates/overview.md). You can use ARM templates to automate deployment. To see the template, select **Template** from the menu.
## Clean up resources
-If you aren't planning to use the resources created in this exercise, simply delete the resource group. If you don't delete the resources, you will be charged for them.
+If you aren't planning to use the resources created in this exercise, simply delete the resource group. If you don't delete the resources, you will be charged for them.
notification-hubs Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/availability-zones.md
# Availability zones
-Azure Notification Hubs now supports [availability zones](/azure/availability-zones/az-overview), providing fault-isolated locations within the same Azure region. To ensure resiliency, three separate availability zones are present in all availability zone-enabled regions. When you use availability zones, both data and metadata are replicated across data centers in the availability zone.
+Azure Notification Hubs now supports [availability zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. To ensure resiliency, three separate availability zones are present in all availability zone-enabled regions. When you use availability zones, both data and metadata are replicated across data centers in the availability zone.
## Feature availability
-Availability zones support will be included as part of an upcoming Azure Notification Hubs Premium SKU. It will only be available in [Azure regions](/azure/availability-zones/az-region) where availability zones are present.
+Availability zones support will be included as part of an upcoming Azure Notification Hubs Premium SKU. It will only be available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
> [!NOTE]
> Until Azure Notification Hubs Premium is released, availability zones is by invitation only. If you are interested in using this feature, contact your customer success manager at Microsoft, or create an Azure support ticket which will be triaged by the support team.
At this time, you can only enable availability zones on new namespaces. Notifica
## Next steps
-- [Azure availability zones](/azure/availability-zones/az-overview)
-- [Azure services that support availability zones](/azure/availability-zones/az-region)
+- [Azure availability zones](../availability-zones/az-overview.md)
+- [Azure services that support availability zones](../availability-zones/az-region.md)
object-anchors Move Azure Object Anchors Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/how-tos/move-azure-object-anchors-account.md
To complete the move of the Object Anchors account, delete the source Object Anc
In this tutorial, you moved an Object Anchors account from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to: > [!div class="nextstepaction"]
-> [Move resources to a new resource group or subscription](/azure/azure-resource-manager/management/move-resource-group-and-subscription)
+> [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-high-availability.md
Here are some failure scenarios that require user action to recover:
| **Scenario** | **Recovery plan** |
| - | - |
| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](./howto-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
-| <b> Availability zone failure | Failure of a Availability zone is also a rare event. However, if you need protection from a Availability zone failure, you can configure one or more read replicas or consider using our [Flexible Server](https://docs.microsoft.com/azure/postgresql/flexible-server/concepts-high-availability) offering which provides zone redundant high availability.
+| <b> Availability zone failure | Failure of an availability zone is also a rare event. However, if you need protection from an availability zone failure, you can configure one or more read replicas or consider using our [Flexible Server](./flexible-server/concepts-high-availability.md) offering, which provides zone-redundant high availability.
| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. |
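The table-level recovery path above (PITR, then `pg_dump` a table, then `pg_restore` it) can be sketched as command construction. This is a minimal illustration, not from the article: server names, database, and table are hypothetical, and `-F c` selects pg_dump's custom archive format.

```python
# Hypothetical sketch of the single-table restore workflow after a
# point-in-time recovery. Hosts, database, and table names are illustrative.
import shlex

def dump_table_cmd(host, dbname, table, outfile):
    """Build a pg_dump command that exports one table from the PITR-restored server."""
    return ["pg_dump", "-h", host, "-d", dbname, "-t", table, "-F", "c", "-f", outfile]

def restore_table_cmd(host, dbname, infile):
    """Build a pg_restore command that loads that dump into the production server."""
    return ["pg_restore", "-h", host, "-d", dbname, infile]

dump = dump_table_cmd("restored-server.postgres.database.azure.com", "appdb", "public.orders", "orders.dump")
restore = restore_table_cmd("prod-server.postgres.database.azure.com", "appdb", "orders.dump")
print(" ".join(shlex.quote(a) for a in dump))
print(" ".join(shlex.quote(a) for a in restore))
```

Run the two printed commands against the restored and production servers respectively (adding authentication options as your environment requires).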
Azure Database for PostgreSQL provides fast restart capability of database serve
## Next steps - Learn about [Azure regions](../availability-zones/az-overview.md) - Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
+- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
Recommendations are available from the **Overview** navigation sidebar in the Az
:::image type="content" source="../media/concepts-azure-advisor-recommendations/advisor-example.png" alt-text="Screenshot of the Azure portal showing an Azure Advisor recommendation."::: ## Recommendation types Azure Database for PostgreSQL prioritizes the following types of recommendations:
-* **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](/azure/advisor/advisor-performance-recommendations).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](/azure/advisor/advisor-high-availability-recommendations).
-* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](/azure/advisor/advisor-cost-recommendations).
+* **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
+* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, connection limits, and hyperscale data distribution recommendations. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md).
+* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md).
## Understanding your recommendations * **Daily schedule**: For Azure PostgreSQL databases, we check server telemetry and issue recommendations on a twice-daily schedule. If you make a change to your server configuration, existing recommendations remain visible until we re-examine telemetry at either 7 PM or 7 AM PST. * **Performance history**: Some of our recommendations are based on performance history. These recommendations only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (for example, high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations are paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, it also means that performance-history-based recommendations may not be identified immediately. ## Next steps
-For more information, see [Azure Advisor Overview](/azure/advisor/advisor-overview).
+For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-automation.md
This article provides a summary of the options available, and guidance on what t
**SDK** | <ul><li><a href="/dotnet/api/overview/azure" target="_blank">.NET</a></li><li><a href="/java/api/overview/azure" target="_blank">Java</a></li><li><a href="/javascript/api/overview/azure" target="_blank">JavaScript</a></li><li><a href="/python/api/overview/azure" target="_blank">Python</a></li></ul> | Custom Development | Γ£ô | Γ£ô | Γ£ô | ## Resource Management
-[Azure Resource Manager](/azure/azure-resource-manager/management/overview) is a deployment and management service, which enables customers to create, update, and delete resources in Azure. When deploying Azure resources repeatedly, ARM templates can be used to ensure consistency, this approach is referred to as Infrastructure as Code.
+[Azure Resource Manager](../azure-resource-manager/management/overview.md) is a deployment and management service that enables customers to create, update, and delete resources in Azure. When deploying Azure resources repeatedly, ARM templates can be used to ensure consistency; this approach is referred to as Infrastructure as Code.
-To implement infrastructure as code, we can build [ARM templates](/azure/azure-resource-manager/templates/overview) using JSON or [Bicep](/azure/azure-resource-manager/bicep/overview), or open-source alternatives such as [Terraform](/azure/developer/terraform/overview).
+To implement infrastructure as code, we can build [ARM templates](../azure-resource-manager/templates/overview.md) using JSON or [Bicep](../azure-resource-manager/bicep/overview.md), or open-source alternatives such as [Terraform](/azure/developer/terraform/overview).
When to use? * Scenarios that require repeated Azure Purview deployments; templates ensure Azure Purview, along with any other dependent resources, is deployed in a consistent manner.
-* When coupled with [deployment scripts](/azure/azure-resource-manager/templates/deployment-script-template), templated solutions can traverse the control and data planes, enabling the deployment of end-to-end solutions. For example, create an Azure Purview account, register sources, trigger scans.
+* When coupled with [deployment scripts](../azure-resource-manager/templates/deployment-script-template.md), templated solutions can traverse the control and data planes, enabling the deployment of end-to-end solutions. For example, create an Azure Purview account, register sources, trigger scans.
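As a minimal sketch of the templated approach, the following Bicep fragment deploys an Azure Purview account. The account name and API version are assumptions for illustration, not values from the article; check the `Microsoft.Purview/accounts` resource reference for the current schema before use.

```bicep
// Hypothetical sketch: deploy an Azure Purview account with a system-assigned
// managed identity. Name and API version are illustrative assumptions.
param location string = resourceGroup().location
param purviewName string = 'contosopurview1'

resource purview 'Microsoft.Purview/accounts@2021-07-01' = {
  name: purviewName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
}
```

A deployment script resource could then be appended to the same template to call the Purview data plane (register sources, trigger scans) once the account exists.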
## Command Line Azure CLI and Azure PowerShell are command-line tools that enable you to manage Azure resources such as Azure Purview. While the list of commands will grow over time, only a subset of Azure Purview control plane operations is currently available. For an up-to-date list of commands currently available, check out the documentation ([Azure CLI](/cli/azure/purview) | [Azure PowerShell](/powershell/module/az.purview)).
When to use?
* Applications or processes that need to publish or consume Apache Atlas events in real time. ## Streaming (Diagnostic Logs)
-Azure Purview can send platform logs and metrics via "Diagnostic settings" to one or more destinations (Log Analytics Workspace, Storage Account, or Azure Event Hubs). [Available metrics](/azure/purview/how-to-monitor-with-azure-monitor#available-metrics) include `Data Map Capacity Units`, `Data Map Storage Size`, `Scan Canceled`, `Scan Completed`, `Scan Failed`, and `Scan Time Taken`.
+Azure Purview can send platform logs and metrics via "Diagnostic settings" to one or more destinations (Log Analytics Workspace, Storage Account, or Azure Event Hubs). [Available metrics](./how-to-monitor-with-azure-monitor.md#available-metrics) include `Data Map Capacity Units`, `Data Map Storage Size`, `Scan Canceled`, `Scan Completed`, `Scan Failed`, and `Scan Time Taken`.
Once configured, Azure Purview automatically sends these events to the destination as a JSON payload. From there, application subscribers that need to consume and act on these events can do so with the option of orchestrating downstream logic.
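The JSON payload delivered to the destination generally follows the common Azure Monitor metric export shape. The record below is a hedged illustration only: the resource ID and metric name are hypothetical, and field availability should be confirmed against the Azure Monitor resource log/metric schema.

```json
{
  "records": [
    {
      "time": "2022-01-17T02:07:32Z",
      "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/contosopurview1",
      "metricName": "ScanCompleted",
      "timeGrain": "PT1M",
      "total": 1,
      "count": 1,
      "minimum": 1,
      "maximum": 1,
      "average": 1
    }
  ]
}
```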
When to use?
* [Docs](/python/api/azure-mgmt-purview/?view=azure-python&preserve-view=true) | [PyPi](https://pypi.org/project/azure-mgmt-purview/) azure-mgmt-purview ## Next steps
-* [Azure Purview REST API](/rest/api/purview)
+* [Azure Purview REST API](/rest/api/purview)
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
If you select the **Use system proxy** option for the HTTP proxy, the self-hoste
</system.net> ```
- You can then add proxy server details as shown in the following example:
+ You can then add proxy server details as shown in the following example, and include the Azure Purview, data source, and other relevant service endpoints in the bypass list:
```xml <system.net>
- <defaultProxy enabled="true">
- <proxy bypassonlocal="true" proxyaddress="http://proxy.domain.org:8888/" />
- </defaultProxy>
+ <defaultProxy>
+ <bypasslist>
+ <add address="scaneastus2test.blob.core.windows.net" />
+ <add address="scaneastus2test.queue.core.windows.net" />
+ <add address="Atlas-abcd1234-1234-abcd-abcd-1234567890ab.servicebus.windows.net" />
+ <add address="contosopurview1.purview.azure.com" />
+ <add address="contososqlsrv1.database.windows.net" />
+ <add address="contosoadls1.dfs.core.windows.net" />
+ <add address="contosoakv1.vault.azure.net" />
+ <add address="contosoblob11.blob.core.windows.net" />
+ </bypasslist>
+ <proxy proxyaddress="http://10.1.0.1:3128" bypassonlocal="True" />
+ </defaultProxy>
</system.net> ```- The proxy tag allows additional properties to specify required settings like `scriptLocation`. See [\<proxy\> Element (Network Settings)](/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings) for syntax. ```xml
When scanning Parquet files using the Self-hosted IR, the service locates the Ja
## Proxy server considerations
-If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase.
+If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase or after it is registered.
:::image type="content" source="media/manage-integration-runtimes/self-hosted-proxy.png" alt-text="Specify the proxy":::
There are three configuration options:
- **Use system proxy**: The self-hosted integration runtime uses the proxy setting that is configured in diahost.exe.config and diawp.exe.config. If these files specify no proxy configuration, the self-hosted integration runtime connects to the cloud service directly without going through a proxy. - **Use custom proxy**: Configure the HTTP proxy setting to use for the self-hosted integration runtime, instead of using configurations in diahost.exe.config and diawp.exe.config. **Address** and **Port** values are required. **User Name** and **Password** values are optional, depending on your proxy's authentication setting. All settings are encrypted with Windows DPAPI on the self-hosted integration runtime and stored locally on the machine.
+> [!IMPORTANT]
+> Currently, **custom proxy** is not supported in Azure Purview.
+ The integration runtime host service restarts automatically after you save the updated proxy settings. After you register the self-hosted integration runtime, if you want to view or update proxy settings, use Microsoft Integration Runtime Configuration Manager.
You can use the configuration manager tool to view and update the HTTP proxy.
> [!NOTE] > If you set up a proxy server with NTLM authentication, the integration runtime host service runs under the domain account. If you later change the password for the domain account, remember to update the configuration settings for the service and restart the service. Because of this requirement, we suggest that you access the proxy server by using a dedicated domain account that doesn't require you to update the password frequently.
+If you use the system proxy, configure the outbound [network rules](#networking-requirements) from the self-hosted integration runtime virtual machine to the required endpoints.
+ ## Installation best practices You can install the self-hosted integration runtime by downloading a Managed Identity setup package from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=39717).
remote-rendering Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/reference/network-requirements.md
Here is some sample output from running the ```RenderingSession.ps1``` script:
![Retrieve hostname from powershell output](./media/session-hostname-powershell.png)
-ARR session VMs do not work with the built-in command line 'ping' tool. Instead, a ping tool that works with TCP/UDP must be used. A simple tool called PsPing [(download link)](https://docs.microsoft.com/sysinternals/downloads/psping) can be used for this purpose.
+ARR session VMs do not work with the built-in command line 'ping' tool. Instead, a ping tool that works with TCP/UDP must be used. A simple tool called PsPing [(download link)](/sysinternals/downloads/psping) can be used for this purpose.
The calling syntax is: ```PowerShell
Example output from running PsPing:
## Next steps
-* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
+* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-intro.md
In Azure Cognitive Search, AI enrichment refers to a pipeline process that adds machine learning to [indexer-based indexing](search-indexer-overview.md). Steps in the pipeline create new information where none previously existed: extracting information from images, detecting sentiment or key phrases from chunks of text, and recognizing entities, to name a few. All of these processes result in making previously unsearchable content available to full text search and knowledge mining scenarios.
-Azure Blob Storage is the most commonly used input, but any indexer-supported data source can provide the initial content. A [**skillset**](cognitive-search-working-with-skillsets.md), attached to an indexer, adds the AI processing. The indexer extracts content and sets up the pipeline, while the skillset identifies, analyzes, and creates new information out of blobs, images, and raw text. Output is a [**search index**](search-what-is-an-index.md) or optional [**knowledge store**](knowledge-store-concept-intro.md).
+[**Azure Blob Storage**](../storage/blobs/storage-blobs-overview.md) is the most commonly used input, but any indexer-supported data source can provide the initial content. A [**skillset**](cognitive-search-working-with-skillsets.md), attached to an indexer, adds the AI processing. The indexer extracts content and sets up the pipeline, while AI processing identifies, analyzes, and creates new information out of blobs, images, and raw text. Output is a [**search index**](search-what-is-an-index.md) or optional [**knowledge store**](knowledge-store-concept-intro.md).
![Enrichment pipeline diagram](./media/cognitive-search-intro/cogsearch-architecture.png "enrichment pipeline overview")
Built-in skills fall into these categories:
+ **Image processing** skills include [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) and identification of [visual features](cognitive-search-skill-image-analysis.md), such as facial detection, image interpretation, image recognition (famous people and landmarks), or attributes like image orientation. These skills create text representations of image content for full text search in Azure Cognitive Search.
-+ **Natural language processing** skills include [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [language detection](cognitive-search-skill-language-detection.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), text manipulation, [sentiment detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [personal identifiable information (PII) detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index.
++ **Natural language processing** skills include [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [language detection](cognitive-search-skill-language-detection.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), text manipulation, [sentiment detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [personally identifiable information detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index. Built-in skills are based on pre-trained machine learning models in Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). You should [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md) if you want these resources for larger workloads.
Additionally, you might consider adding a custom skill if you have open-source,
### Use-cases for built-in skills
-A [skillset](cognitive-search-defining-skillset.md) that's assembled using built-in skills is well-suited for the following application scenarios:
+A [skillset](cognitive-search-defining-skillset.md) that's assembled using built-in skills is well suited for the following application scenarios:
+ [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) that recognizes typeface and handwritten text in scanned documents (JPEG) is perhaps the most commonly used skill. Attaching the OCR skill will identify, extract, and ingest text from JPEG files.
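As a sketch of how that commonly used skill is declared inside a skillset, an OCR skill entry might look like the following. The `targetName` value and language code are illustrative choices, not from the article; verify the shape against the OCR skill reference.

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "extractedText" }
  ]
}
```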
search Cognitive Search Skill Sentiment V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-sentiment-v3.md
Parameters are case-sensitive.
| Parameter Name | Description | |-|-| | `defaultLanguageCode` | (optional) The language code to apply to documents that don't specify language explicitly. <br/> See the [full list of supported languages](../cognitive-services/language-service/sentiment-opinion-mining/language-support.md). |
-| `modelVersion` | (optional) Specifies the [version of the model](/azure/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api#specify-the-sentiment-analysis-model) to use when calling sentiment analysis. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
+| `modelVersion` | (optional) Specifies the [version of the model](../cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model) to use when calling sentiment analysis. It will default to the most recent version when not specified. We recommend you do not specify this value unless it's necessary. |
| `includeOpinionMining` | If set to `true`, enables [the opinion mining feature](../cognitive-services/language-service/sentiment-opinion-mining/overview.md#opinion-mining), which allows aspect-based sentiment analysis to be included in your output results. Defaults to `false`. | ## Skill inputs
If a language is not supported, a warning is generated and no sentiment results
## See also + [Built-in skills](cognitive-search-predefined-skills.md)
-+ [How to define a skillset](cognitive-search-defining-skillset.md)
++ [How to define a skillset](cognitive-search-defining-skillset.md)
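Tying the parameters described above together, a V3 sentiment skill with opinion mining enabled might be declared like this hedged sketch (the input source and output target names are illustrative assumptions):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.V3.SentimentSkill",
  "context": "/document",
  "defaultLanguageCode": "en",
  "includeOpinionMining": true,
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "sentiment", "targetName": "sentiment" }
  ]
}
```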
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/monitor-azure-cognitive-search.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for Azure Cognitive Search with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for Azure Cognitive Search with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
For a list of the platform metrics collected for Azure Cognitive Search, see [Azure Cognitive Search monitoring data reference (metrics)](monitor-azure-cognitive-search-data-reference.md#metrics).
For a list of the platform metrics collected for Azure Cognitive Search, see [Az
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). The extended schema for Azure Cognitive Search resource logs is found in the [Azure Cognitive Search monitoring data reference (schemas)](monitor-azure-cognitive-search-data-reference.md#schemas).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The extended schema for Azure Cognitive Search resource logs is found in the [Azure Cognitive Search monitoring data reference (schemas)](monitor-azure-cognitive-search-data-reference.md#schemas).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log within Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log within Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of the types of resource logs collected for Azure Cognitive Search, see [Azure Cognitive Search monitoring data reference (resource logs)](monitor-azure-cognitive-search-data-reference.md#resource-logs).
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
### Sample Kusto queries > [!IMPORTANT]
-> When you select **Logs** from the Azure Cognitive Search menu, Log Analytics is opened with the query scope set to the current search service. This means that log queries will only include data from that resource. If you want to run a query that includes data from other search services or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the Azure Cognitive Search menu, Log Analytics is opened with the query scope set to the current search service. This means that log queries will only include data from that resource. If you want to run a query that includes data from other search services or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
Following are queries that you can use to help you monitor your search service. See the [**Azure Cognitive Search monitoring data reference**](monitor-azure-cognitive-search-data-reference.md) for descriptions of schema elements.
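For example, a hedged Kusto sketch that charts average query latency per hour over the last day; the `OperationName` value and column names are drawn from the `AzureDiagnostics` schema and should be verified against the monitoring data reference:

```kusto
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where OperationName == "Query.Search"
| summarize AvgLatencyMs = avg(DurationMs) by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```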
AzureDiagnostics
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
The following table lists common and recommended alert rules for Azure Cognitive Search. On a search service, throttling or query latency that exceeds a given threshold are the most commonly used alerts, but you might also want to be notified if a search service is deleted.
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
description: Set properties on an indexer to determine data origin and destinati
+ - Previously updated : 01/07/2022+ Last updated : 01/17/2022 # Creating indexers in Azure Cognitive Search
Last updated 01/07/2022
A search indexer provides an automated workflow for reading content from an external data source, and ingesting that content into a search index on your search service. Indexers support two workflows: + Extract text and metadata during indexing for full text search scenarios. + Apply integrated machine learning and AI models to analyze content that is *not* intrinsically searchable, such as images and large undifferentiated text. This extended workflow is called [AI enrichment](cognitive-search-concept-intro.md) and it's indexer-driven.
-Using indexers significantly reduces the quantity and complexity of the code you need to write. This article focuses on the mechanics of creating an indexer as preparation for more advanced work with source-specific indexers and [skillsets](cognitive-search-working-with-skillsets.md).
+Using indexers significantly reduces the quantity and complexity of the code you need to write. This article focuses on the basics of creating an indexer. Depending on the data source and your workflow, additional configuration might be necessary.
-## Indexer structure
+## Indexer definitions
-The following indexer definitions are typical of what you might create for text-based and AI enrichment scenarios.
+When creating an indexer, the definition will adhere to one of two patterns: text-based indexing or AI enrichment with skills.
-### Indexing for full text search
+### Indexer definition for full text search
-The original purpose of an indexer was to simplify the complex process of loading an index by providing a mechanism for connecting to and reading text and numeric content from fields in a data source, serialize that content as JSON documents, and hand off those documents to the search engine for indexing. This is still a primary use case, and for this operation, you'll need to create an indexer with the properties defined in the following example.
+Full text search is the primary use case for indexers, and for this operation, an indexer uses the following properties.
-```http
-POST /indexers?api-version=[api-version]
+```json
{ "name": (required) String that uniquely identifies the indexer,
- "dataSourceName": (required) String indicated which existing data source to use,
- "targetIndexName": (required) String,
+ "description": (optional),
+ "dataSourceName": (required) String indicating which existing data source to use,
+ "targetIndexName": (required) String indicating which existing index to use,
"parameters": { "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null
+ "maxFailedItems": 0,
+ "maxFailedItemsPerBatch": 0,
+ "base64EncodeKeys": false,
+ "configuration": {}
},
- "fieldMappings": [ optional unless there are field discrepancies that need resolution]
+ "fieldMappings": (optional) unless field discrepancies need resolution,
+ "disabled": null,
+ "schedule": null,
+ "encryptionKey": null
} ```
-The **`name`**, **`dataSourceName`**, and **`targetIndexName`** properties are required, and depending on how you create the indexer, both data source and index must already exist on the service before you can run the indexer.
+Parameters modify run time behaviors, such as how many errors to accept before failing the entire job. The parameters above are available for all indexers and are documented in the [REST API reference](/rest/api/searchservice/create-indexer#request-body). Source-specific indexers for blobs, SQL, and Cosmos DB provide additional "configuration" parameters for source-specific behaviors. For example, if the source is Blob Storage, you can set a parameter that filters on file extensions: `"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }`.
-The **`parameters`** property modifies run time behaviors, such as how many errors to accept before failing the entire job. Parameters are also how you would specify source-specific behaviors. For example, if the source is Blob storage, you can set a parameter that filters on file extensions: `"parameters" : { "configuration" : { "indexedFileNameExtensions" : ".pdf,.docx" } }`.
+Field mappings are used to explicitly map source-to-destination fields if those fields differ by name or type.
-The **`field mappings`** property is used to explicitly map source-to-destination fields if those fields differ by name or type. Other properties (not shown), are used to [specify a schedule](search-howto-schedule-indexers.md), create the indexer in a disabled state, or specify an [encryption key](search-security-manage-encryption-keys.md) for supplemental encryption of data at rest.
+An indexer will run immediately when you create it on the search service. If you don't want indexer execution, set "disabled" to true.
-### Indexing for AI enrichment
+You can also [specify a schedule](search-howto-schedule-indexers.md) or set an [encryption key](search-security-manage-encryption-keys.md) for supplemental encryption of the indexer definition.
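Putting these properties together, a minimal concrete definition might look like the following sketch. All object names are hypothetical, and the two-hour `schedule` interval is just one valid ISO 8601 duration; see the Create Indexer REST reference for the full property set.

```json
{
  "name": "hotels-indexer",
  "dataSourceName": "hotels-ds",
  "targetIndexName": "hotels-index",
  "schedule": { "interval": "PT2H" },
  "parameters": { "maxFailedItems": 10, "maxFailedItemsPerBatch": 5 },
  "disabled": false
}
```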
-Because indexers are the mechanism by which a search service makes outbound requests, indexers were extended to support AI enrichments, adding infrastructure and objects to implement this use case.
+### Indexing for AI enrichment
-All of the above properties and parameters apply to indexers that perform AI enrichment. The following properties are specific to AI enrichment: **`skillSetName`**, **`outputFieldMappings`**, **`cache`** (preview and REST only).
+Indexers also drive [AI enrichment](cognitive-search-concept-intro.md). All of the above properties and parameters apply, but the following properties are specific to AI enrichment: **`skillSetName`**, **`outputFieldMappings`**, **`cache`**. A few other required and similarly named properties are added for context.
-```http
-POST /indexers?api-version=[api-version]
+```json
{
  "name": (required) String that uniquely identifies the indexer,
- "dataSourceName": (required) String, name of an existing data source,
+ "dataSourceName": (required) String, provides raw content that will be enriched,
  "targetIndexName": (required) String, name of an existing index,
  "skillsetName" : (required for AI enrichment) String, name of an existing skillset,
  "cache": {
- "storageConnectionString" : (required for caching AI enriched content) Connection string to a blob container,
+ "storageConnectionString" : (required if you enable the cache) Connection string to a blob container,
    "enableReprocessing": true
  },
- "parameters": {
- "batchSize": null,
- "maxFailedItems": null,
- "maxFailedItemsPerBatch": null
- },
- "fieldMappings": [],
- "outputFieldMappings" : (required for AI enrichment) { ... },
+ "parameters": { },
+ "fieldMappings": (optional) Maps fields in the underlying data source to fields in an index,
+ "outputFieldMappings" : (required) Maps skill outputs to fields in an index,
}
```
-AI enrichment is beyond the scope of this article. For more information, start with these articles: [AI enrichment](cognitive-search-concept-intro.md), [Skillsets in Azure Cognitive Search](cognitive-search-working-with-skillsets.md), and [Create Skillset (REST)](/rest/api/searchservice/create-skillset).
+AI enrichment is out of scope for this article. For more information, start with [Skillsets in Azure Cognitive Search](cognitive-search-working-with-skillsets.md), [Create a skillset](cognitive-search-defining-skillset.md), [Map enrichment output fields](cognitive-search-output-field-mapping.md), and [Enable caching for AI enrichment](search-howto-incremental-index.md).
## Prerequisites

+ Use a [supported data source](search-indexer-overview.md#supported-data-sources).

+ [Create a search index](search-how-to-create-search-index.md) that can accept incoming data.

+ Have admin rights. All operations related to indexers, including GET requests for status or definitions, require an [admin api-key](search-security-api-keys.md) on the request.
-All [service tiers limit](search-limits-quotas-capacity.md#indexer-limits) the number of objects that you can create. If you are experimenting on the Free tier, you can only have 3 objects of each type and 2 minutes of indexer processing (not including skillset processing).
+ Be under the [maximum limits](search-limits-quotas-capacity.md#indexer-limits) for your service tier. The Free tier allows three objects of each type and 1-3 minutes of indexer processing (3-10 minutes if there is a skillset).
+## Prepare data
+
+Indexers work with data sets. When you run an indexer, it connects to your data source, retrieves the data from the container or folder, and optionally serializes it into JSON before passing it to the search engine for indexing. This section describes the requirements of incoming data for text-based indexing.
+
+If your data is already JSON, the structure or shape of incoming data should correspond to the schema of your search index. Most indexes are fairly flat, where the fields collection consists of fields at the same level, but hierarchical or nested structures are possible through [complex fields and collections](search-howto-complex-data-types.md).
+
+If your data is relational, you will need to provide it as a flattened row set, where each row becomes a full or partial search document in the index. To flatten relational data into a row set, create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is a SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is generated by a query that uses a **FOR JSON AUTO** clause. You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that will produce the same result.
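The flattening idea described above can be sketched outside of SQL. The following is a minimal illustration, with hypothetical hotel and room data: parent/child rows from a join are grouped so each hotel becomes one row with its rooms embedded as a JSON string, roughly what a **FOR JSON AUTO** subquery produces.

```python
import json
from itertools import groupby

# Hypothetical result of a parent/child join: one row per room.
joined_rows = [
    {"HotelId": "1", "HotelName": "Stay-Kay", "RoomNumber": 101, "Rate": 120.0},
    {"HotelId": "1", "HotelName": "Stay-Kay", "RoomNumber": 102, "Rate": 145.0},
    {"HotelId": "2", "HotelName": "Old Century", "RoomNumber": 201, "Rate": 99.0},
]

def flatten(rows):
    """Group child rows under their parent, embedding children as JSON."""
    docs = []
    for hotel_id, group in groupby(rows, key=lambda r: r["HotelId"]):
        group = list(group)
        docs.append({
            "HotelId": hotel_id,
            "HotelName": group[0]["HotelName"],
            # embed the room records, similar to a FOR JSON AUTO subquery
            "Rooms": json.dumps([
                {"RoomNumber": r["RoomNumber"], "Rate": r["Rate"]} for r in group
            ]),
        })
    return docs

docs = flatten(joined_rows)
print(len(docs))  # 2 rows, one search document per hotel
```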
+
+If your data is file-based, the indexer generally creates one search document for each file, where the search document consists of fields for content and metadata. Depending on the file type, the indexer can sometimes parse one file into multiple search documents (for example, if the file is CSV and each row becomes a search document).
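The CSV case above can be sketched in a few lines: each row becomes its own document, keyed by the column headers. The file content here is hypothetical.

```python
import csv
import io

# Sketch of delimited-text parsing: each CSV row becomes one search
# document whose fields come from the header row. Data is hypothetical.
data = io.StringIO("id,title\n1,First post\n2,Second post\n")
documents = list(csv.DictReader(data))
print(len(documents))  # one search document per row
```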
+
+Remember to pull in only searchable and filterable data. Searchable data is text. Filterable data is alphanumeric. Cognitive Search cannot search over binary data in any format, although it can extract and infer text descriptions of image files (see [AI enrichment](cognitive-search-concept-intro.md)) to create searchable content. Likewise, large text can be broken down and analyzed by natural language models to find structure or relevant information, generating new content that you can add to a search document.
+
+Given that indexers don't fix data problems, other forms of data cleansing or manipulation might be needed. For more information, you should refer to the product documentation of your [Azure database product](../index.yml?product=databases).
+
+## Prepare an index
-## How to create indexers
+The output of an indexer is a search index, and the attributed fields in the index will receive the incoming data. Fields are the only receptors of external content, and depending on how the fields are attributed, the values for each field will be analyzed, tokenized, or stored as verbatim strings for filters, fuzzy search, and typeahead queries.
-When you are ready to create an indexer on a remote search service, you will need a search client in the form of a tool, like Azure portal or Postman, or code that instantiates an indexer client. We recommend the Azure portal or REST APIs for early development and proof-of-concept testing.
+Recall that indexers pass off the search documents to the search engine for indexing. Just as indexers have properties that determine execution behavior, an index schema has properties that profoundly affect how strings are indexed (only strings are analyzed and tokenized).
+
+Depending on analyzer assignments on each field, indexed strings might be different from what you passed in. You can evaluate the effects of analyzers using [Analyze Text (REST)](/rest/api/searchservice/test-analyzer). For more information about analyzers, see [Analyzers for text processing](search-analyzers.md).
+
+In terms of how indexers interact with an index, an indexer only checks field names and types. There is no validation step that ensures incoming content is correct for the corresponding search field in the index.
+
+## Create an indexer
+
+When you are ready to create an indexer on a remote search service, you will need a search client, such as Azure portal or Postman, or code that instantiates an indexer client. We recommend the Azure portal or REST APIs for early development and proof-of-concept testing.
### [**Azure portal**](#tab/indexer-portal)
The following screenshot shows where you can find these features in the portal.
:::image type="content" source="media/search-howto-create-indexers/portal-indexer-client.png" alt-text="hotels indexer" border="true":::
-### [**REST**](#tab/kstore-rest)
+### [**REST**](#tab/indexer-rest)
-Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as an indexer client. Using either tool, you can connect to your search service and send [Create Indexer (REST)](/rest/api/searchservice/create-indexer) requests. There are numerous tutorials and examples that demonstrate REST clients for creating objects.
+Both Postman and Visual Studio Code (with an extension for Azure Cognitive Search) can function as an indexer client. Using either tool, you can connect to your search service and send [Create Indexer (REST)](/rest/api/searchservice/create-indexer) or [Update indexer](/rest/api/searchservice/update-indexer) requests.
-Start with either of these articles to learn about each client:
+```http
+POST /indexers?api-version=[api-version]
+{
+ "name": (required) String that uniquely identifies the indexer,
+  "dataSourceName": (required) String indicating which existing data source to use,

+ "targetIndexName": (required) String,
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null
+ },
+ "fieldMappings": [ optional unless there are field discrepancies that need resolution]
+}
+```
+
+There are numerous tutorials and examples that demonstrate REST clients for creating objects. Start with either of these articles to learn about each client:
+ [Create a search index using REST and Postman](search-get-started-rest.md)

+ [Get started with Visual Studio Code and Azure Cognitive Search](search-get-started-vs-code.md)

Refer to [Indexer operations (REST)](/rest/api/searchservice/Indexer-operations) for help with formulating indexer requests.
-### [**.NET SDK**](#tab/kstore-csharp)
+### [**.NET SDK**](#tab/indexer-csharp)
For Cognitive Search, the Azure SDKs implement generally available features. As such, you can use any of the SDKs to create indexer-related objects. All of them provide a **SearchIndexerClient** that has methods for creating indexers and related objects, including skillsets.
## Run the indexer
-Unless you set the **`disabled=true`** in the indexer definition, an indexer runs immediately when you create the indexer on the service. This is the moment of truth where you will find out if there are data source connection errors, field mapping issues, or skillset problems.
+By default, an indexer runs immediately when you create it on the search service. You can override this behavior by setting "disabled" to true in the indexer definition. Indexer execution is the moment of truth where you will find out if there are data source connection errors, field mapping issues, or skillset problems.
There are several ways to run an indexer:
-+ Send an HTTP request for [Create Indexer](/rest/api/searchservice/create-indexer) or [Update indexer](/rest/api/searchservice/update-indexer) to add or change the definition, and run the indexer.
-
-+ Send an HTTP request for [Run Indexer](/rest/api/searchservice/run-indexer) to execute an indexer with no changes to the definition. For more information, see [Run or reset indexers](search-howto-run-reset-indexers.md).
-
-+ Run a program that calls SearchIndexerClient methods for create, update, or run.
++ Run on indexer creation or update (default).
-Alternatively, put the indexer [on a schedule](search-howto-schedule-indexers.md) to invoke processing at regular intervals.
++ Run on demand when there are no changes to the definition, or precede with reset for full indexing. For more information, see [Run or reset indexers](search-howto-run-reset-indexers.md).
-Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection. Change detection logic is a capability that's built into source platforms. If you're using a blob data source, changes in a blob container are detected automatically because Azure Storage exposes a LastModified property. Other data sources require explicit configuration. For guidance on leveraging change detection in other data sources, refer to the indexer docs for those sources:
++ [Schedule indexer processing](search-howto-schedule-indexers.md) to invoke execution at regular intervals.
-+ [Azure SQL database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
-+ [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
-+ [Azure Table Storage](search-howto-indexing-azure-tables.md)
-+ [Azure Cosmos DB](search-howto-index-cosmosdb.md)
+Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection.
-## Change detection and indexer state
+## Change detection and internal state
-Indexers can detect changes in the underlying data and only process new or updated documents on each indexer run. For example, if indexer status says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
+Change detection logic is a capability that's built into source platforms. If your data source supports change detection, an indexer can detect changes in the underlying data and only process new or updated documents on each indexer run, leaving unchanged content as-is. If indexer execution history says that a run was successful with `0/0` documents processed, it means that the indexer didn't find any new or changed rows or blobs in the underlying data source.
How an indexer supports change detection varies by data source:
For large indexing loads, an indexer also keeps track of the last document it processed through an internal high-water mark.
If you need to clear the high water mark to re-index in full, you can use [Reset Indexer](/rest/api/searchservice/reset-indexer). For more selective re-indexing, use [Reset Skills](/rest/api/searchservice/preview-api/reset-skills) or [Reset Documents](/rest/api/searchservice/preview-api/reset-documents). Through the reset APIs, you can clear internal state, and also flush the cache if you enabled [incremental enrichment](search-howto-incremental-index.md). For more background and comparison of each reset option, see [Run or reset indexers, skills, and documents](search-howto-run-reset-indexers.md).
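The high-water-mark pattern can be sketched as follows. This is a conceptual illustration only, not the service's implementation: each run processes only items modified after the stored mark (for example, a blob's LastModified property), and a reset clears the mark so the next run re-indexes everything.

```python
from datetime import datetime

# Conceptual sketch of high-water-mark change detection. The class and
# item shapes are hypothetical, not Cognitive Search internals.
class IndexerState:
    def __init__(self):
        self.high_water_mark = None  # highest LastModified processed so far

    def run(self, items):
        mark = self.high_water_mark or datetime.min
        new_items = [i for i in items if i["LastModified"] > mark]
        if new_items:
            self.high_water_mark = max(i["LastModified"] for i in new_items)
        return new_items  # documents this run would process

    def reset(self):
        # analogous to Reset Indexer: clear state to force full re-indexing
        self.high_water_mark = None

items = [{"id": "a", "LastModified": datetime(2022, 1, 10)},
         {"id": "b", "LastModified": datetime(2022, 1, 15)}]

state = IndexerState()
print(len(state.run(items)))  # first run processes 2 documents
print(len(state.run(items)))  # second run: 0/0, nothing changed
state.reset()
print(len(state.run(items)))  # after reset: full re-index, 2 again
```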
-## Data preparation
-
-Indexers expect a tabular row set, where each row becomes a full or partial search document in the index. Often, there is a one-to-one correspondence between a row in a database and the resulting search document, where all the fields in the row set fully populate each document. But you can use indexers to generate a subset of a document's fields, and fill in the remaining fields using a different indexer or methodology.
-
-To flatten relational data into a row set, you should create a SQL view, or build a query that returns parent and child records in the same row. For example, the built-in hotels sample dataset is a SQL database that has 50 records (one for each hotel), linked to room records in a related table. The query that flattens the collective data into a row set embeds all of the room information in JSON documents in each hotel record. The embedded room information is a generated by a query that uses a **FOR JSON AUTO** clause. You can learn more about this technique in [define a query that returns embedded JSON](index-sql-relational-data.md#define-a-query-that-returns-embedded-json). This is just one example; you can find other approaches that will produce the same result.
-
-In addition to flattened data, it's important to pull in only searchable data. Searchable data is alphanumeric. Cognitive Search cannot search over binary data in any format, although it can extract and infer text descriptions of image files (see [AI enrichment](cognitive-search-concept-intro.md)) to create searchable content. Likewise, using AI enrichment, large text can be analyzed by natural language models to find structure or relevant information, generating new content that you can add to a search document.
-
-Given that indexers don't fix data problems, other forms of data cleansing or manipulation might be needed. For more information, you should refer to the product documentation of your [Azure database product](../index.yml?product=databases).
-
-## Index preparation
+## Check results
-Recall that indexers pass off the search documents to the search engine for indexing. Just as indexers have properties that determine execution behavior, an index schema has properties that profoundly affect how strings are indexed (only strings are analyzed and tokenized). Depending on analyzer assignments, indexed strings might be different from what you passed in. You can evaluate the effects of analyzers using [Analyze Text (REST)](/rest/api/searchservice/test-analyzer). For more information about analyzers, see [Analyzers for text processing](search-analyzers.md).
+[Monitor indexer status](search-howto-monitor-indexers.md) to check the outcome of each run. Successful execution can still include warnings and notifications. Be sure to check both successful and failed runs for details about the job.
-In terms of how indexers interact with an index, an indexer only checks field names and types. There is no validation step that ensures incoming content is correct for the corresponding search field in the index. As a verification step, you can run queries on the populated index that return entire documents or selected fields. For more information about querying the contents of an index, see [Create a basic query](search-query-create.md).
+For additional verification, [run queries](search-query-create.md) on the populated index that return entire documents or selected fields.
## Next steps
-+ [Schedule indexers](search-howto-schedule-indexers.md)
-+ [Define field mappings](search-indexer-field-mappings.md)
-+ [Monitor indexer status](search-howto-monitor-indexers.md)
-+ [Connect using managed identities](search-howto-managed-identities-data-sources.md)
+ [Index data from Azure Blob Storage](search-howto-indexing-azure-blob-storage.md)
+ [Index data from Azure SQL database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+ [Index data from Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
+ [Index data from Azure Table Storage](search-howto-indexing-azure-tables.md)
+ [Index data from Azure Cosmos DB](search-howto-index-cosmosdb.md)
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
Title: Index data from Azure Blob Storage
+ Title: Azure Blob indexer
description: Set up an Azure Blob indexer to automate indexing of blob content for full text search operations and knowledge mining in Azure Cognitive Search.
- Previously updated : 12/17/2021
+ Last updated : 01/17/2022

# Configure a Blob indexer to import data from Azure Blob Storage
-In Azure Cognitive Search, blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing.
+In Azure Cognitive Search, blob [indexers](search-indexer-overview.md) are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing.
This article focuses on how to configure a blob indexer for text-based indexing, where just the textual content and metadata are loaded into a search index for full text search scenarios. Inputs are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Blob Storage.
## Prerequisites

+ [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), Standard performance (general-purpose v2).

+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-+ Blob content must not exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
-
-This article assumes a basic familiarity with [indexers](search-indexer-overview.md) and [indexer creation](search-howto-create-indexers.md), including tools and SDK support.
+ Blob content cannot exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.

<a name="SupportedFormats"></a>
The Azure Cognitive Search blob indexer can extract text from the following document formats:
## Define the data source
-A primary difference between a blob indexer and other indexers is the data source definition that's assigned to the indexer. The data source definition specifies the data source type ("type": "azureblob") and properties for authentication and connection to the content being indexed.
+A primary difference between a blob indexer and other indexers is the data source assignment. The data source definition specifies the type ("type": `"azureblob"`) and how to connect.
1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
}
```
-1. Set "type" to "azureblob" (required).
+1. Set "type" to `"azureblob"` (required).
1. Set "credentials" to the connection string, as shown in the above example, or one of the alternative approaches described in the next section.
You can provide the credentials for the blob container in one of these ways:
| Container shared access signature |
|--|
| `{ "connectionString" : "ContainerSharedAccessUri=https://<your storage account>.blob.core.windows.net/<container name>?sv=2016-05-31&sr=c&sig=<the signature>&se=<the validity end time>&sp=rl;" }` |
-| The SAS should have the list and read permissions on the container. For more information on storage shared access signatures, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
+| The SAS should have the list and read permissions on the container. For more information on Azure Storage shared access signatures, see [Using Shared Access Signatures](../storage/common/storage-sas-overview.md). |
> [!NOTE]
> If you use SAS credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration. If SAS credentials expire, the indexer will fail with an error message similar to "Credentials provided in the connection string are invalid or have expired".
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-tables.md
Title: Index data from Azure Table Storage
+ Title: Azure Table indexer
description: Set up a search indexer to index data stored in Azure Table Storage for full text search in Azure Cognitive Search.

- Previously updated : 06/26/2021
+ Last updated : 01/17/2022

# Index data from Azure Table Storage
-This article shows you how to configure an Azure table indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Table Storage.
+Configure a table [indexer](search-indexer-overview.md) in Azure Cognitive Search to retrieve, serialize, and index entity content from a single table in Azure Table Storage.
-You can set up an Azure Table Storage indexer by using any of these clients:
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing from Azure Table Storage.
-* [Azure portal](https://ms.portal.azure.com)
-* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
-* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+## Prerequisites
-This article uses the REST APIs.
++ [Azure Table Storage](../storage/tables/table-storage-overview.md)
-## Configure an indexer
++ Tables with entities containing non-binary data for text-based indexing
-### Step 1: Create a data source
+## Define the data source
-[Create Data Source](/rest/api/searchservice/create-data-source) specifies which data to index, the credentials needed to access the data, and the policies that enable Azure Cognitive Search to efficiently identify changes in the data.
+A primary difference between a table indexer and other indexers is the data source assignment. The data source definition specifies the type ("type": `"azuretable"`) and how to connect.
-For table indexing, the data source must have the following properties:
+1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
-- **name** is the unique name of the datasource within your search service.
-- **type** must be `azuretable`.
-- **credentials** parameter contains the storage account connection string. See the [Specify credentials](#Credentials) section for details.
-- **container** sets the table name and an optional query.
- - Specify the table name by using the `name` parameter.
- - Optionally, specify a query by using the `query` parameter.
+ ```json
+ {
+ "name" : "hotel-tables",
+ "type" : "azuretable",
+ "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
+ "container" : { "name" : "tblHotels", "query" : "PartitionKey eq '123'" }
+ }
+ ```
-> [!IMPORTANT]
-> Whenever possible, use a filter on PartitionKey for better performance. Any other query does a full table scan, resulting in poor performance for large tables. See the [Performance considerations](#Performance) section.
+1. Set "type" to `"azuretable"` (required).
-Send the following request to create a data source:
+1. Set "credentials" to the connection string. The following examples show commonly used connection strings for connections using shared access keys or a [system-managed identity](search-howto-managed-identities-storage.md). Additional examples are in the next section.
-```http
-POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
+ + `"connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;"`
-{
- "name" : "table-datasource",
- "type" : "azuretable",
- "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
- "container" : { "name" : "my-table", "query" : "PartitionKey eq '123'" }
-}
-```
+ + `"connectionString" : "ResourceId=/subscriptions/[your subscription ID]/[your resource ID]/providers/Microsoft.Storage/storageAccounts/[your storage account];"`
+
+1. Set "container" to the name of the table.
+
+1. Optionally, set "query" to a filter on PartitionKey. This is a best practice that improves performance. Any other kind of query forces a full table scan, which results in poor performance for large tables.
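The reason a PartitionKey filter helps can be sketched as follows. Entities in Table Storage are grouped by partition, so a filter on PartitionKey lets the service read a single partition; any other predicate degenerates into scanning every entity. The data and function below are hypothetical illustrations.

```python
# Hypothetical table contents: entities grouped by PartitionKey.
table = [
    {"PartitionKey": "123", "RowKey": "1", "HotelName": "Stay-Kay"},
    {"PartitionKey": "123", "RowKey": "2", "HotelName": "Old Century"},
    {"PartitionKey": "456", "RowKey": "1", "HotelName": "Gastronomic"},
]

def query_partition(entities, partition_key):
    """Select one partition, as a PartitionKey filter would.

    A real table service can seek directly to the partition; a filter on
    any other property forces a full table scan instead.
    """
    return [e for e in entities if e["PartitionKey"] == partition_key]

print(len(query_partition(table, "123")))  # 2 entities from one partition
```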
+
+> [!TIP]
+> The Import data wizard will build a data source for you, including a valid connection string for system-assigned and shared key credentials. If you have trouble setting up the connection programmatically, [use the wizard](search-get-started-portal.md) as a syntax check.
<a name="Credentials"></a>
-#### Ways to specify credentials ####
+### Credentials for Table Storage
-You can provide the credentials for the table in one of these ways:
+You can provide the credentials for the connection in one of these ways:
+ **Managed identity connection string**: `ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.Storage/storageAccounts/<your storage account name>/;` This connection string does not require an account key, but you must follow the instructions for [Setting up a connection to an Azure Storage account using a managed identity](search-howto-managed-identities-storage.md).

+ **Full access storage account connection string**: `DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>` You can get the connection string from the Azure portal by going to the **Storage account blade** > **Settings** > **Keys** (for classic storage accounts) or **Settings** > **Access keys** (for Azure Resource Manager storage accounts).

+ **Storage account shared access signature connection string**: `TableEndpoint=https://<your account>.table.core.windows.net/;SharedAccessSignature=?sv=2016-05-31&sig=<the signature>&spr=https&se=<the validity end time>&srt=co&ss=t&sp=rl` The shared access signature should have the list and read permissions on containers (tables in this case) and objects (table rows).

+ **Table shared access signature**: `ContainerSharedAccessUri=https://<your storage account>.table.core.windows.net/<table name>?tn=<table name>&sv=2016-05-31&sig=<the signature>&se=<the validity end time>&sp=r` The shared access signature should have query (read) permissions on the table.

For more information on storage shared access signatures, see [Using shared access signatures](../storage/common/storage-sas-overview.md).

> [!NOTE]
-> If you use shared access signature credentials, you will need to update the datasource credentials periodically with renewed signatures to prevent their expiration. If shared access signature credentials expire, the indexer fails with an error message similar to "Credentials provided in the connection string are invalid or have expired."
+> If you use shared access signature credentials, you will need to update the data source credentials periodically with renewed signatures to prevent their expiration or the indexer will fail with a "Credentials provided in the connection string are invalid or have expired" message.
-### Step 2: Create an index
+## Define fields in a search index
-[Create Index](/rest/api/searchservice/create-index) specifies the fields in a document, the attributes, and other constructs that shape the search experience.
+1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store content from entities:
-Send the following request to create an index:
+ ```http
+ POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+
+ {
+ "name" : "my-target-index",
+ "fields": [
+ { "name": "key", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
+ ]
+ }
+ ```
-```http
-POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
+1. Check for field correspondence between entity fields and search fields. If names and types don't match, [add field mappings](search-indexer-field-mappings.md) to the indexer definition to ensure the source-to-destination path is clear.
-{
- "name" : "my-target-index",
- "fields": [
- { "name": "key", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
- ]
-}
-```
+1. Create a key field, but do not define field mappings to alternative unique strings in the table.
+
+    A table indexer will populate the key field with concatenated partition and row keys from the table. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the `Key` field's value is `PK1RK1`. If the partition key is null, just the row key is used.
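The key concatenation described above can be sketched as follows. `table_document_key` and `encoded_key` are hypothetical helpers, and the URL-safe encoding only approximates the `base64Encode` mapping function:

```python
import base64

def table_document_key(partition_key, row_key):
    # The table indexer concatenates PartitionKey and RowKey to form the
    # document key; a null partition key leaves just the row key.
    return (partition_key or "") + row_key

def encoded_key(key):
    # Rough stand-in for the base64Encode field mapping function, useful
    # when the raw key contains characters invalid in document keys.
    return base64.urlsafe_b64encode(key.encode()).decode()
```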
-### Step 3: Create an indexer
+## Set properties on the indexer
[Create Indexer](/rest/api/searchservice/create-indexer) connects a data source with a target search index and provides a schedule to automate the data refresh.
-After the index and data source are created, you're ready to create the indexer:
+An indexer definition for Table Storage uses the global properties for data source, index, [schedule](search-howto-schedule-indexers.md), mapping functions for base-64 encoding, and any field mappings.
```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
api-key: [admin key]
}
```
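As a hedged illustration of the field mappings an indexer definition can carry, a mapping takes a source/target shape like the following. The property and field names here are assumptions for demonstration only:

```python
# Hypothetical field mappings for a table indexer: rename a source
# property, and base64-encode the generated Key into the index key field.
field_mappings = [
    {"sourceFieldName": "SomeColumnInMyTable", "targetFieldName": "content"},
    {
        "sourceFieldName": "Key",
        "targetFieldName": "key",
        "mappingFunction": {"name": "base64Encode"},
    },
]
```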
-This indexer runs every two hours. (The schedule interval is set to "PT2H".) To run an indexer every 30 minutes, set the interval to "PT30M". The shortest supported interval is five minutes. The schedule is optional; if omitted, an indexer runs only once when it's created. However, you can run an indexer on demand at any time. For more information about defining indexer schedules, see [Schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
-
-## Handle field name discrepancies
-
-Sometimes, the field names in your existing index are different from the property names in your table. You can use field mappings to map the property names from the table to the field names in your search index. To learn more about field mappings, see [Azure Cognitive Search indexer field mappings bridge the differences between datasources and search indexes](search-indexer-field-mappings.md).
+## Change and deletion detection
-## Handle document keys
-
-In Azure Cognitive Search, the document key uniquely identifies a document. Every search index must have exactly one key field of type `Edm.String`. The key field is required for each document that is being added to the index. (In fact, it's the only required field.)
-
-Because table rows have a compound key, Azure Cognitive Search generates a synthetic field called `Key` that is a concatenation of partition key and row key values. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the `Key` field's value is `PK1RK1`.
-
-> [!NOTE]
-> The `Key` value may contain characters that are invalid in document keys, such as dashes. You can deal with invalid characters by using the `base64Encode` [field mapping function](search-indexer-field-mappings.md#base64EncodeFunction). If you do this, remember to also use URL-safe Base64 encoding when passing document keys in API calls such as Lookup.
->
+When you set up a table indexer to run on a schedule, it reindexes only new or updated rows, as determined by a row's `Timestamp` value. When indexing out of Azure Table Storage, you don't have to specify a change detection policy. Incremental indexing is enabled for you automatically.
-## Incremental indexing and deletion detection
-
-When you set up a table indexer to run on a schedule, it reindexes only new or updated rows, as determined by a row's `Timestamp` value. You don't have to specify a change detection policy. Incremental indexing is enabled for you automatically.
-
-To indicate that certain documents must be removed from the index, you can use a soft delete strategy. Instead of deleting a row, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the datasource. For example, the following policy considers that a row is deleted if the row has a property `IsDeleted` with the value `"true"`:
+To indicate that certain documents must be removed from the index, you can use a soft delete strategy. Instead of deleting a row, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the data source. For example, the following policy considers that a row is deleted if the row has a property `IsDeleted` with the value `"true"`:
```http
PUT https://[service name].search.windows.net/datasources?api-version=2020-06-30
api-key: [admin key]
```
## Performance considerations
-By default, Azure Cognitive Search uses the following query filter: `Timestamp >= HighWaterMarkValue`. Because Azure tables don't have a secondary index on the `Timestamp` field, this type of query requires a full table scan and is therefore slow for large tables.
+By default, Azure Cognitive Search uses the following internal query filter to keep track of which source entities have been updated since the last run: `Timestamp >= HighWaterMarkValue`.
-Here are two possible approaches for improving table indexing performance. Both of these approaches rely on using table partitions:
+Because Azure tables don't have a secondary index on the `Timestamp` field, this type of query requires a full table scan and is therefore slow for large tables.
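The high-water-mark change detection described above can be simulated locally. This sketch, with a hypothetical `rows_to_reindex` helper, shows how the filter limits which entities an indexer re-reads on each run:

```python
def rows_to_reindex(rows, high_water_mark):
    # Mimics the indexer's change detection: only entities whose Timestamp
    # is at or beyond the high-water mark are re-read on the next run.
    # ISO 8601 timestamps compare correctly as strings.
    return [row for row in rows if row["Timestamp"] >= high_water_mark]
```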
-- If your data can naturally be partitioned into several partition ranges, create a datasource and a corresponding indexer for each partition range. Each indexer now has to process only a specific partition range, resulting in better query performance. If the data that needs to be indexed has a small number of fixed partitions, even better: each indexer only does a partition scan. For example, to create a datasource for processing a partition range with keys from `000` to `100`, use a query like this:
- ```
+Here are two possible approaches for improving table indexing performance. Both rely on using table partitions:
+++ If your data can naturally be partitioned into several partition ranges, create a data source and a corresponding indexer for each partition range. Each indexer now has to process only a specific partition range, resulting in better query performance. If the data that needs to be indexed has a small number of fixed partitions, even better: each indexer only does a partition scan. For example, to create a data source for processing a partition range with keys from `000` to `100`, use a query like this: +
+ ```json
"container" : { "name" : "my-table", "query" : "PartitionKey ge '000' and PartitionKey lt '100' " } ``` -- If your data is partitioned by time (for example, you create a new partition every day or week), consider the following approach:
- - Use a query of the form: `(PartitionKey ge <TimeStamp>) and (other filters)`.
- - Monitor indexer progress by using [Get Indexer Status API](/rest/api/searchservice/get-indexer-status), and periodically update the `<TimeStamp>` condition of the query based on the latest successful high-water-mark value.
- - With this approach, if you need to trigger a complete reindexing, you need to reset the datasource query in addition to resetting the indexer.
++ If your data is partitioned by time (for example, you create a new partition every day or week), consider the following approach:
+
+ + Use a query of the form: `(PartitionKey ge <TimeStamp>) and (other filters)`.
+
+ + Monitor indexer progress by using [Get Indexer Status API](/rest/api/searchservice/get-indexer-status), and periodically update the `<TimeStamp>` condition of the query based on the latest successful high-water-mark value.
+
+    + With this approach, if you need to trigger a complete reindexing, reset the data source query in addition to resetting the indexer.
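The time-partitioned approach above amounts to regenerating the data source query from the latest successful high-water-mark value. A minimal sketch, with a hypothetical `partition_query` helper:

```python
def partition_query(timestamp, other_filters=None):
    # Hypothetical helper: rebuild the data source query of the form
    # "(PartitionKey ge <TimeStamp>) and (other filters)" from the
    # latest successful high-water-mark value.
    query = f"(PartitionKey ge '{timestamp}')"
    if other_filters:
        query += f" and ({other_filters})"
    return query
```

A scheduled job could call this after checking Get Indexer Status, then PUT the updated query back onto the data source definition.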
## See also
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-monitor-indexers.md
For more information about investigating indexer errors and warnings, see [Index
## Monitor with Azure Monitoring Metrics
-Cognitive Search is a monitored resource in Azure Monitor, which means that you can use [Metrics Explorer](/azure/azure-monitor/essentials/data-platform-metrics#metrics-explorer) to see basic metrics about the number of indexer-processed documents and skill invocations. These metrics can be used to monitor indexer progress and [set up alerts](/azure/azure-monitor/alerts/alerts-metric-overview).
+Cognitive Search is a monitored resource in Azure Monitor, which means that you can use [Metrics Explorer](../azure-monitor/essentials/data-platform-metrics.md#metrics-explorer) to see basic metrics about the number of indexer-processed documents and skill invocations. These metrics can be used to monitor indexer progress and [set up alerts](../azure-monitor/alerts/alerts-metric-overview.md).
Metric views can be filtered or split up by a set of predefined dimensions.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Cross-tenant/Cross-workspace incidents view](../../sentinel/multiple-workspace-view.md) |GA | GA |
| - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview |
|- [SOC incident audit metrics](../../sentinel/manage-soc-with-incident-metrics.md) | GA | GA |
-| - [Incident advanced search](/azure/sentinel/investigate-cases#search-for-incidents) |GA |GA |
-| - [Microsoft Teams integrations](/azure/sentinel/collaborate-in-microsoft-teams) |Public Preview |Not Available |
+| - [Incident advanced search](../../sentinel/investigate-cases.md#search-for-incidents) |GA |GA |
+| - [Microsoft Teams integrations](../../sentinel/collaborate-in-microsoft-teams.md) |Public Preview |Not Available |
|- [Bring Your Own ML (BYO-ML)](../../sentinel/bring-your-own-ml.md) | Public Preview | Public Preview |
| **Notebooks** | | |
|- [Notebooks](../../sentinel/notebooks.md) | GA | GA |
-| - [Notebook integration with Azure Synapse](/azure/sentinel/notebooks-with-synapse) | Public Preview | Not Available|
+| - [Notebook integration with Azure Synapse](../../sentinel/notebooks-with-synapse.md) | Public Preview | Not Available|
| **Watchlists** | | |
|- [Watchlists](../../sentinel/watchlists.md) | GA | GA |
| **Hunting** | | |
| - [Hunting](../../sentinel/hunting.md) | GA | GA |
| **Content and content management** | | |
-| - [Content hub](/azure/sentinel/sentinel-solutions) and [solutions](/azure/sentinel/sentinel-solutions-catalog) | Public preview | Not Available|
-| - [Repositories](/azure/sentinel/ci-cd?tabs=github) | Public preview | Not Available |
+| - [Content hub](../../sentinel/sentinel-solutions.md) and [solutions](../../sentinel/sentinel-solutions-catalog.md) | Public preview | Not Available|
+| - [Repositories](../../sentinel/ci-cd.md?tabs=github) | Public preview | Not Available |
| **Data collection** | | |
-| - [Advanced SIEM Information Model (ASIM)](/azure/sentinel/normalization) | Public Preview | Not Available |
+| - [Advanced SIEM Information Model (ASIM)](../../sentinel/normalization.md) | Public Preview | Not Available |
| **Threat intelligence support** | | |
| - [Threat Intelligence - TAXII data connector](../../sentinel/understand-threat-intelligence.md) | GA | GA |
| - [Threat Intelligence Platform data connector](../../sentinel/understand-threat-intelligence.md) | Public Preview | Not Available |
| - [Threat Intelligence Research Blade](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-threat-intelligence-menu-item-in-public-preview/ba-p/1646597) | GA | GA |
| - [URL Detonation](https://techcommunity.microsoft.com/t5/azure-sentinel/using-the-new-built-in-url-detonation-in-azure-sentinel/ba-p/996229) | Public Preview | Not Available |
| - [Threat Intelligence workbook](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) | GA | GA |
-| - [GeoLocation and WhoIs data enrichment](/azure/sentinel/work-with-threat-indicators) | Public Preview | Not Available |
-| - [Threat intelligence matching analytics](/azure/sentinel/work-with-threat-indicators) | Public Preview |Not Available |
+| - [GeoLocation and WhoIs data enrichment](../../sentinel/work-with-threat-indicators.md) | Public Preview | Not Available |
+| - [Threat intelligence matching analytics](../../sentinel/work-with-threat-indicators.md) | Public Preview |Not Available |
|**Detection support** | | |
| - [Fusion](../../sentinel/fusion.md)<br>Advanced multistage attack detections <sup>[1](#footnote1)</sup> | GA | GA |
-| - [Fusion detection for ransomware](/azure/sentinel/fusion#fusion-for-ransomware) | Public Preview | Not Available |
-| - [Fusion for emerging threats](/azure/sentinel/fusion#fusion-for-emerging-threats) | Public Preview |Not Available |
+| - [Fusion detection for ransomware](../../sentinel/fusion.md#fusion-for-ransomware) | Public Preview | Not Available |
+| - [Fusion for emerging threats](../../sentinel/fusion.md#fusion-for-emerging-threats) | Public Preview |Not Available |
| - [Anomalous Windows File Share Access Detection](../../sentinel/fusion.md) | Public Preview | Not Available |
-| - [Anomalous RDP Login Detection](/azure/sentinel/data-connectors-reference#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
+| - [Anomalous RDP Login Detection](../../sentinel/data-connectors-reference.md#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
| - [Anomalous SSH login detection](../../sentinel/connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
| **Azure service connectors** | | |
| - [Azure Activity Logs](../../sentinel/data-connectors-reference.md#azure-activity) | GA | GA |
The following tables display the current Microsoft Sentinel feature availability
| - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA |
| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
-| - [Microsoft Insider Risk Management](/azure/sentinel/sentinel-solutions-catalog#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection) | Public Preview | Not Available |
| - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
The following tables display the current Microsoft Sentinel feature availability
| - [Alcide kAudit](../../sentinel/data-connectors-reference.md#alcide-kaudit) | Public Preview | Not Available |
| - [Alsid for Active Directory](../../sentinel/data-connectors-reference.md#alsid-for-active-directory) | Public Preview | Not Available |
| - [Apache HTTP Server](../../sentinel/data-connectors-reference.md#apache-http-server) | Public Preview | Not Available |
-| - [Arista Networks](/azure/sentinel/sentinel-solutions-catalog) | Public Preview | Not Available |
-| - [Armorblox](/azure/sentinel/sentinel-solutions-catalog#armorblox) | Public Preview | Not Available |
+| - [Arista Networks](../../sentinel/sentinel-solutions-catalog.md) | Public Preview | Not Available |
+| - [Armorblox](../../sentinel/sentinel-solutions-catalog.md#armorblox) | Public Preview | Not Available |
| - [Aruba ClearPass](../../sentinel/data-connectors-reference.md#aruba-clearpass-preview) | Public Preview | Public Preview |
| - [AWS](../../sentinel/connect-data-sources.md) | GA | GA |
| - [Barracuda CloudGen Firewall](../../sentinel/data-connectors-reference.md#barracuda-cloudgen-firewall) | GA | GA |
The following tables display the current Microsoft Sentinel feature availability
| - [BETTER Mobile Threat Defense MTD](../../sentinel/data-connectors-reference.md#better-mobile-threat-defense-mtd-preview) | Public Preview | Not Available |
| - [Beyond Security beSECURE](../../sentinel/data-connectors-reference.md#beyond-security-besecure) | Public Preview | Not Available |
| - [Blackberry CylancePROTECT](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
-| - [Box](/azure/sentinel/sentinel-solutions-catalog#box) | Public Preview | Not Available |
+| - [Box](../../sentinel/sentinel-solutions-catalog.md#box) | Public Preview | Not Available |
| - [Broadcom Symantec DLP](../../sentinel/data-connectors-reference.md#broadcom-symantec-data-loss-prevention-dlp-preview) | Public Preview | Public Preview |
| - [Check Point](../../sentinel/data-connectors-reference.md#check-point) | GA | GA |
-| - [Cisco ACI](/azure/sentinel/sentinel-solutions-catalog#cisco) | Public Preview | Not Available |
+| - [Cisco ACI](../../sentinel/sentinel-solutions-catalog.md#cisco) | Public Preview | Not Available |
| - [Cisco ASA](../../sentinel/data-connectors-reference.md#cisco-asa) | GA | GA |
-| - [Cisco Duo Security](/azure/sentinel/sentinel-solutions-catalog#cisco) | Public Preview | Not Available |
-| - [Cisco ISE](/azure/sentinel/sentinel-solutions-catalog#cisco) | Public Preview | Not Available |
+| - [Cisco Duo Security](../../sentinel/sentinel-solutions-catalog.md#cisco) | Public Preview | Not Available |
+| - [Cisco ISE](../../sentinel/sentinel-solutions-catalog.md#cisco) | Public Preview | Not Available |
| - [Cisco Meraki](../../sentinel/data-connectors-reference.md#cisco-meraki-preview) | Public Preview | Public Preview |
-| - [Cisco Secure Email Gateway / ESA](/azure/sentinel/sentinel-solutions-catalog#cisco) | Public Preview | Not Available |
+| - [Cisco Secure Email Gateway / ESA](../../sentinel/sentinel-solutions-catalog.md#cisco) | Public Preview | Not Available |
| - [Cisco Umbrella](../../sentinel/data-connectors-reference.md#cisco-umbrella-preview) | Public Preview | Public Preview |
| - [Cisco UCS](../../sentinel/data-connectors-reference.md#cisco-unified-computing-system-ucs-preview) | Public Preview | Public Preview |
| - [Cisco Firepower EStreamer](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
-| - [Cisco Web Security Appliance (WSA)](/azure/sentinel/sentinel-solutions-catalog#cisco) | Public Preview | Not Available |
+| - [Cisco Web Security Appliance (WSA)](../../sentinel/sentinel-solutions-catalog.md#cisco) | Public Preview | Not Available |
| - [Citrix Analytics WAF](../../sentinel/data-connectors-reference.md#citrix-web-app-firewall-waf-preview) | GA | GA |
-| - [Cloudflare](/azure/sentinel/sentinel-solutions-catalog#cloudflare) | Public Preview | Not Available |
+| - [Cloudflare](../../sentinel/sentinel-solutions-catalog.md#cloudflare) | Public Preview | Not Available |
| - [Common Event Format (CEF)](../../sentinel/connect-common-event-format.md) | GA | GA |
-| - [Contrast Security](/azure/sentinel/sentinel-solutions-catalog#contrast-security) | Public Preview | Not Available |
-| - [CrowdStrike](/azure/sentinel/sentinel-solutions-catalog#crowdstrike) | Public Preview | Not Available |
+| - [Contrast Security](../../sentinel/sentinel-solutions-catalog.md#contrast-security) | Public Preview | Not Available |
+| - [CrowdStrike](../../sentinel/sentinel-solutions-catalog.md#crowdstrike) | Public Preview | Not Available |
| - [CyberArk Enterprise Password Vault (EPV) Events](../../sentinel/data-connectors-reference.md#cyberark-enterprise-password-vault-epv-events-preview) | Public Preview | Public Preview |
-| - [Digital Guardian](/azure/sentinel/sentinel-solutions-catalog#digital-guardian) | Public Preview | Not Available |
+| - [Digital Guardian](../../sentinel/sentinel-solutions-catalog.md#digital-guardian) | Public Preview | Not Available |
| - [ESET Enterprise Inspector](../../sentinel/connect-data-sources.md) | Public Preview | Not Available |
| - [Eset Security Management Center](../../sentinel/connect-data-sources.md) | Public Preview | Not Available |
| - [ExtraHop Reveal(x)](../../sentinel/data-connectors-reference.md#extrahop-revealx) | GA | GA |
| - [F5 BIG-IP ](../../sentinel/data-connectors-reference.md#f5-big-ip) | GA | GA |
| - [F5 Networks](../../sentinel/data-connectors-reference.md#f5-networks-asm) | GA | GA |
-| - [FireEye NX (Network Security)](/azure/sentinel/sentinel-solutions-catalog#fireeye-nx-network-security) | Public Preview | Not Available |
-| - [Flare Systems Firework](/azure/sentinel/sentinel-solutions-catalog#flare-systems-framework) | Public Preview | Not Available |
+| - [FireEye NX (Network Security)](../../sentinel/sentinel-solutions-catalog.md#fireeye-nx-network-security) | Public Preview | Not Available |
+| - [Flare Systems Firework](../../sentinel/sentinel-solutions-catalog.md) | Public Preview | Not Available |
| - [Forcepoint NGFW](../../sentinel/data-connectors-reference.md#forcepoint-cloud-access-security-broker-casb-preview) | Public Preview | Public Preview |
| - [Forcepoint CASB](../../sentinel/data-connectors-reference.md#forcepoint-cloud-access-security-broker-casb-preview) | Public Preview | Public Preview |
| - [Forcepoint DLP ](../../sentinel/data-connectors-reference.md#forcepoint-data-loss-prevention-dlp-preview) | Public Preview | Not Available |
-| - [Forescout](/azure/sentinel/sentinel-solutions-catalog#forescout) | Public Preview | Not Available |
+| - [Forescout](../../sentinel/sentinel-solutions-catalog.md#forescout) | Public Preview | Not Available |
| - [ForgeRock Common Audit for CEF](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
| - [Fortinet](../../sentinel/data-connectors-reference.md#fortinet) | GA | GA |
-| - [Google Cloud Platform DNS](/azure/sentinel/sentinel-solutions-catalog#google) | Public Preview | Not Available |
-| - [Google Cloud Platform](/azure/sentinel/sentinel-solutions-catalog#google) | Public Preview | Not Available |
+| - [Google Cloud Platform DNS](../../sentinel/sentinel-solutions-catalog.md#google) | Public Preview | Not Available |
+| - [Google Cloud Platform](../../sentinel/sentinel-solutions-catalog.md#google) | Public Preview | Not Available |
| - [Google Workspace (G Suite) ](../../sentinel/data-connectors-reference.md#google-workspace-g-suite-preview) | Public Preview | Not Available |
| - [Illusive Attack Management System](../../sentinel/data-connectors-reference.md#illusive-attack-management-system-ams-preview) | Public Preview | Public Preview |
| - [Imperva WAF Gateway](../../sentinel/data-connectors-reference.md#imperva-waf-gateway-preview) | Public Preview | Public Preview |
-| - [InfoBlox Cloud](/azure/sentinel/sentinel-solutions-catalog#infoblox) | Public Preview | Not Available |
+| - [InfoBlox Cloud](../../sentinel/sentinel-solutions-catalog.md#infoblox) | Public Preview | Not Available |
| - [Infoblox NIOS](../../sentinel/data-connectors-reference.md#infoblox-network-identity-operating-system-nios-preview) | Public Preview | Public Preview |
-| - [Juniper IDP](/azure/sentinel/sentinel-solutions-catalog#juniper) | Public Preview | Not Available |
+| - [Juniper IDP](../../sentinel/sentinel-solutions-catalog.md#juniper) | Public Preview | Not Available |
| - [Juniper SRX](../../sentinel/data-connectors-reference.md#juniper-srx-preview) | Public Preview | Public Preview |
-| - [Kaspersky AntiVirus](/azure/sentinel/sentinel-solutions-catalog#kaspersky) | Public Preview | Not Available |
-| - [Lookout Mobile Threat Defense](/azure/sentinel/data-connectors-reference#lookout-mobile-threat-defense-preview) | Public Preview | Not Available |
-| - [McAfee ePolicy](/azure/sentinel/sentinel-solutions-catalog#mcafee) | Public Preview | Not Available |
-| - [McAfee Network Security Platform](/azure/sentinel/sentinel-solutions-catalog#mcafee) | Public Preview | Not Available |
+| - [Kaspersky AntiVirus](../../sentinel/sentinel-solutions-catalog.md#kaspersky) | Public Preview | Not Available |
+| - [Lookout Mobile Threat Defense](../../sentinel/data-connectors-reference.md#lookout-mobile-threat-defense-preview) | Public Preview | Not Available |
+| - [McAfee ePolicy](../../sentinel/sentinel-solutions-catalog.md#mcafee) | Public Preview | Not Available |
+| - [McAfee Network Security Platform](../../sentinel/sentinel-solutions-catalog.md#mcafee) | Public Preview | Not Available |
| - [Morphisec UTPP](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
| - [Netskope](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
| - [NXLog Windows DNS](../../sentinel/data-connectors-reference.md#nxlog-dns-logs-preview) | Public Preview | Not Available |
The following tables display the current Microsoft Sentinel feature availability
| - [Okta Single Sign On](../../sentinel/data-connectors-reference.md#okta-single-sign-on-preview) | Public Preview | Public Preview |
| - [Onapsis Platform](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
| - [One Identity Safeguard](../../sentinel/data-connectors-reference.md#one-identity-safeguard-preview) | GA | GA |
-| - [Oracle Cloud Infrastructure](/azure/sentinel/sentinel-solutions-catalog#oracle)| Public Preview | Not Available |
-| - [Oracle Database Audit](/azure/sentinel/sentinel-solutions-catalog#oracle)| Public Preview | Not Available |
+| - [Oracle Cloud Infrastructure](../../sentinel/sentinel-solutions-catalog.md#oracle)| Public Preview | Not Available |
+| - [Oracle Database Audit](../../sentinel/sentinel-solutions-catalog.md#oracle)| Public Preview | Not Available |
| - [Orca Security Alerts](../../sentinel/data-connectors-reference.md#orca-security-preview) | Public Preview | Not Available |
| - [Palo Alto Networks](../../sentinel/data-connectors-reference.md#palo-alto-networks) | GA | GA |
| - [Perimeter 81 Activity Logs](../../sentinel/data-connectors-reference.md#perimeter-81-activity-logs-preview) | GA | Not Available |
-| - [Ping Identity](/azure/sentinel/sentinel-solutions-catalog#ping-identity) | Public Preview | Not Available |
+| - [Ping Identity](../../sentinel/sentinel-solutions-catalog.md#ping-identity) | Public Preview | Not Available |
| - [Proofpoint On Demand Email Security](../../sentinel/data-connectors-reference.md#proofpoint-on-demand-pod-email-security-preview) | Public Preview | Not Available |
| - [Proofpoint TAP](../../sentinel/data-connectors-reference.md#proofpoint-targeted-attack-protection-tap-preview) | Public Preview | Public Preview |
| - [Pulse Connect Secure](../../sentinel/data-connectors-reference.md#proofpoint-targeted-attack-protection-tap-preview) | Public Preview | Public Preview |
| - [Qualys Vulnerability Management](../../sentinel/data-connectors-reference.md#qualys-vulnerability-management-vm-preview) | Public Preview | Public Preview |
-| - [Rapid7](/azure/sentinel/sentinel-solutions-catalog#rapid7) | Public Preview | Not Available |
-| - [RSA SecurID](/azure/sentinel/sentinel-solutions-catalog#rsa) | Public Preview | Not Available |
+| - [Rapid7](../../sentinel/sentinel-solutions-catalog.md#rapid7) | Public Preview | Not Available |
+| - [RSA SecurID](../../sentinel/sentinel-solutions-catalog.md#rsa) | Public Preview | Not Available |
| - [Salesforce Service Cloud](../../sentinel/data-connectors-reference.md#salesforce-service-cloud-preview) | Public Preview | Not Available |
-| - [SAP (Continuous Threat Monitoring for SAP)](/azure/sentinel/sap-deploy-solution) | Public Preview | Not Available |
-| - [Semperis](/azure/sentinel/sentinel-solutions-catalog#semperis) | Public Preview | Not Available |
-| - [Senserva Pro](/azure/sentinel/sentinel-solutions-catalog#senserva-pro) | Public Preview | Not Available |
-| - [Slack Audit](/azure/sentinel/sentinel-solutions-catalog#slack) | Public Preview | Not Available |
+| - [SAP (Continuous Threat Monitoring for SAP)](../../sentinel/sap-deploy-solution.md) | Public Preview | Not Available |
+| - [Semperis](../../sentinel/sentinel-solutions-catalog.md#semperis) | Public Preview | Not Available |
+| - [Senserva Pro](../../sentinel/sentinel-solutions-catalog.md#senserva-pro) | Public Preview | Not Available |
+| - [Slack Audit](../../sentinel/sentinel-solutions-catalog.md#slack) | Public Preview | Not Available |
| - [SonicWall Firewall ](../../sentinel/data-connectors-reference.md#sophos-cloud-optix-preview) | Public Preview | Public Preview |
-| - [Sonrai Security](/azure/sentinel/sentinel-solutions-catalog#sonrai-security) | Public Preview | Not Available |
+| - [Sonrai Security](../../sentinel/sentinel-solutions-catalog.md#sonrai-security) | Public Preview | Not Available |
| - [Sophos Cloud Optix](../../sentinel/data-connectors-reference.md#sophos-cloud-optix-preview) | Public Preview | Not Available |
| - [Sophos XG Firewall](../../sentinel/data-connectors-reference.md#sophos-xg-firewall-preview) | Public Preview | Public Preview |
| - [Squadra Technologies secRMM](../../sentinel/data-connectors-reference.md#squadra-technologies-secrmm) | GA | GA |
The following tables display the current Microsoft Sentinel feature availability
| - [Symantec ProxySG](../../sentinel/data-connectors-reference.md#symantec-proxysg-preview) | Public Preview | Public Preview |
| - [Symantec VIP](../../sentinel/data-connectors-reference.md#symantec-vip-preview) | Public Preview | Public Preview |
| - [Syslog](../../sentinel/connect-syslog.md) | GA | GA |
-| - [Tenable](/azure/sentinel/sentinel-solutions-catalog#tenable) | Public Preview | Not Available |
+| - [Tenable](../../sentinel/sentinel-solutions-catalog.md#tenable) | Public Preview | Not Available |
| - [Thycotic Secret Server](../../sentinel/data-connectors-reference.md#thycotic-secret-server-preview) | Public Preview | Public Preview |
| - [Trend Micro Deep Security](../../sentinel/data-connectors-reference.md#trend-micro-deep-security) | GA | GA |
| - [Trend Micro TippingPoint](../../sentinel/data-connectors-reference.md#trend-micro-tippingpoint-preview) | Public Preview | Public Preview |
| - [Trend Micro XDR](../../sentinel/connect-data-sources.md) | Public Preview | Not Available |
-| - [Ubiquiti](/azure/sentinel/sentinel-solutions-catalog#ubiquiti) | Public Preview | Not Available |
-| - [vArmour](/azure/sentinel/sentinel-solutions-catalog#varmour) | Public Preview | Not Available |
-| - [Vectra](/azure/sentinel/sentinel-solutions-catalog#vectra) | Public Preview | Not Available |
+| - [Ubiquiti](../../sentinel/sentinel-solutions-catalog.md#ubiquiti) | Public Preview | Not Available |
+| - [vArmour](../../sentinel/sentinel-solutions-catalog.md#varmour) | Public Preview | Not Available |
+| - [Vectra](../../sentinel/sentinel-solutions-catalog.md#vectra) | Public Preview | Not Available |
| - [VMware Carbon Black Endpoint Standard](../../sentinel/data-connectors-reference.md#vmware-carbon-black-endpoint-standard-preview) | Public Preview | Public Preview |
| - [VMware ESXi](../../sentinel/data-connectors-reference.md#vmware-esxi-preview) | Public Preview | Public Preview |
| - [WireX Network Forensics Platform](../../sentinel/data-connectors-reference.md#wirex-network-forensics-platform-preview) | Public Preview | Public Preview |
-| - [Zeek Network (Corelight)](/azure/sentinel/sentinel-solutions-catalog#zeek-network) | Public Preview | Not Available |
+| - [Zeek Network (Corelight)](../../sentinel/sentinel-solutions-catalog.md#zeek-network) | Public Preview | Not Available |
| - [Zimperium Mobile Threat Defense](../../sentinel/data-connectors-reference.md#zimperium-mobile-thread-defense-preview) | Public Preview | Not Available | | - [Zscaler](../../sentinel/data-connectors-reference.md#zscaler) | GA | GA | | | | |
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
| Connector | Azure | Azure Government | |--|--|--|
-| **[Office IRM](/azure/sentinel/data-connectors-reference#microsoft-365-insider-risk-management-irm-preview)** | | |
+| **[Office IRM](../../sentinel/data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview)** | | |
| - Office 365 GCC | Public Preview | - | | - Office 365 GCC High | - | Not Available | | - Office 365 DoD | - | Not Available |
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
| - Office 365 GCC | GA | - | | - Office 365 GCC High | - | GA | | - Office 365 DoD | - | GA |
-| **[Teams](/azure/sentinel/sentinel-solutions-catalog#microsoft)** | | |
+| **[Teams](../../sentinel/sentinel-solutions-catalog.md#microsoft)** | | |
| - Office 365 GCC | Public Preview | - | | - Office 365 GCC High | - | Not Available | | - Office 365 DoD | - | Not Available |
sentinel Connect Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-active-directory.md
You can use Microsoft Sentinel's built-in connector to collect data from [Azure
- An Azure Active Directory P1 or P2 license is required to ingest sign-in logs into Microsoft Sentinel. Any Azure AD license (Free/O365/P1/P2) is sufficient to ingest the other log types. Additional per-gigabyte charges may apply for Azure Monitor (Log Analytics) and Microsoft Sentinel. -- Your user must be assigned the [Microsoft Sentinel Contributor](/azure/role-based-access-control/built-in-roles#microsoft-sentinel-contributor) role on the workspace.
+- Your user must be assigned the [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) role on the workspace.
-- Your user must be assigned the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Security Administrator](/azure/active-directory/roles/permissions-reference#security-administrator) roles on the tenant you want to stream the logs from.
+- Your user must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles on the tenant you want to stream the logs from.
- Your user must have read and write permissions to the Azure AD diagnostic settings in order to be able to see the connection status.
To query the Azure AD logs, enter the relevant table name at the top of the quer
## Next steps In this document, you learned how to connect Azure Active Directory to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data and potential threats](get-visibility.md).-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-microsoft-365-defender.md
For more information about incident integration and advanced hunting event colle
- You must have a valid license for Microsoft 365 Defender, as described in [Microsoft 365 Defender prerequisites](/microsoft-365/security/mtp/prerequisites). -- Your user must be assigned the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Security Administrator](/azure/active-directory/roles/permissions-reference#security-administrator) roles on the tenant you want to stream the logs from.
+- Your user must be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles on the tenant you want to stream the logs from.
- Your user must have read and write permissions on your Microsoft Sentinel workspace.
In the **Next steps** tab, you'll find some useful workbooks, sample queries,
In this document, you learned how to integrate Microsoft 365 Defender incidents, and advanced hunting event data from Microsoft Defender for Endpoint and Defender for Office 365, into Microsoft Sentinel, using the Microsoft 365 Defender connector. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md).-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
If your DNS events don't show up in Microsoft Sentinel:
1. In the **Configuration** area, change any of the settings and save your changes. Change your settings back if you need to, and then save your changes again. 1. Check your Azure DNS Analytics to make sure that your events and queries display properly.
-For more information, see [Gather insights about your DNS infrastructure with the DNS Analytics Preview solution](/azure/azure-monitor/insights/dns-analytics).
+For more information, see [Gather insights about your DNS infrastructure with the DNS Analytics Preview solution](../azure-monitor/insights/dns-analytics.md).
## Windows Forwarded Events (Preview)
You can find the value of your workspace ID on the ZScaler Private Access connec
For more information, see: - [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)-- [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md)
+- [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md)
sentinel Ingestion Delay https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ingestion-delay.md
For more information, see:
- [Customize alert details in Azure Sentinel](customize-alert-details.md) - [Manage template versions for your scheduled analytics rules in Azure Sentinel](manage-analytics-rule-templates.md) - [Use the health monitoring workbook](monitor-data-connector-health.md)-- [Log data ingestion time in Azure Monitor](/azure/azure-monitor/logs/data-ingestion-time)
+- [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md)
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/iot-solution.md
> The *Microsoft Sentinel Data connector for Microsoft Defender for IoT* and the *IoT OT Threat Monitoring with Defender for IoT* solution are in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
-[Microsoft Defender for IoT](/azure/defender-for-iot/) enables you to secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+[Microsoft Defender for IoT](../defender-for-iot/index.yml) enables you to secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
Microsoft Sentinel and Microsoft Defender for IoT help to bridge the gap between IT and OT security challenges, and to empower SOC teams with out-of-the-box capabilities to efficiently and effectively detect and respond to OT threats. The integration between Microsoft Defender for IoT and Microsoft Sentinel helps organizations to quickly detect multistage attacks, which often cross IT and OT boundaries.
Before you start, make sure you have the following requirements on your workspac
1. Under **Microsoft Defender for IoT**, select **Enable Microsoft Defender for IoT**.
-For more information, see [Permissions in Microsoft Sentinel](roles.md) and [Quickstart: Get started with Defender for IoT](/azure/defender-for-iot/organizations/getting-started).
+For more information, see [Permissions in Microsoft Sentinel](roles.md) and [Quickstart: Get started with Defender for IoT](../defender-for-iot/organizations/getting-started.md).
> [!IMPORTANT] > Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
View Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
> [!NOTE] > The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics. >
-> For more information, see [Log queries overview](/azure/azure-monitor/logs/log-query-overview) in the Azure Monitor documentation and the [Write your first KQL query](/learn/modules/write-first-query-kusto-query-language/) Learn module.
+> For more information, see [Log queries overview](../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/learn/modules/write-first-query-kusto-query-language/) Learn module.
> ## Install the Defender for IoT solution
This playbook opens a ticket in ServiceNow each time a new Engineering Workstati
For more information, see: -- [Microsoft Defender for IoT documentation](/azure/defender-for-iot/)
+- [Microsoft Defender for IoT documentation](../defender-for-iot/index.yml)
- [Microsoft Defender for IoT solution](sentinel-solutions-catalog.md#microsoft)-- [Microsoft Defender for IoT data connector](data-connectors-reference.md#microsoft-defender-for-iot)
+- [Microsoft Defender for IoT data connector](data-connectors-reference.md#microsoft-defender-for-iot)
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/monitor-data-connector-health.md
Title: Monitor the health of your data connectors with this Microsoft Sentinel workbook | Microsoft Docs
-description: Use the Health Monitoring workbook to keep track of your data connectors' connectivity and performance.
-
+ Title: Monitor the health of your Microsoft Sentinel data connectors | Microsoft Docs
+description: Use the SentinelHealth data table and the Health Monitoring workbook to keep track of your data connectors' connectivity and performance.
+ Previously updated : 11/09/2021 Last updated : 12/30/2021
-# Monitor the health of your data connectors with this Microsoft Sentinel workbook
+
+# Monitor the health of your data connectors
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-The **Data connectors health monitoring workbook** allows you to keep track of your data connectors' health, connectivity, and performance, from within Microsoft Sentinel. The workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
+After you've configured and connected your Microsoft Sentinel workspace to your data connectors, you'll want to monitor your connector health and view any service or data source issues, such as authentication or throttling problems.
+
+You might also want to configure notifications of health drifts for relevant stakeholders who can take action. For example, configure email messages, Microsoft Teams messages, new tickets in your ticketing system, and so on.
+
+This article describes how to use the following features, which allow you to keep track of your data connectors' health, connectivity, and performance from within Microsoft Sentinel:
+
+- **Data connectors health monitoring workbook**. This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
+
+- ***SentinelHealth* data table**. (Public preview) Provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions.
+
+ > [!NOTE]
+ > The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).
+ >
+ ## Use the health monitoring workbook
The **Data connectors health monitoring workbook** allows you to keep track of y
There are three tabbed sections in this workbook:
-1. The **Overview** tab shows the general status of data ingestion in the selected workspace: volume measures, EPS rates, and time last log received.
+- The **Overview** tab shows the general status of data ingestion in the selected workspace: volume measures, EPS rates, and time last log received.
-1. The **Data collection anomalies** tab will help you to detect anomalies in the data collection process, by table and data source. Each tab presents anomalies for a particular table (the **General** tab includes a collection of tables). The anomalies are calculated using the **series_decompose_anomalies()** function that returns an **anomaly score**. [Learn more about this function](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction?WT.mc_id=Portal-fx). Set the following parameters for the function to evaluate:
+- The **Data collection anomalies** tab will help you to detect anomalies in the data collection process, by table and data source. Each tab presents anomalies for a particular table (the **General** tab includes a collection of tables). The anomalies are calculated using the **series_decompose_anomalies()** function that returns an **anomaly score**. [Learn more about this function](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction?WT.mc_id=Portal-fx). Set the following parameters for the function to evaluate:
- **AnomaliesTimeRange**: This time picker applies only to the data collection anomalies view. - **SampleInterval**: The time interval in which data is sampled in the given time range. The anomaly score is calculated only on the last interval's data.
There are three tabbed sections in this workbook:
:::image type="content" source="media/monitor-data-connector-health/data-health-workbook-2.png" alt-text="data connector health monitoring workbook anomalies page" lightbox="media/monitor-data-connector-health/data-health-workbook-2.png":::
-1. The **Agent info** tab shows you information about the health of the Log Analytics agents installed on your various machines, whether Azure VM, other cloud VM, on-premises VM, or physical. You can monitor the following:
+- The **Agent info** tab shows you information about the health of the Log Analytics agents installed on your various machines, whether Azure VM, other cloud VM, on-premises VM, or physical. You can monitor the following:
- System location
There are three tabbed sections in this workbook:
:::image type="content" source="media/monitor-data-connector-health/data-health-workbook-3.png" alt-text="data connector health monitoring workbook agent info page" lightbox="media/monitor-data-connector-health/data-health-workbook-3.png":::
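+As a rough sketch of the scoring logic behind the **Data collection anomalies** tab, the following query applies **series_decompose_anomalies()** to an hourly ingestion time series. The `Usage` table and the `1.5` threshold here are illustrative assumptions, not the workbook's exact implementation:
+
+```kusto
+// Illustrative only: score hourly ingestion volume for anomalies.
+// The Usage table and the 1.5 threshold are example choices.
+Usage
+| make-series IngestionVolume = sum(Quantity) on TimeGenerated from ago(14d) to now() step 1h
+| extend (Anomalies, AnomalyScore, Baseline) = series_decompose_anomalies(IngestionVolume, 1.5)
+```
+
+The function returns an anomaly flag, an anomaly score, and a baseline series, which you can render or filter on as needed.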
+## Use the SentinelHealth data table (Public preview)
+
+To get data connector health data from the *SentinelHealth* data table, you must first [turn on the Microsoft Sentinel health feature](#turn-on-microsoft-sentinel-health-for-your-workspace) for your workspace.
+
+Once the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for your data connectors.
+
+> [!TIP]
+> To configure the retention time for your health events, see the [Log Analytics retention configuration documentation](/azure/azure-monitor/logs/manage-cost-storage).
+>
+
+> [!IMPORTANT]
+>
+> The SentinelHealth data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+### Supported data connectors
+
+The *SentinelHealth* data table is currently supported only for the following data connectors:
+
+- [Amazon Web Services (CloudTrail)](connect-aws.md)
+- [Dynamics 365](connect-dynamics-365.md)
+- [Office 365](connect-office-365.md)
+- [Office ATP](connect-microsoft-defender-advanced-threat-protection.md)
+- [Threat Intelligence - TAXII](connect-threat-intelligence-taxii.md)
+- [Threat Intelligence Platforms](connect-threat-intelligence-tip.md)
+
+### Turn on Microsoft Sentinel health for your workspace
+
+1. In Microsoft Sentinel, under the **Configuration** menu on the left, select **Settings** and expand the **Health** section.
+
+1. Select **Configure Diagnostic Settings** and create a new diagnostic setting.
+
+ - In the **Diagnostic setting name** field, enter a meaningful name for your setting.
+
+ - In the **Category details** column, select **DataConnectors**.
+
+ - Under **Destination details**, select **Send to Log Analytics workspace**, and select your subscription and workspace from the dropdown menus.
+
+1. Select **Save** to save your new setting.
+
+The *SentinelHealth* data table is created at the first success or failure event generated for your data connectors.
+
+### Access the *SentinelHealth* table
+
+In the Microsoft Sentinel **Logs** page, run a query on the *SentinelHealth* table. For example:
+
+```kusto
+SentinelHealth
+ | take 20
+```
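+
+For a slightly richer view, the following sketch (using the columns documented later in this article) summarizes recent health events per connector:
+
+```kusto
+SentinelHealth
+| where TimeGenerated > ago(7d)
+| summarize Events = count() by SentinelResourceName, Status
+| order by Events desc
+```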
+
+### Understanding SentinelHealth table events
+
+The following types of health events are logged in the *SentinelHealth* table:
+
+- **Data fetch status change**. Logged once an hour as long as a data connector's status remains stable, with either continuous success or continuous failure events. As long as the status doesn't change, logging only hourly prevents redundant auditing and keeps the table size down. If the data connector has continuous failures, additional details about the failures are included in the *ExtendedProperties* column.
+
+    If the data connector's status changes, whether from success to failure, from failure to success, or in its failure reasons, the event is logged immediately so that your team can take proactive and immediate action.
+
+ Potentially transient errors, such as source service throttling, are logged only after they've continued for more than 60 minutes. These 60 minutes allow Microsoft Sentinel to overcome a transient issue in the backend and catch up with the data, without requiring any user action. Errors that are definitely not transient are logged immediately.
+
+- **Failure summary**. Logged once an hour, per connector, per workspace, with an aggregated failure summary. Failure summary events are created only when the connector has experienced polling errors during the given hour. They contain any extra details provided in the *ExtendedProperties* column, such as the time period for which the connector's source platform was queried, and a distinct list of failures encountered during the time period.
+
+For more information, see [SentinelHealth table columns schema](#sentinelhealth-table-columns-schema).
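+
+For example, the failure summary events described above can be listed with a query like the following sketch, which uses the columns from the schema later in this article:
+
+```kusto
+SentinelHealth
+| where TimeGenerated > ago(1d)
+| where OperationName == 'Failure summary'
+| project TimeGenerated, SentinelResourceName, Description, ExtendedProperties
+```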
+
+### Run queries to detect health drifts
+
+Create queries on the *SentinelHealth* table to help you detect health drifts in your data connectors. For example:
+
+**Detect latest failure events per connector**:
+
+```kusto
+SentinelHealth
+| where TimeGenerated > ago(3d)
+| where OperationName == 'Data fetch status change'
+| where Status in ('Success', 'Failure')
+| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId
+| where Status == 'Failure'
+```
+
+**Detect connectors with changes from fail to success state**:
+
+```kusto
+let latestStatus = SentinelHealth
+| where TimeGenerated > ago(12h)
+| where OperationName == 'Data fetch status change'
+| where Status in ('Success', 'Failure')
+| project TimeGenerated, SentinelResourceName, SentinelResourceId, LastStatus = Status
+| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
+let nextToLatestStatus = SentinelHealth
+| where TimeGenerated > ago(12h)
+| where OperationName == 'Data fetch status change'
+| where Status in ('Success', 'Failure')
+| join kind = leftanti (latestStatus) on SentinelResourceName, SentinelResourceId, TimeGenerated
+| project TimeGenerated, SentinelResourceName, SentinelResourceId, NextToLastStatus = Status
+| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
+latestStatus
+| join kind=inner (nextToLatestStatus) on SentinelResourceName, SentinelResourceId
+| where NextToLastStatus == 'Failure' and LastStatus == 'Success'
+```
+
+**Detect connectors with changes from success to fail state**:
+
+```kusto
+let latestStatus = SentinelHealth
+| where TimeGenerated > ago(12h)
+| where OperationName == 'Data fetch status change'
+| where Status in ('Success', 'Failure')
+| project TimeGenerated, SentinelResourceName, SentinelResourceId, LastStatus = Status
+| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
+let nextToLatestStatus = SentinelHealth
+| where TimeGenerated > ago(12h)
+| where OperationName == 'Data fetch status change'
+| where Status in ('Success', 'Failure')
+| join kind = leftanti (latestStatus) on SentinelResourceName, SentinelResourceId, TimeGenerated
+| project TimeGenerated, SentinelResourceName, SentinelResourceId, NextToLastStatus = Status
+| summarize TimeGenerated = arg_max(TimeGenerated,*) by SentinelResourceName, SentinelResourceId;
+latestStatus
+| join kind=inner (nextToLatestStatus) on SentinelResourceName, SentinelResourceId
+| where NextToLastStatus == 'Success' and LastStatus == 'Failure'
+```
+
+### Configure alerts and automated actions for health issues
+
+While you can use Microsoft Sentinel [analytics rules](automate-incident-handling-with-automation-rules.md) to configure automation in Microsoft Sentinel logs, if you want to be notified of and take immediate action on health drifts in your data connectors, we recommend that you use [Azure Monitor alert rules](/azure/azure-monitor/alerts/alerts-overview).
+
+For example:
+
+1. In an Azure Monitor alert rule, select your Microsoft Sentinel workspace as the rule scope, and **Custom log search** as the first condition.
+
+1. Customize the alert logic as needed, such as frequency or lookback duration, and then use [queries](#run-queries-to-detect-health-drifts) to search for health drifts.
+
+1. For the rule actions, select an existing action group or create a new one as needed to configure push notifications or other automated actions such as triggering a Logic App, Webhook, or Azure Function in your system.
+
+For more information, see [Azure Monitor alerts overview](/azure/azure-monitor/alerts/alerts-overview) and [Azure Monitor alerts log](/azure/azure-monitor/alerts/alerts-log).
+
+### SentinelHealth table columns schema
+
+The following table describes the columns and data generated in the *SentinelHealth* data table:
+
+| ColumnName | ColumnType | Description|
+| -- | -- | |
+| **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
+| **TimeGenerated** | Datetime | The time at which the health event occurred. |
+| <a name="operationname"></a>**OperationName** | String | The health operation. One of the following values: <br><br>- `Data fetch status change` for health or success indications<br>- `Failure summary` for aggregated health summaries. <br><br>For more information, see [Understanding SentinelHealth table events](#understanding-sentinelhealth-table-events). |
+| <a name="sentinelresourceid"></a>**SentinelResourceId** | String | The unique identifier of the Microsoft Sentinel workspace and the associated connector on which the health event occurred. |
+| **SentinelResourceName** | String | The data connector name. |
+| <a name="status"></a>**Status** | String | Indicates `Success` or `Failure` for the `Data fetch status change` [OperationName](#operationname), and `Informational` for the `Failure summary` [OperationName](#operationname). |
+| **Description** | String | Describes the operation, including extended data as needed. For example, for failures, this column might indicate the failure reason. |
+| **WorkspaceId** | String | The workspace GUID on which the health issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid) column. |
+| **SentinelResourceType** | String |The Microsoft Sentinel resource type being monitored: `Data connector`|
+| **SentinelResourceKind** | String | The type of data connector being monitored, such as `Office365`. |
+| **RecordId** | String | A unique identifier for the record that can be shared with the support team for better correlation as needed. |
+| **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname) value and the [Status](#status) of the event: <br><br>- For `Data fetch status change` events with a success indicator, the bag contains a 'DestinationTable' property to indicate where data from this connector is expected to land. For failures, the contents vary depending on the failure type. |
+| **Type** | String | `SentinelHealth` |
+| | | |
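+
+For example, for successful `Data fetch status change` events, the destination table recorded in **ExtendedProperties** can be extracted with a query like this sketch:
+
+```kusto
+SentinelHealth
+| where OperationName == 'Data fetch status change' and Status == 'Success'
+| extend DestinationTable = tostring(ExtendedProperties.DestinationTable)
+| project TimeGenerated, SentinelResourceName, DestinationTable
+```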
+ ## Next steps+ Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md), [connect data sources](connect-data-sources.md), and [get visibility into your data, and potential threats](get-visibility.md).
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/network-normalization-schema.md
The following fields are common to all network session activity logging:
| **SrcDvcIdType** | Optional | Enumerated | The type of [SrcDvcId](#srcdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the preceding list, and store the others in **SrcDvcAzureResourceId** and **SrcDvcMDEid**, respectively.<br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. | | **SrcDeviceType** | Optional | Enumerated | The type of the source device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` | | <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. Format and supported types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [SrcUserIdType](#srcuseridtype) field. If other IDs are available, we recommend that you normalize the field names to **SrcUserSid**, **SrcUserUid**, **SrcUserAadId**, **SrcUserOktaId**, and **UserAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: S-1-12 |
-| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | Enumerated | The type of the ID stored in the [SrcUserId](#srcuserid) field. Supported values include `SID`, `UIS`, `AADID`, `OktaId`, and `AWSId`. |
+| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | Enumerated | The type of the ID stored in the [SrcUserId](#srcuserid) field. Supported values include `SID`, `UID`, `AADID`, `OktaId`, and `AWSId`. |
| <a name="srcusername"></a>**SrcUsername** | Optional | String | The Source username, including domain information when available. Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `johndow@contoso.com`<br>- **Windows**: `Contoso\johndow`<br>- **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `johndow`. Use the Simple form only if domain information isn't available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other IDs are available, we recommend that you normalize the field names to **SrcUserUpn**, **SrcUserWindows**, and **SrcUserDn**.<br><br>For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` | | <a name="srcusernametype"></a>**SrcUsernameType** | Optional | Enumerated | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. Supported values are `UPN`, `Windows`, `DN`, and `Simple`. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `Windows` | | **SrcUserType** | Optional | Enumerated | The type of Actor. Allowed values are:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. |
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-develop-parsers.md
To deploy a large number of parsers, we recommend using parser ARM templates, as
1. Use the [ASIM Yaml to ARM template converter](https://aka.ms/ASimYaml2ARM) to convert your YAML file to an ARM template.
-1. Deploy your template using the [Azure portal](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal#edit-and-deploy-the-template) or [PowerShell](/azure/azure-resource-manager/templates/deploy-powershell).
+1. Deploy your template using the [Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md#edit-and-deploy-the-template) or [PowerShell](../azure-resource-manager/templates/deploy-powershell.md).
-You can also combine multiple templates to a single deploy process using [linked templates](/azure/azure-resource-manager/templates/linked-templates?tabs=azure-powershell#linked-template)
+You can also combine multiple templates into a single deployment process by using [linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#linked-template).
> [!TIP] > ARM templates can combine different resources, so parsers can be deployed alongside connectors, analytic rules, or watchlists, to name a few useful options. For example, your parser can reference a watchlist deployed alongside it.
Learn more about the ASIM in general:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM) - [Advanced SIEM Information Model (ASIM) overview](normalization.md) - [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)-- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-manage-parsers.md
Microsoft Sentinel users cannot edit built-in unifying parsers. Instead, use the
To add a custom parser, insert a line to the custom unifying parser to reference the new, custom parser.
-Make sure to add both a filtering custom parser and a parameter-less custom parser. To learn more about how to edit parsers, refer to the document [Functions in Azure Monitor log queries](/azure/azure-monitor/logs/functions#edit-a-function).
+Make sure to add both a filtering custom parser and a parameter-less custom parser. To learn more about how to edit parsers, refer to the document [Functions in Azure Monitor log queries](../azure-monitor/logs/functions.md#edit-a-function).
The syntax of the line to add is different for each schema:
Learn more about the ASIM in general:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
-- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/notebooks.md
Select one of the following tabs, depending on whether you'll be using a public
# [Private endpoint](#tab/private-endpoint)
-The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure ML workspace](/azure/machine-learning/tutorial-create-secure-workspace).
+The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure ML workspace](../machine-learning/tutorial-create-secure-workspace.md).
1. Create a VM jump box within a VNet. Since the VNet restricts access from the public internet, the jump box is used as a way to connect to resources behind the VNet.
-1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](/azure/bastion/bastion-overview) to access the VM.
+1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](../bastion/bastion-overview.md) to access the VM.
1. In Microsoft Sentinel, select **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
The steps in this procedure reference specific articles in the Azure Machine Lea
It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
-1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same VNet that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](/azure/machine-learning/how-to-create-manage-compute-instance?tabs=python).
+1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same VNet that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](../machine-learning/how-to-create-manage-compute-instance.md?tabs=python).
-1. Configure your network traffic to access Azure ML from behind a firewall. For more information, see [Configure inbound and outbound network traffic](/azure/machine-learning/how-to-access-azureml-behind-firewall?tabs=ipaddress%2Cpublic).
+1. Configure your network traffic to access Azure ML from behind a firewall. For more information, see [Configure inbound and outbound network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md?tabs=ipaddress%2cpublic).
Continue with one of the following sets of steps:
1. For each resource, including both **privatelink.api.azureml.ms** and **privatelink.notebooks.azure.ms**, add a virtual network link.
- Select the resource > **Virtual network links** > **Add**. For more information, see [Link the virtual network](/azure/dns/private-dns-getstarted-portal).
+ Select the resource > **Virtual network links** > **Add**. For more information, see [Link the virtual network](../dns/private-dns-getstarted-portal.md).
For more information, see:
-- [Network traffic flow when using a secured workspace](/azure/machine-learning/concept-secure-network-traffic-flow)
-- [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](/azure/machine-learning/how-to-network-security-overview)
+- [Network traffic flow when using a secured workspace](../machine-learning/concept-secure-network-traffic-flow.md)
+- [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](../machine-learning/how-to-network-security-overview.md)
For more information, see:
- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94)
- [Proactively hunt for threats](hunting.md)
- [Use bookmarks to save interesting information while hunting](bookmarks.md)
-- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
+- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
sentinel Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/powerbi.md
Refresh your Power BI report on a schedule, so updated data always appears in th
For more information, see:
-- [Azure Monitor service limits](/azure/azure-monitor/service-limits)
+- [Azure Monitor service limits](../azure-monitor/service-limits.md)
- [Import Azure Monitor log data into Power BI](../azure-monitor/logs/log-powerbi.md)
-- [Power Query M formula language](/powerquery-m/)
+- [Power Query M formula language](/powerquery-m/)
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/quickstart-onboard.md
After you connect your data sources, choose from a gallery of expertly created w
- **Log Analytics workspace**. Learn how to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md).
By default, you may have [30 days retention](/azure/azure-monitor/logs/manage-cost-storage#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use the full extent of Microsoft Sentinel functionality, raise this to 90 days. For more information, see [Change the retention period](/azure/azure-monitor/logs/manage-cost-storage#change-the-data-retention-period).
+ By default, you may have [30 days retention](../azure-monitor/logs/manage-cost-storage.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use the full extent of Microsoft Sentinel functionality, raise this to 90 days. For more information, see [Change the retention period](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period).
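Raising the retention described above can also be done outside the portal; a minimal Azure CLI sketch, where the resource group and workspace names are hypothetical:

```shell
# Set Log Analytics workspace data retention to 90 days.
# "MyRg" and "MySentinelWorkspace" are placeholder names for this sketch.
az monitor log-analytics workspace update \
  --resource-group MyRg \
  --workspace-name MySentinelWorkspace \
  --retention-time 90
```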
- **Permissions**:
For more information, see:
- **Get started**:
  - [Get started with Microsoft Sentinel](get-visibility.md)
  - [Create custom analytics rules to detect threats](detect-threats-custom.md)
- - [Connect your external solution using Common Event Format](connect-common-event-format.md)
+ - [Connect your external solution using Common Event Format](connect-common-event-format.md)
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/resources.md
Microsoft Sentinel uses Azure Monitor Log Analytics's Kusto Query Language (KQL)
## Microsoft Sentinel templates for data to monitor
-The [Azure Active Directory Security Operations Guide](/azure/active-directory/fundamentals/security-operations-introduction) includes specific guidance and knowledge about data that's important to monitor for security purposes, for several operational areas.
+The [Azure Active Directory Security Operations Guide](../active-directory/fundamentals/security-operations-introduction.md) includes specific guidance and knowledge about data that's important to monitor for security purposes, for several operational areas.
-In each article, check for sections named [Things to monitor](/azure/active-directory/fundamentals/security-operations-privileged-accounts#things-to-monitor) for lists of events that we recommend alerting on and investigating, as well as analytics rule templates to deploy directly to Microsoft Sentinel.
+In each article, check for sections named [Things to monitor](../active-directory/fundamentals/security-operations-privileged-accounts.md#things-to-monitor) for lists of events that we recommend alerting on and investigating, as well as analytics rule templates to deploy directly to Microsoft Sentinel.
## Learn more about creating automation
Download sample content from the private community GitHub repository to create c
> [Get certified!](/learn/paths/security-ops-sentinel/)
> [!div class="nextstepaction"]
-> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
+> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
sentinel Store Logs In Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/store-logs-in-azure-data-explorer.md
The following image shows a sample flow of exported data into an Azure Storage,
- Create a data pipeline with a copy activity, based on when the blob properties were last modified.
- This step requires an extra understanding of Azure Data Factory. For more information, see [Copy activity in Azure Data Factory and Azure Synapse Analytics](/azure/data-factory/copy-activity-overview).
+ This step requires an extra understanding of Azure Data Factory. For more information, see [Copy activity in Azure Data Factory and Azure Synapse Analytics](../data-factory/copy-activity-overview.md).
Regardless of where you store your data, continue hunting and investigating usin
For more information, see:
- [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md)
-- [Hunt for threats with Microsoft Sentinel](hunting.md)
+- [Hunt for threats with Microsoft Sentinel](hunting.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 12/01/2021
Last updated : 01/13/2022
If you're looking for items older than six months, you'll find them in the [Arch
## January 2022
+- [SentinelHealth data table (Public preview)](#sentinelhealth-data-table-public-preview)
- [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view)
- [Kusto Query Language workbook and tutorial](#kusto-query-language-workbook-and-tutorial)
+
+### SentinelHealth data table (Public preview)
+
+Microsoft Sentinel now provides the **SentinelHealth** data table to help you monitor your connector health, providing insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states. Use this data to create alerts and other automated actions, such as Microsoft Teams messages, new tickets in a ticketing system, and so on.
+
+Turn on the Microsoft Sentinel health feature for your workspace in order to have the **SentinelHealth** data table created at the next success or failure event generated for supported data connectors.
+
+For more information, see [Use the SentinelHealth data table (Public preview)](monitor-data-connector-health.md#use-the-sentinelhealth-data-table-public-preview).
+
### More workspaces supported for Multiple Workspace View

Now, instead of being limited to 10 workspaces in Microsoft Sentinel's [Multiple Workspace View](multiple-workspace-view.md), you can view data from up to 30 workspaces simultaneously.
For more information, see:
- [The need to use multiple Microsoft Sentinel workspaces](extend-sentinel-across-workspaces-tenants.md#the-need-to-use-multiple-microsoft-sentinel-workspaces)
- [Work with incidents in many workspaces at once](multiple-workspace-view.md)
- [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md)
+
### Kusto Query Language workbook and tutorial

Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visualize data, as the basis for detection rules, workbooks, hunting, and more.
For more information, see:
## September 2021
+- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
+
- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
+### Data connector health enhancements (Public preview)
+
+Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you've [turned on the Azure Sentinel health feature](monitor-data-connector-health.md#turn-on-microsoft-sentinel-health-for-your-workspace) in your Azure Sentinel workspace, at the first success or failure health event that's generated.
+
+For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).
+
+> [!NOTE]
+> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
+>
+
+
### New in docs: scaling data connector documentation

As we continue to add more and more built-in data connectors for Azure Sentinel, we've reorganized our data connector documentation to reflect this scaling.
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-best-practices-networking.md
Maximize your Virtual Machine's performance with Accelerated Networking, by decl
    }
  ]
```
-Service Fabric cluster can be provisioned on [Linux with Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli), and [Windows with Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-powershell).
+Service Fabric cluster can be provisioned on [Linux with Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md), and [Windows with Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md).
Accelerated Networking is supported for Azure Virtual Machine Series SKUs: D/DSv2, D/DSv3, E/ESv3, F/FS, FSv2, and Ms/Mms. Accelerated Networking was tested successfully using the Standard_DS8_v3 SKU on 01/23/2019 for a Service Fabric Windows Cluster, and using Standard_DS12_v2 on 01/29/2019 for a Service Fabric Linux Cluster. Please note that Accelerated Networking requires at least 4 vCPUs.
-To enable Accelerated Networking on an existing Service Fabric cluster, you need to first [Scale a Service Fabric cluster out by adding a Virtual Machine Scale Set](/azure/service-fabric/virtual-machine-scale-set-scale-node-type-scale-out), to perform the following:
+To enable Accelerated Networking on an existing Service Fabric cluster, you need to first [Scale a Service Fabric cluster out by adding a Virtual Machine Scale Set](./virtual-machine-scale-set-scale-node-type-scale-out.md), to perform the following:
1. Provision a NodeType with Accelerated Networking enabled
2. Migrate your services and their state to the provisioned NodeType with Accelerated Networking enabled
-Scaling out infrastructure is required to enable Accelerated Networking on an existing cluster, because enabling Accelerated Networking in place would cause downtime, as it requires all virtual machines in an availability set be [stop and deallocate before enabling Accelerated networking on any existing NIC](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Scaling out infrastructure is required to enable Accelerated Networking on an existing cluster, because enabling Accelerated Networking in place would cause downtime, as it requires all virtual machines in an availability set be [stop and deallocate before enabling Accelerated networking on any existing NIC](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
## Cluster Networking
Scaling out infrastructure is required to enable Accelerated Networking on an ex
## Network Security Rules
-The network security group rules described below are the recommended minimum for a typical configuration. We also include what rules are mandatory for an operational cluster if optional rules are not desired. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade) is recommended for Windows Updates. If you use [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the ServiceTag [AzureUpdateDelivery](/azure/virtual-network/service-tags-overview) is needed.
+The network security group rules described below are the recommended minimum for a typical configuration. We also include what rules are mandatory for an operational cluster if optional rules are not desired. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is recommended for Windows Updates. If you use [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the ServiceTag [AzureUpdateDelivery](../virtual-network/service-tags-overview.md) is needed.
-The rules marked as mandatory are needed for a proper operational cluster. Described is the minimum for typical configurations. It also enables a complete security lockdown with network peering and jumpbox concepts like Azure Bastion. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade) is the recommendation for Windows Updates, for the [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the Virtual Network Service Tag [AzureUpdateDelivery](/azure/virtual-network/service-tags-overview) is needed.
+The rules marked as mandatory are needed for a proper operational cluster. Described is the minimum for typical configurations. It also enables a complete security lockdown with network peering and jumpbox concepts like Azure Bastion. Failure to open the mandatory ports or approving the IP/URL will prevent proper operation of the cluster and may not be supported. The [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommendation for Windows Updates, for the [Patch Orchestration Application](service-fabric-patch-orchestration-application.md) an additional rule with the Virtual Network Service Tag [AzureUpdateDelivery](../virtual-network/service-tags-overview.md) is needed.
### Inbound

|Priority |Name |Port |Protocol |Source |Destination |Action | Mandatory
More information about the inbound security rules:
* **Azure portal**. This port is used by the Service Fabric Resource Provider to query information about your cluster in order to display in the Azure Management Portal. If this port is not accessible from the Service Fabric Resource Provider then you will see a message such as 'Nodes Not Found' or 'UpgradeServiceNotReachable' in the Azure portal and your node and application list will appear empty. This means that if you wish to have visibility of your cluster in the Azure Management Portal then your load balancer must expose a public IP address and your NSG must allow incoming 19080 traffic.
-* **Client API**. The client connection endpoint for APIs used by PowerShell. Please open the port for the integration with Azure DevOps by using [AzureDevOps](/azure/virtual-network/service-tags-overview) as Virtual Network Service Tag.
+* **Client API**. The client connection endpoint for APIs used by PowerShell. Please open the port for the integration with Azure DevOps by using [AzureDevOps](../virtual-network/service-tags-overview.md) as Virtual Network Service Tag.
-* **SFX + Client API**. This port is used by Service Fabric Explorer to browse and manage your cluster. In the same way it's used by most common APIs like REST/PowerShell (Microsoft.ServiceFabric.PowerShell.Http)/CLI/.NET. This port is recommended for extended management operations from the Service Fabric Resource Provider to guarantee higher reliability. Please open the port for the integration with Azure API Management by using [ApiManagement](/azure/virtual-network/service-tags-overview) as Virtual Network Service Tag.
+* **SFX + Client API**. This port is used by Service Fabric Explorer to browse and manage your cluster. In the same way it's used by most common APIs like REST/PowerShell (Microsoft.ServiceFabric.PowerShell.Http)/CLI/.NET. This port is recommended for extended management operations from the Service Fabric Resource Provider to guarantee higher reliability. Please open the port for the integration with Azure API Management by using [ApiManagement](../virtual-network/service-tags-overview.md) as Virtual Network Service Tag.
* **Cluster**. Used for inter-node communication; should never be blocked.
More information about the outbound security rules:
* **Download Binaries**. The upgrade service uses the address download.microsoft.com to get the binaries; this is needed for setup, re-image, and runtime upgrades. In the scenario of an "internal only" load balancer, an [additional external load balancer](service-fabric-patterns-networking.md#internal-and-external-load-balancer) must be added with a rule allowing outbound traffic for port 443. Optionally, this port can be blocked after a successful setup, but in that case the upgrade package must be distributed to the nodes, or the port has to be opened for a short period of time and a manual upgrade performed afterwards.
-Use Azure Firewall with [NSG flow log](/azure/network-watcher/network-watcher-nsg-flow-logging-overview) and [traffic analytics](/azure/network-watcher/traffic-analytics) to track connectivity issues. The ARM template [Service Fabric with NSG](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG) is a good example to start.
+Use Azure Firewall with [NSG flow log](../network-watcher/network-watcher-nsg-flow-logging-overview.md) and [traffic analytics](../network-watcher/traffic-analytics.md) to track connectivity issues. The ARM template [Service Fabric with NSG](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/5-VM-Windows-1-NodeTypes-Secure-NSG) is a good example to start.
> [!NOTE]
-> Please note that the default network security rules should not be overwritten as they ensure the communication between the nodes. [Network Security Group - How it works](/azure/virtual-network/network-security-group-how-it-works). Another example, outbound connectivity on port 80 is needed to do the Certificate Revocation List check.
+> Please note that the default network security rules should not be overwritten as they ensure the communication between the nodes. [Network Security Group - How it works](../virtual-network/network-security-group-how-it-works.md). Another example, outbound connectivity on port 80 is needed to do the Certificate Revocation List check.
## Application Networking
Use Azure Firewall with [NSG flow log](/azure/network-watcher/network-watcher-ns
* Create a cluster on VMs or computers running Linux: [Create a Linux cluster](service-fabric-cluster-creation-via-portal.md)
* Learn about [Service Fabric support options](service-fabric-support.md)
-[NSGSetup]: ./media/service-fabric-best-practices/service-fabric-nsg-rules.png
+[NSGSetup]: ./media/service-fabric-best-practices/service-fabric-nsg-rules.png
site-recovery Site Recovery Test Failover To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-test-failover-to-azure.md
This procedure describes how to run a test failover for a recovery plan. If you
4. If you're failing over to Azure and data encryption is enabled, in **Encryption Key**, select the certificate that was issued when you enabled encryption during Provider installation. You can ignore this step if encryption isn't enabled.
5. Track failover progress on the **Jobs** tab. You should be able to see the test replica machine in the Azure portal.
6. To initiate an RDP connection to the Azure VM, you need to [add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) on the network interface of the failed over VM.
- If you don't want to add a public IP address to the virtual machine, check the recommended alternatives [here](https://docs.microsoft.com/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-networking#best-practice-control-public-ip-addresses).
+ If you don't want to add a public IP address to the virtual machine, check the recommended alternatives [here](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-networking#best-practice-control-public-ip-addresses).
7. When everything is working as expected, click **Cleanup test failover**. This deletes the VMs that were created during test failover.
8. In **Notes**, record and save any observations associated with the test failover.
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/enterprise-edge.md
Use Azure Static Web Apps enterprise-grade edge (Preview) to enable faster page
Key features of Azure Static Web Apps enterprise-grade edge include:
-* Global presence in 118+ [edge locations](/azure/frontdoor/edge-locations-by-region) across 100 metro cities.
+* Global presence in 118+ [edge locations](../frontdoor/edge-locations-by-region.md) across 100 metro cities.
-* Caching assets at the [edge](/azure/frontdoor/front-door-caching).
+* Caching assets at the [edge](../frontdoor/front-door-caching.md).
-* Proactive protection against [Distributed Denial of Service (DDoS) attacks](/azure/frontdoor/front-door-ddos).
+* Proactive protection against [Distributed Denial of Service (DDoS) attacks](../frontdoor/front-door-ddos.md).
-* Native support of end-to-end IPv6 connectivity and [HTTP/2 protocol](/azure/frontdoor/front-door-http2.md).
+* Native support of end-to-end IPv6 connectivity and [HTTP/2 protocol](../frontdoor/front-door-http2.md).
* Optimized file compression.
az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
## Next steps

> [!div class="nextstepaction"]
-> [Application configuration](configuration.md)
+> [Application configuration](configuration.md)
storage Blob Containers Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-containers-powershell.md
loop-container4
## See also

-- [Run PowerShell commands with Azure AD credentials to access blob data](/azure/storage/blobs/authorize-data-operations-powershell)
-- [Create a storage account](/azure/storage/common/storage-account-create?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal)
+- [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-powershell.md)
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-powershell.md
There are many scenarios in which blobs of different types may be copied. Exampl
For a simplified copy operation within the same storage account, use the `Copy-AzStorageBlob` cmdlet. Because the operation is copying a blob within the same storage account, it's a synchronous operation. Cross-account operations are asynchronous.
-You should consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10).
+You should consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](../common/storage-use-azcopy-v10.md).
The example below copies the **bannerphoto.png** blob from the **photos** container to the **photos** folder within the **archive** container. Both containers exist within the same storage account. The result verifies the success of the copy operation.
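The AzCopy alternative mentioned above can be sketched as follows; the storage account name and the `<SAS>` tokens are placeholders:

```shell
# Copy a single blob between containers in the same account with AzCopy v10.
# "mystorageacct" and the <SAS> tokens are placeholders for this sketch.
azcopy copy \
  'https://mystorageacct.blob.core.windows.net/photos/bannerphoto.png?<SAS>' \
  'https://mystorageacct.blob.core.windows.net/archive/photos/bannerphoto.png?<SAS>'
```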
Foreach($blob in $blobs)
## Next steps

-- [Run PowerShell commands with Azure AD credentials to access blob data](/azure/storage/blobs/authorize-data-operations-powershell)
-- [Create a storage account](/azure/storage/common/storage-account-create?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal)
-- [Manage blob containers using PowerShell](blob-containers-powershell.md)
+- [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-powershell.md)
+- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
+- [Manage blob containers using PowerShell](blob-containers-powershell.md)
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Previously updated : 11/22/2021
Last updated : 01/14/2022
When you connect to Blob Storage by using an SFTP client, you might be prompted
> [!div class="mx-tdBreakAll"]
> | Region | Host key type | SHA 256 fingerprint <sup>1</sup> | Public key |
> |||||
-> | eastus2euap | rsa-sha2-256 | `0b0IILN+fMMAZ7CZePfSVdFj14ppjACcIl4yi3hT/Rc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDE45HQiHTS8Vxs6ktkHVrDoWFYnDzTOFzVF9IE0EZp/NMVIqRSnveYyFcgWtg7AfG648DiPsEar3lHmcGKT5OxGJ7KGP6Z8Nd1HxWC75j59GDadLfkuJyFLnWuSQIyiLV9nsDgl2e/BQ4owhHZhlSUCBlsWkECBaACptS5AvWG5CQN6AQnR2L0CEEjPPUSPh6YibqHCITsCAAduH1N8S2B+xj+OqPLpEqbIUpF6aEHggMrb9/CKBsaRzN9LXXIyJJ2Rovg54bkTUDhQO80JnGzCWQvqT1JX4KSQcr0KzkzoOoPLwuQ6w0FxP3UD+zPLYi2V8MNlW3Xp86bNHoUDfhR` |
-> | eastus2euap | rsa-sha2-512 | `pv4MPlF/uF1/1qg6vUoCGCTrXyxwTvTJykicv0IIeZA` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDu64NcnMdsh2vvxfC/2PtYXRYk5IoGB1PSXkrbqov5VllVbJAF9du9V4ccHoLVppux2W1jDlFQ7E+TdOT/hnwmnQurUTAvW355LPG0MtUFPcVCEfEPbxKuv7pxCPKZAWUpMX1aLbmEjt3CX157dtKMhmOkyExLRWu4Ua65LrqpGlKovg8Pzuxc/k6Bznxmj++G3XbHv82F3UXDsXJvUOxmF6DiuDuRWBUIwLGBNJOw2/ddyan34qK2fPBUP+lPSrucinG4b+X7aJHFhTt1E6h9XBs8fYp/9SIZ6c6ftQ/ZbET66NRSS7H7D72tSFJI5lhrKCeoKU/e0GAplSEiPNLR` |
-> | eastus2euap | ecdsa-sha2-nistp256 | `V21Ku/gEEacUyR8VuG5WjVOgBfWdPVPD1KsgCpk8eqI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPd+eEm6eCdZCbpaVGZPvYmetmpOnrDsemOkj9KMmVimESN2k6I0sKNUhwntMTXGx0nPNeKWG3g/ETzKF3VsYn8=` |
-> | eastus2euap | ecdsa-sha2-nistp384 | `Yv87+z8s9fDkiluM3ZkbsgENLGe48ITr+fnuwoG2+kg` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYVtgfJ36apFiv6gIxBa/q4n08flTyA0W0cGTsN0ot59nbl6pPCrRCfSByRtzgRY+id9ZOeuZTvN8VpPsZWOSfUOwxE0/GC2c9kS0F4SrFzTALaMY6pY3/GhMrQelAmFw==` |
-> | francec | ecdsa-sha2-nistp256 | `p7eHtX2lbIqu06mDFezjRBf7SxlHokVOC+MdcpdO2bM` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKtYOFWJFlknTvpl2XpxYMrkb0ULCF+ZfwVxwDXUY3zIMANy0hmbyZ73x15EwDP3DobilK149W570x3+TAdwE7o=` |
-> | francec | ecdsa-sha2-nistp384| `kbK8Ld5FYOfa+r1PnKooDglmdzLVGBQWNqnMoYOMdGk` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF8e5s445PAyVF3kgnPP6XoBlCUW+I6HCcQcC+xRti9OTciBAQReKX9c39J15Xoa6iSWuQ0ru9ER5UzXS+bjzhXBKXOmgAcR3/XEJMonjS2++XMldlGhgt1c4hEW3QQGVQ==` |
-> | canadaeast | ecdsa-sha2-nistp256 | `ppta3xQWBvWxjkRy0CyFY6a+qB3TrFI1qoCnXnSk3cY` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLIb5mteX+Vk00D8pPmjYuYBqC9g1xdmN8e3apdsXBucC8qXx9qug7veSex0/NzkTu00kIVVtvW+4LFOvhbat5Y=` |
-> | canadaeast | ecdsa-sha2-nistp384 | `RQXlsP8rowi9ndsJe+3zOl87/O2OOpjXA/rasqLQOns` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3mWu+SY6u27HQuJq154HCTrGxVsy9axbwTdVXFvgV1h1uhpIdgAZDL55bDe7ZPmB0BPirPas/vUQyG8aGDNAZJn1iinq/umZegYb0BCDthR5bPi7SPb3h7Qf6FN4dXoA==` |
-> | usnorth | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
-> | usnorth | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
-> | canadacentral | ecdsa-sha2-nistp256 | `7QJ5hJsY84IxPMXFyL1NzG5OVNUEndWru1jNBxP26fI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGEx7ZWe5opSy1zUn4PNfmAvWmTVRRTq2bwoQ5Dibfsr1byd7IIkhD5+0P5ybtq1dEdxh9oK2IjFSQWzj9jFPY=` |
-> | canadacentral | ecdsa-sha2-nistp384 | `xqbUD0NAFshX0Cbq6XbxHOMB+9vSaQXCmv/mlHdUuiw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmGFDJBLNDi3UWwk8IMuJQXK/927uHoYVK/wLH7zI7pvtmgb9/FdXa7rix8QVTsfk8uK8wxxqyIYYApUslOtUzkpkXwW9gx7d37wiZmTjEbsvVeHq+gD7PHmXTpLS8VPQ==` |
-> | europewest | ecdsa-sha2-nistp256 | `7Lrxb5z3CnAWI8pr2LK5eFHwDCl/Gtm/fhgGwB3zscw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/ewktdeHJc4bH41ytmxvMR3ch9IOR+CQ2i2Pejbavmgy6XmkOnhpIPKVNytXRCToDysIjWt7DLVsQ1EHv/xtg=` |
-> | europewest | ecdsa-sha2-nistp384 | `UpzudqPZw1MrBiBoK/HHtLLppAZF8bFD75dK7huZQnI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDYr3fSaCAcTygFUp7MKpND4RghNd6UBjnoMB6EveRWVAiBxLTsRHNHaZ+jk3Q8kCHSEJrWKAOY4aZl78WtWcrmlWLH8gfLtcfG/sXmXka8klstLhmkCvzUXzhBclBy7w==` |
-> | switzerlandn | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
-> | switzerlandn | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
-> | australiaeast | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
-> | australiaeast | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
-> | asiaeast | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
-> | asiaeast | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
-> | germanywc | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
-> | germanywc | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
-> | europenorth | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
-> | europenorth | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
-> | uscentraleuap | ecdsa-sha2-nistp256 | `J9oxrXZ6jDR01CcDWu6xhoRAY60R1SpqbeKA4S9EjNc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPNv9UEan8fLKmcI/RK53nX+TD9Pm/RfOKVa1b/leKSByIzMBWQFwa6wxwtr/shl6zvjwT4E9uRu6TsRTYnk+AI=` |
-> | uscentraleuap | ecdsa-sha2-nistp384 | `SeX6s483/LpSdx8kIy+KWm5Bb6zy6wr3icyhq1DQydU` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGcFnKk6dlO6bG0YSPZruEp6bXZP3h6QvCA+jmxScUz7MIgufRT3lrxkrZs0RM9vp44i2HXOSowsgvVPDQMBJRF5gXsEU1Z9SrpqOUrlcyhzfy0SkaewuNM6VoAYjUn44g==` |
-> | useast2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
-> | useast2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
+> |eastus2euap | rsa-sha2-256| `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk`| `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
+> |eastus2euap | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
+> |eastus2euap | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
+> |eastus2euap | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
+> |francec | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
+> |francec | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
+> |francec | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
+> |francec | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
+> |canadaeast | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
+> |canadaeast | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
+> |canadaeast | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
+> |canadaeast | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
+> |usnorth | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
+> |usnorth | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
+> |usnorth | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
+> |usnorth | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
+> |canadacentral | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
+> |canadacentral | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmGFDJBLNDi3UWwk8IMuJQXK/927uHoYVK/wLH7zI7pvtmgb9/FdXa7rix8QVTsfk8uK8wxxqyIYYApUslOtUzkpkXwW9gx7d37wiZmTjEbsvVeHq+gD7PHmXTpLS8VPQ==` |
+> |canadacentral | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
+> |canadacentral | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
+> |europewest | ecdsa-sha2-nistp256 | `7Lrxb5z3CnAWI8pr2LK5eFHwDCl/Gtm/fhgGwB3zscw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/ewktdeHJc4bH41ytmxvMR3ch9IOR+CQ2i2Pejbavmgy6XmkOnhpIPKVNytXRCToDysIjWt7DLVsQ1EHv/xtg=` |
+> |europewest | ecdsa-sha2-nistp384 | `UpzudqPZw1MrBiBoK/HHtLLppAZF8bFD75dK7huZQnI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDYr3fSaCAcTygFUp7MKpND4RghNd6UBjnoMB6EveRWVAiBxLTsRHNHaZ+jk3Q8kCHSEJrWKAOY4aZl78WtWcrmlWLH8gfLtcfG/sXmXka8klstLhmkCvzUXzhBclBy7w==` |
+> |europewest | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
+> |europewest | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
+> |switzerlandn | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
+> |switzerlandn | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
+> |switzerlandn | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
+> |switzerlandn | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
+> |australiaeast | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
+> |australiaeast | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
+> |australiaeast | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
+> |australiaeast | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
+> |asiaeast | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
+> |asiaeast | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
+> |asiaeast | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
+> |asiaeast | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
+> |germanywc | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
+> |germanywc | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
+> |germanywc | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
+> |germanywc | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
+> |europenorth | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
+> |europenorth | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
+> |europenorth | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
+> |europenorth | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
+> |uscentraleuap | ecdsa-sha2-nistp256 | `J9oxrXZ6jDR01CcDWu6xhoRAY60R1SpqbeKA4S9EjNc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPNv9UEan8fLKmcI/RK53nX+TD9Pm/RfOKVa1b/leKSByIzMBWQFwa6wxwtr/shl6zvjwT4E9uRu6TsRTYnk+AI=` |
+> |uscentraleuap | ecdsa-sha2-nistp384 | `SeX6s483/LpSdx8kIy+KWm5Bb6zy6wr3icyhq1DQydU` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGcFnKk6dlO6bG0YSPZruEp6bXZP3h6QvCA+jmxScUz7MIgufRT3lrxkrZs0RM9vp44i2HXOSowsgvVPDQMBJRF5gXsEU1Z9SrpqOUrlcyhzfy0SkaewuNM6VoAYjUn44g==` |
+> |uscentraleuap | rsa-sha2-256 | `tYuwFgj2b/FNC4MQm949sCucp1Atfq0z7NsF7pQU25c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCmeyxAgrNZ+q5e1RSPL2RBtjVLTKim3lviSwdIRhddDzSQTqbl2oRAxegP5shXuBF6A5zkByGMWWfSE7sSU/zyYHH3+J8lN5NFOflPgILOcPNvQOS88i3vHdF2yguSETkWSxyBcBC36Fv5YAyRfJqEq97He1nbvIS30/1XEuOZOgk9qzaq+f18PsJjs+m24y9oqr3WgiVT/3DnD/5XW7JjESZy0YGDWRcivYZDasTQzFJTOIeMRqTXsqhYkaPkigPC/rWjUzgp9fXlknQeFrSgT/f3NvMZ+bG2WMSn28bzyOs9DZAU1LmYNkAcjABQLniQUqjoM+RRt439et9ZEOEJ` |
+> |uscentraleuap | rsa-sha2-512 | `6gy1BGZMfD37oV7ApF+SvUMcfhZumyftkNYGs5PN34I` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCzdsoT/Ark4c4jqgx0s/7fcSNLCfj/tWvdNKVFkl3B87npb26g+bJkV35/iqSdsE5T82+OILDxXGBDcZbfbZyvfYx/EEWKId7r/WRvrQDYkAcS/z1MJbpUFwxmcuqaRMYjWmzwcc/nde6Awelte0Rc9wueTq58ZUdL7VUvtPCI88SdrB5Nn5x9DoPcuGAn+8AC1UsRT4VJB2DgMRmxqUe0fUq1bMSDanAmL7ICc2s6GFvWA4JJ2g5D74MKMfvw/mBy02FJvFyJivQ1NPnQ+6CJ6CmfE0mRVCrrBZC3qBXST5NEVf4sVvhAacoR7Qn2vfRaS2tJXrFbLC5/omYNUy1J` |
+> |useast2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
+> |useast2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
+> |useast2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
+> |useast2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
+ <sup>1</sup> The SHA 256 fingerprint is used by OpenSSH and WinSCP.
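As a sanity check, a fingerprint in the table can be recomputed from its public key: the OpenSSH-style SHA 256 fingerprint is the Base64-encoded SHA-256 hash of the Base64-decoded key blob, with trailing `=` padding removed. A minimal PowerShell sketch, using the key value from the eastus2euap ecdsa-sha2-nistp256 row above:

```PowerShell
# Recompute an OpenSSH-style SHA 256 fingerprint from a host key's Base64 blob.
$publicKey = "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8="
$keyBytes = [Convert]::FromBase64String($publicKey)
$hash = [System.Security.Cryptography.SHA256]::Create().ComputeHash($keyBytes)

# OpenSSH prints the Base64 digest without '=' padding; compare the result
# against the fingerprint column in the table.
[Convert]::ToBase64String($hash).TrimEnd('=')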
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-troubleshooting.md
Follow these steps to find them:
- Windows: Open the installation directory, select **/bin/**, and then double-click **openssl.exe**.
- Mac and Linux: Run `openssl` from a terminal.
-1. Run the command `openssl s_client -showcerts -connect <hostname>:443` for any of the Microsoft or Azure host names that your storage resources are behind. For more information, see this [list of host names that are frequently accessed by Storage Explorer](https://docs.microsoft.com/azure/storage/common/storage-explorer-network).
+1. Run the command `openssl s_client -showcerts -connect <hostname>:443` for any of the Microsoft or Azure host names that your storage resources are behind. For more information, see this [list of host names that are frequently accessed by Storage Explorer](./storage-explorer-network.md).
1. Look for self-signed certificates. If the subject `("s:")` and issuer `("i:")` are the same, the certificate is most likely self-signed.
1. When you find the self-signed certificates, for each one, copy and paste everything from, and including, `--BEGIN CERTIFICATE--` to `--END CERTIFICATE--` into a new .cer file.
1. Open Storage Explorer and go to **Edit** > **SSL Certificates** > **Import Certificates**. Then use the file picker to find, select, and open the .cer files you created.
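The copy-and-paste step can also be scripted. A rough PowerShell sketch that writes one .cer file per certificate in the chain; the host name and output file names are illustrative:

```PowerShell
# Capture the certificate chain (empty stdin makes openssl exit after the handshake),
# then write each PEM block to its own .cer file.
$output = '' | openssl s_client -showcerts -connect example.blob.core.windows.net:443 2>$null

$inCert = $false
$block = @()
$count = 0
foreach ($line in $output) {
    if ($line -match 'BEGIN CERTIFICATE') { $inCert = $true; $block = @() }
    if ($inCert) { $block += $line }
    if ($line -match 'END CERTIFICATE') {
        $inCert = $false
        $block | Set-Content -Path ("cert{0}.cer" -f $count)
        $count++
    }
}
```

Inspect each resulting file and import only the ones that are actually self-signed.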
If none of these solutions work for you, you can:
- Create a support ticket.
- [Open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues) by selecting the **Report issue to GitHub** button in the lower-left corner.
-![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
+![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-planning.md
The following regions require you to request access to Azure Storage before you
- South Africa West
- UAE Central
-To request access for these regions, follow the process in [this document](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process).
+To request access for these regions, follow the process in [this document](/troubleshoot/azure/general/region-access-request-process).
## Redundancy
[!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
These increases in both the number of recalls and the amount of data being recal
* [Consider firewall and proxy settings](file-sync-firewall-and-proxy.md)
* [Deploy Azure Files](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json)
* [Deploy Azure File Sync](file-sync-deployment-guide.md)
-* [Monitor Azure File Sync](file-sync-monitoring.md)
+* [Monitor Azure File Sync](file-sync-monitoring.md)
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 10/05/2021 Last updated : 01/14/2022
If your OU enforces password expiration, you must update the password before the
Keep the SID of the newly created identity; you'll need it for the next step. The identity you've created that represents the storage account doesn't need to be synced to Azure AD.
+### Enable the feature on your storage account
+
+Modify the following command to include the configuration details for your domain properties, then run it to enable the feature. The storage account SID required in the following command is the SID of the identity you created in your AD DS in [the previous section](#create-an-identity-representing-the-storage-account-in-your-ad-manually).
+
+```PowerShell
+# Set the feature flag on the target storage account and provide the required AD domain information
+Set-AzStorageAccount `
+ -ResourceGroupName "<your-resource-group-name-here>" `
+ -Name "<your-storage-account-name-here>" `
+ -EnableActiveDirectoryDomainServicesForFile $true `
+ -ActiveDirectoryDomainName "<your-domain-dns-root-here>" `
+ -ActiveDirectoryNetBiosDomainName "<your-domain-dns-root-here>" `
+ -ActiveDirectoryForestName "<your-forest-name-here>" `
+ -ActiveDirectoryDomainGuid "<your-guid-here>" `
+ -ActiveDirectoryDomainsid "<your-domain-sid-here>" `
+ -ActiveDirectoryAzureStorageSid "<your-storage-account-sid>"
+```
+ #### (Optional) Enable AES256 encryption
-If you want to enable AES 256 encryption, follow the steps in this section. If you plan to use RC4, you can skip this section.
+To enable AES 256 encryption, follow the steps in this section. If you plan to use RC4, skip this section.
The domain object that represents your storage account must meet the following requirements:
- The storage account name cannot exceed 15 characters.
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword
```
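For context, `$KerbKey` above holds the value of the storage account's Kerberos key. A sketch of how it is typically retrieved with the Az.Storage cmdlets; the resource group and account names are placeholders:

```PowerShell
# Generate (or rotate) the kerb1 key on the storage account, then read its value.
New-AzStorageAccountKey -ResourceGroupName "<your-resource-group-name-here>" `
    -Name "<your-storage-account-name-here>" -KeyName kerb1 | Out-Null

$KerbKeys = Get-AzStorageAccountKey -ResourceGroupName "<your-resource-group-name-here>" `
    -Name "<your-storage-account-name-here>" -ListKerbKey
$KerbKey = ($KerbKeys | Where-Object { $_.KeyName -eq "kerb1" }).Value
```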
-### Enable the feature on your storage account
-
-Now you can enable the feature on your storage account. Provide some configuration details for the domain properties in the following command, then run it. The storage account SID required in the following command is the SID of the identity you created in your AD DS in [the previous section](#create-an-identity-representing-the-storage-account-in-your-ad-manually).
-
-```PowerShell
-# Set the feature flag on the target storage account and provide the required AD domain information
-Set-AzStorageAccount `
- -ResourceGroupName "<your-resource-group-name-here>" `
- -Name "<your-storage-account-name-here>" `
- -EnableActiveDirectoryDomainServicesForFile $true `
- -ActiveDirectoryDomainName "<your-domain-name-here>" `
- -ActiveDirectoryNetBiosDomainName "<your-netbios-domain-name-here>" `
- -ActiveDirectoryForestName "<your-forest-name-here>" `
- -ActiveDirectoryDomainGuid "<your-guid-here>" `
- -ActiveDirectoryDomainsid "<your-domain-sid-here>" `
- -ActiveDirectoryAzureStorageSid "<your-storage-account-sid>"
-```
### Debugging

You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged-on AD user. This cmdlet is supported in AzFilesHybrid v0.1.2 and later. For more information on the checks performed in this cmdlet, see [Unable to mount Azure Files with AD credentials](storage-troubleshoot-windows-file-connection-problems.md#unable-to-mount-azure-files-with-ad-credentials) in the troubleshooting guide for Windows.
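A minimal invocation sketch, assuming the AzFilesHybrid module is imported; the names are placeholders:

```PowerShell
# Run the basic AD configuration checks against the target storage account.
Debug-AzStorageAccountAuth `
    -StorageAccountName "<your-storage-account-name-here>" `
    -ResourceGroupName "<your-resource-group-name-here>" `
    -Verbose
```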
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Previously updated : 07/22/2021 Last updated : 01/14/2022
The following diagram illustrates the end-to-end workflow for enabling Azure AD
![Diagram showing Azure AD over SMB for Azure Files workflow](media/storage-files-active-directory-enable/azure-active-directory-over-smb-workflow.png)
+## (Optional) Use AES 256 encryption
+
+By default, Azure AD DS authentication uses Kerberos RC4 encryption. To use Kerberos AES256 instead, follow these steps:
+
+As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group have the necessary permissions), open the Azure Cloud Shell.
+
+Execute the following commands:
+
+```azurepowershell
+# 1. Find the service account in your managed domain that represents the storage account.
+
+$storageAccountName = "<InsertStorageAccountNameHere>"
+$searchFilter = "Name -like '*{0}*'" -f $storageAccountName
+$userObject = Get-ADUser -filter $searchFilter
+
+if ($null -eq $userObject)
+{
+    Write-Error "Cannot find AD object for storage account: $storageAccountName" -ErrorAction Stop
+}
+
+# 2. Set the KerberosEncryptionType of the object
+
+Set-ADUser $userObject -KerberosEncryptionType AES256
+
+# 3. Validate that the object now has the expected (AES256) encryption type.
+
+Get-ADUser $userObject -properties KerberosEncryptionType
+```
## Enable Azure AD DS authentication for your account

To enable Azure AD DS authentication over SMB for Azure Files, you can set a property on storage accounts by using the Azure portal, Azure PowerShell, or Azure CLI. Setting this property implicitly "domain joins" the storage account with the associated Azure AD DS deployment. Azure AD DS authentication over SMB is then enabled for all new and existing file shares in the storage account.
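With Azure PowerShell, for example, setting that property is a single cmdlet call along these lines (a sketch with placeholder names; the portal and Azure CLI expose the same switch):

```powershell
# Enable Azure AD DS authentication over SMB on the storage account.
Set-AzStorageAccount `
    -ResourceGroupName "<your-resource-group-name>" `
    -Name "<your-storage-account-name>" `
    -EnableAzureActiveDirectoryDomainServicesForFile $true
```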
storage Tiger Bridge Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide.md
Before you begin the deployment, we recommend you plan your Tiger Bridge use cas
- Consider long-term planning for your future data workflows to allow proper selection for defining policies (reclamation, replication, archiving, and snapshot policies).

### Prepare Azure Blob Storage
-Before Tiger Bridge can be deployed, you need to create a **Storage Account**. Storage account will be used as a repository for data that is tiered, or replicated from Tiger Bridge solution. Tiger Bridge supports all Azure Storage services, including Archive Tier for Azure Storage Block blobs. That support allows to store less frequently data in cost-effective way but still access it through the well-known pane of glass. For additional information on Tiger Bridge, visit our comparison of [ISV solutions](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services).
+Before Tiger Bridge can be deployed, you need to create a **storage account**. The storage account is used as a repository for data that is tiered or replicated from the Tiger Bridge solution. Tiger Bridge supports all Azure Storage services, including the Archive tier for Azure Storage block blobs. That support allows you to store less frequently accessed data cost-effectively while still accessing it through a single pane of glass. For additional information on Tiger Bridge, see our comparison of [ISV solutions](./isv-file-services.md).
1. Create a storage account

   :::image type="content" source="./media/tiger-bridge-deployment-guide/azure-create-storage-account.png" alt-text="Screenshot that shows how to create a storage account for Azure Blob Storage service.":::
Before you can install Tiger Bridge, you need to have a Windows file server inst
1. Enter the account name and key and the Blob endpoint in the respective fields. Use the storage account name created in the [prepare Azure Blob storage step](#prepare-azure-blob-storage).
1. Choose whether to access the target using secure transfer (SSL/TLS) by selecting or clearing the check box. Secure transfer is recommended for production workloads. If you disable it, make sure that you also disabled the **Secure transfer required** option in the storage account's **Configuration** settings.

   :::image type="content" source="./media/tiger-bridge-deployment-guide/azure-secure-transfer.png" alt-text="Screenshot that shows how to enable, or disable secure transfer for a storage account.":::
- 1. In the **Default access tier**, select whether to use the Hot, Cool, or Archive tier of Azure Storage. This tier will be used for any data that doesn't have a tier set. Learn more on [Hot, Cool, and Archive access tiers for blob data](/azure/storage/blobs/access-tiers-overview)
- 1. In Rehydration priority, select whether offline files should be rehydrated using the Standard, or the High option. Learn more on [Blob rehydration from the Archive tier](/azure/storage/blobs/access-tiers-overview)
+ 1. In the **Default access tier**, select whether to use the Hot, Cool, or Archive tier of Azure Storage. This tier will be used for any data that doesn't have a tier set. Learn more on [Hot, Cool, and Archive access tiers for blob data](../../../blobs/access-tiers-overview.md)
+ 1. In Rehydration priority, select whether offline files should be rehydrated using the Standard, or the High option. Learn more on [Blob rehydration from the Archive tier](../../../blobs/access-tiers-overview.md)
1. Select **List containers** to display the list of containers available for the account you have specified, and then select the container on the target, which will be paired with the selected source.
1. In the next step, select what to do with data already in the container, and then press **OK**.

   :::image type="content" source="./media/tiger-bridge-deployment-guide/tiger-bridge-existing-data-policy.png" alt-text="Screenshot that shows policies for managing existing data in the storage account container.":::
Tiger Technology provides 365x24x7 support for Tiger Bridge. To contact support,
## Next steps - [Tiger Bridge website](https://www.tiger-technology.com/software/tiger-bridge/) - [Tiger Bridge guides](https://www.tiger-technology.com/software/tiger-bridge/docs/)-- [Azure Storage partners for primary and secondary storage](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview)
+- [Azure Storage partners for primary and secondary storage](./partner-overview.md)
- [Tiger Bridge Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)-- [Running ISV file services in Azure](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services)
+- [Running ISV file services in Azure](./isv-file-services.md)
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/automation-powershell.md
Last updated 11/03/2021
# Auto-pause a job with PowerShell and Azure Functions or Azure Automation
-Some applications require a stream processing approach, made easy with [Azure Stream Analytics](/azure/stream-analytics/stream-analytics-introduction) (ASA), but don't strictly need to run continuously. The reasons are various:
+Some applications require a stream processing approach, made easy with [Azure Stream Analytics](./stream-analytics-introduction.md) (ASA), but don't strictly need to run continuously. The reasons are various:
- Input data arriving on a schedule (top of the hour...)
- A sparse or low volume of incoming data (few records per minute)
Some applications require a stream processing approach, made easy with [Azure St
The benefit of not running these jobs continuously will be **cost savings**, as Stream Analytics jobs are [billed](https://azure.microsoft.com/pricing/details/stream-analytics/) per Streaming Unit **over time.**
-This article will explain how to set up auto-pause for an Azure Stream Analytics job. In it, we configure a task that automatically pauses and resumes a job on a schedule. If we're using the term **pause**, the actual job [state](/azure/stream-analytics/job-states) is **stopped**, as to avoid any billing.
+This article explains how to set up auto-pause for an Azure Stream Analytics job. In it, we configure a task that automatically pauses and resumes a job on a schedule. Although we use the term **pause**, the actual job [state](./job-states.md) is **stopped**, which avoids any billing.
We'll discuss the overall design first, then go through the required components, and finally discuss some implementation details.
For this example, we want our job to run for N minutes, before pausing it for M
![Diagram that illustrates the behavior of the auto-paused job over time](./media/automation/principle.png)
-When running, the task shouldn't stop the job until its metrics are healthy. The metrics of interest will be the input backlog and the [watermark](/azure/stream-analytics/stream-analytics-time-handling#background-time-concepts). We'll check that both are at their baseline for at least N minutes. This behavior translates to two actions:
+When running, the task shouldn't stop the job until its metrics are healthy. The metrics of interest will be the input backlog and the [watermark](./stream-analytics-time-handling.md#background-time-concepts). We'll check that both are at their baseline for at least N minutes. This behavior translates to two actions:
- A stopped job is restarted after M minutes
- A running job is stopped anytime after N minutes, as soon as its backlog and watermark metrics are healthy
When running, the task shouldn't stop the job until its metrics are healthy. The
As an example, let's consider N = 5 minutes, and M = 10 minutes. With these settings, a job has at least 5 minutes to process all the data received in 15. Potential cost savings are up to 66%.
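The arithmetic behind that estimate can be sketched as follows (illustrative only; in practice the job runs a bit longer than N minutes, as discussed later for metric lag):

```powershell
# Illustrative duty-cycle arithmetic: the job runs N minutes out of every N + M.
$N = 5    # minutes the job runs
$M = 10   # minutes the job stays stopped
$dutyCycle = $N / ($N + $M)   # fraction of time billed: 5/15, about 0.33
$savings   = 1 - $dutyCycle   # fraction saved: about 0.66, i.e. up to 66%
"Duty cycle: {0:P0}, potential savings: {1:P0}" -f $dutyCycle, $savings
```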
-To restart the job, we'll use the `When Last Stopped` [start option](/azure/stream-analytics/start-job#start-options). This option tells ASA to process all the events that were backlogged upstream since the job was stopped. There are two caveats in this situation. First, the job can't stay stopped longer than the retention period of the input stream. If we only run the job once a day, we need to make sure that the [event hub retention period](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-) is more than one day. Second, the job needs to have been started at least once for the mode `When Last Stopped` to be accepted (else it has literally never been stopped before). So the first run of a job needs to be manual, or we would need to extend the script to cover for that case.
+To restart the job, we'll use the `When Last Stopped` [start option](./start-job.md#start-options). This option tells ASA to process all the events that were backlogged upstream since the job was stopped. There are two caveats in this situation. First, the job can't stay stopped longer than the retention period of the input stream. If we only run the job once a day, we need to make sure that the [event hub retention period](/azure/event-hubs/event-hubs-faq#what-is-the-maximum-retention-period-for-events-) is more than one day. Second, the job needs to have been started at least once for the mode `When Last Stopped` to be accepted (else it has literally never been stopped before). So the first run of a job needs to be manual, or we would need to extend the script to cover for that case.
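In Az PowerShell, that restart maps to a call along these lines (a sketch; `LastOutputEventTime` is the output start mode that corresponds to **When Last Stopped**, and the resource names are placeholders):

```powershell
# Restart the job from where it last stopped, reprocessing backlogged events.
Start-AzStreamAnalyticsJob `
    -ResourceGroupName "<your-resource-group>" `
    -Name "<your-asa-job-name>" `
    -OutputStartMode "LastOutputEventTime"
```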
The last consideration is to make these actions idempotent. This way, they can be repeated at will with no side effects, for both ease of use and resiliency.
We anticipate the need to interact with ASA on the following **aspects**:
For *ASA Resource Management*, we can use either the [REST API](/rest/api/streamanalytics/), the [.NET SDK](/dotnet/api/microsoft.azure.management.streamanalytics) or one of the CLI libraries ([Az CLI](/cli/azure/stream-analytics), [PowerShell](/powershell/module/az.streamanalytics)).
-For *Metrics* and *Logs*, in Azure everything is centralized under [Azure Monitor](/azure/azure-monitor/overview), with a similar choice of API surfaces. We have to remember that logs and metrics are always 1 to 3 minutes behind when querying the APIs. So setting N at 5 usually means the job will be running 6 to 8 minutes in reality. Another thing to consider is that metrics are always emitted. When the job is stopped, the API returns empty records. We'll have to clean up the output of our API calls to only look at relevant values.
+For *Metrics* and *Logs*, in Azure everything is centralized under [Azure Monitor](../azure-monitor/overview.md), with a similar choice of API surfaces. We have to remember that logs and metrics are always 1 to 3 minutes behind when querying the APIs. So setting N at 5 usually means the job will be running 6 to 8 minutes in reality. Another thing to consider is that metrics are always emitted. When the job is stopped, the API returns empty records. We'll have to clean up the output of our API calls to only look at relevant values.
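The retrieval and cleanup could look like this sketch (the resource ID and the 5-minute window are assumptions for illustration):

```powershell
# Fetch the last few minutes of backlog metrics at 1-minute granularity.
$metric = Get-AzMetric `
    -ResourceId "<your-asa-job-resource-id>" `
    -MetricName "InputEventsSourcesBacklogged" `
    -AggregationType Maximum `
    -TimeGrain 00:01:00 `
    -StartTime (Get-Date).AddMinutes(-5) `
    -EndTime (Get-Date)

# Drop the empty data points emitted while the job is stopped,
# keeping only points that actually carry a value.
$validPoints = $metric.Data | Where-Object { $null -ne $_.Maximum }
```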
### Scripting language
In PowerShell, we'll use the [Az PowerShell](/powershell/azure/new-azureps-modul
- [Get-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/get-azstreamanalyticsjob) for the current job status
- [Start-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/start-azstreamanalyticsjob) / [Stop-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/stop-azstreamanalyticsjob)
-- [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) with `InputEventsSourcesBacklogged` [(from ASA metrics)](/azure/azure-monitor/essentials/metrics-supported#microsoftstreamanalyticsstreamingjobs)
+- [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) with `InputEventsSourcesBacklogged` [(from ASA metrics)](../azure-monitor/essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs)
- [Get-AzActivityLog](/powershell/module/az.monitor/get-azactivitylog) for event names beginning with `Stop Job`

### Hosting service

To host our PowerShell task, we'll need a service that offers scheduled runs. There are lots of options, but looking at serverless ones:

-- [Azure Functions](/azure/azure-functions/functions-overview), a serverless compute engine that can run almost any piece of code. Functions offer a [timer trigger](/azure/azure-functions/functions-bindings-timer?tabs=csharp) that can run up to every second
-- [Azure Automation](/azure/automation/overview), a managed service built for operating cloud workloads and resources. Which fits the bill, but whose minimal schedule interval is 1 hour (less with [workarounds](/azure/automation/shared-resources/schedules#schedule-runbooks-to-run-more-frequently)).
+- [Azure Functions](../azure-functions/functions-overview.md), a serverless compute engine that can run almost any piece of code. Functions offer a [timer trigger](../azure-functions/functions-bindings-timer.md?tabs=csharp) that can run up to every second
+- [Azure Automation](../automation/overview.md), a managed service built for operating cloud workloads and resources. Which fits the bill, but whose minimal schedule interval is 1 hour (less with [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently)).
If we don't mind the workaround, Azure Automation is the easier way to deploy the task. But to be able to compare, in this article we'll be writing a local script first. Once we have a functioning script, we'll deploy it both in Functions and in an Automation Account.

### Developer tools
-We highly recommend local development using [VSCode](https://code.visualstudio.com/), both for [Functions](/azure/azure-functions/create-first-function-vs-code-powershell) and [ASA](/azure/stream-analytics/quick-create-visual-studio-code). Using a local IDE allows us to use source control and to easily repeat deployments. But for the sake of brevity, here we'll illustrate the process in the [Azure portal](https://portal.azure.com).
+We highly recommend local development using [VSCode](https://code.visualstudio.com/), both for [Functions](../azure-functions/create-first-function-vs-code-powershell.md) and [ASA](./quick-create-visual-studio-code.md). Using a local IDE allows us to use source control and to easily repeat deployments. But for the sake of brevity, here we'll illustrate the process in the [Azure portal](https://portal.azure.com).
## Writing the PowerShell script locally
Write-Output "asaRobotPause - Job $($asaJobName) was $($currentJobState), is now
## Option 1: Hosting the task in Azure Functions
-For reference, the Azure Functions team maintains an exhaustive [PowerShell developer guide](/azure/azure-functions/functions-reference-powershell?tabs=portal).
+For reference, the Azure Functions team maintains an exhaustive [PowerShell developer guide](../azure-functions/functions-reference-powershell.md?tabs=portal).
First we'll need a new **Function App**. A Function App is similar to a solution that can host multiple Functions.
-The full procedure is [here](/azure/azure-functions/functions-create-function-app-portal#create-a-function-app), but the gist is to go in the [Azure portal](https://portal.azure.com), and create a new Function App with:
+The full procedure is [here](../azure-functions/functions-create-function-app-portal.md#create-a-function-app), but the gist is to go in the [Azure portal](https://portal.azure.com), and create a new Function App with:
- Publish: **Code**
- Runtime: **PowerShell Core**
Once it's provisioned, let's start with its overall configuration.
### Managed identity for Azure Functions
-The Function needs permissions to start and stop the ASA job. We'll assign these permissions via a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview).
+The Function needs permissions to start and stop the ASA job. We'll assign these permissions via a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](/azure/app-service/overview-managed-identity?toc=%2Fazure%2Fazure-functions%2Ftoc.json&tabs=dotnet#using-the-azure-portal).
+The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](../app-service/overview-managed-identity.md?tabs=dotnet&toc=%2fazure%2fazure-functions%2ftoc.json#using-the-azure-portal).
Now we can grant the right permissions to that identity on the ASA job we want to auto-pause. For that, in the Portal for the **ASA job** (not the Function one), in **Access control (IAM)**, add a **role assignment** to the role *Contributor* for a member of type *Managed Identity*, selecting the name of the Function above.
Write-Host "asaRobotPause - PowerShell timer trigger function is starting at tim
### Parameters for Azure Functions
-The best way to pass our parameters to the script in Functions is to use the Function App application settings as [environment variables](/azure/azure-functions/functions-reference-powershell?tabs=portal#environment-variables).
+The best way to pass our parameters to the script in Functions is to use the Function App application settings as [environment variables](../azure-functions/functions-reference-powershell.md?tabs=portal#environment-variables).
-To do so, the first step is in the Function App page, to define our parameters as **App Settings** following that [procedure](/azure/azure-functions/functions-how-to-use-azure-function-app-settings?tabs=portal#settings). We'll need:
+To do so, the first step is in the Function App page, to define our parameters as **App Settings** following that [procedure](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings). We'll need:
|Name|Value|
|-|-|
$asaJobName = $env:asaJobName
### PowerShell module requirements
-The same way we had to install Az PowerShell locally to use the ASA commands (like `Start-AzStreamAnalyticsJob`), we'll need to [add it to the Function App host](/azure/azure-functions/functions-reference-powershell?tabs=portal#dependency-management).
+The same way we had to install Az PowerShell locally to use the ASA commands (like `Start-AzStreamAnalyticsJob`), we'll need to [add it to the Function App host](../azure-functions/functions-reference-powershell.md?tabs=portal#dependency-management).
To do that, we can go to `Functions` > `App files` in the Function App page, select `requirements.psd1`, and uncomment the line `'Az' = '6.*'`. For this change to take effect, the whole app needs to be restarted.
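After the edit, the module section of `requirements.psd1` looks something like this (the `6.*` version pin is the one pre-commented by the Functions template; adjust as newer Az majors ship):

```powershell
# requirements.psd1 - modules managed automatically by the Functions service.
@{
    # Pull in the whole Az module family, latest 6.x release.
    'Az' = '6.*'
}
```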
To do that, we can go in `Functions` > `App files` of the Function App page, sel
Once all that configuration is done, we can create the specific function, inside the Function App, that will run our script.
-We'll develop in the portal, a function triggered on a timer (every minute with `0 */1 * * * *`, which [reads](/azure/azure-functions/functions-bindings-timer?tabs=csharp#ncrontab-expressions) "*on second 0 of every 1 minute*"):
+We'll develop in the portal, a function triggered on a timer (every minute with `0 */1 * * * *`, which [reads](../azure-functions/functions-bindings-timer.md?tabs=csharp#ncrontab-expressions) "*on second 0 of every 1 minute*"):
![Screenshot of creating a new timer trigger function in the function app](./media/automation/new-function-timer.png)
Next set up the **Alert logic** as follows:
- Threshold value: 0
- Frequency of evaluation: 5 minutes
-From there, reuse or create a new [action group](/azure/azure-monitor/alerts/action-groups?WT.mc_id=Portal-Microsoft_Azure_Monitoring), then complete the configuration.
+From there, reuse or create a new [action group](../azure-monitor/alerts/action-groups.md?WT.mc_id=Portal-Microsoft_Azure_Monitoring), then complete the configuration.
To check that the alert was set up properly, we can add `throw "Testing the alert"` anywhere in the PowerShell script, and wait 5 minutes to receive an email.
To check that the alert was set up properly, we can add `throw "Testing the aler
First we'll need a new **Automation Account**. An Automation Account is similar to a solution that can host multiple runbooks.
-The procedure is [here](/azure/automation/quickstarts/create-account-portal). Here we can select to use a system-assigned managed identity directly in the `advanced` tab.
+The procedure is [here](../automation/quickstarts/create-account-portal.md). Here we can select to use a system-assigned managed identity directly in the `advanced` tab.
-For reference, the Automation team has a [good tutorial](/azure/automation/learn/powershell-runbook-managed-identity) to get started on PowerShell runbooks.
+For reference, the Automation team has a [good tutorial](../automation/learn/powershell-runbook-managed-identity.md) to get started on PowerShell runbooks.
### Parameters for Azure Automation
Param(
### Managed identity for Azure Automation
-The Automation Account should have received a managed identity during provisioning. But if needed, we can enable one using that [procedure](/azure/automation/enable-managed-identity-for-automation).
+The Automation Account should have received a managed identity during provisioning. But if needed, we can enable one using that [procedure](../automation/enable-managed-identity-for-automation.md).
Like for the function, we'll need to grant the right permissions on the ASA job we want to auto-pause.
We can now paste our script and test it. The full script can be copied from [her
We can check that everything is wired properly in the `Test Pane`.
-After that we need to `Publish` the job, which will allow us to link the runbook to a schedule. Creating and linking the schedule is a straightforward process that won't be discussed here. Now is a good time to remember that there are [workarounds](/azure/automation/shared-resources/schedules#schedule-runbooks-to-run-more-frequently) to achieve schedule intervals under 1 hour.
+After that we need to `Publish` the job, which will allow us to link the runbook to a schedule. Creating and linking the schedule is a straightforward process that won't be discussed here. Now is a good time to remember that there are [workarounds](../automation/shared-resources/schedules.md#schedule-runbooks-to-run-more-frequently) to achieve schedule intervals under 1 hour.
-Finally, we can set up an alert. The first step is to enable logs via the [Diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=CMD#create-in-azure-portal) of the Automation Account. The second step is to capture errors via a query like we did for Functions.
+Finally, we can set up an alert. The first step is to enable logs via the [Diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=CMD#create-in-azure-portal) of the Automation Account. The second step is to capture errors via a query like we did for Functions.
## Outcome
You've learned the basics of using PowerShell to automate the management of Azur
- [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
- [Azure Stream Analytics Management .NET SDK](/previous-versions/azure/dn889315(v=azure.100))
- [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
-- [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
+- [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
stream-analytics Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/input-validation.md
This article illustrates how to implement this technique.
## Context
-Azure Stream Analytics (ASA) jobs process data coming from streams. Streams are sequences of raw data that are transmitted [serialized](https://en.wikipedia.org/wiki/Serialization) (CSV, JSON, AVRO...). To read from a stream, an application will need to know the specific serialization format used. In ASA, the **event serialization format** has to be defined when configuring a [streaming input](/azure/stream-analytics/stream-analytics-define-inputs).
+Azure Stream Analytics (ASA) jobs process data coming from streams. Streams are sequences of raw data that are transmitted [serialized](https://en.wikipedia.org/wiki/Serialization) (CSV, JSON, AVRO...). To read from a stream, an application will need to know the specific serialization format used. In ASA, the **event serialization format** has to be defined when configuring a [streaming input](./stream-analytics-define-inputs.md).
Once the data is deserialized, **a schema needs to be applied to give it meaning**. By schema we mean the list of fields in the stream, and their respective data types. With ASA, the schema of the incoming data doesn't need to be set at the input level. ASA instead supports **dynamic input schemas** natively. It expects **the list of fields (columns), and their types, to change between events (rows)**. ASA will also infer data types when none is provided explicitly, and try to implicitly cast types when needed.
There's another discrepancy. **ASA uses its own type system** that doesn't match
Back to our query, here we intend to:

-- Pass `readingStr` to a [JavaScript UDF](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions)
+- Pass `readingStr` to a [JavaScript UDF](./stream-analytics-javascript-user-defined-functions.md)
- Count the number of records in the array
- Round `readingNum` to the second decimal place
- Insert the data into a SQL table
It's a good practice to map what happens to each field as it goes through the jo
## Prerequisites
-We'll develop the query in **Visual Studio Code** using the **ASA Tools** extension. The first steps of this [tutorial](/azure/stream-analytics/quick-create-visual-studio-code) will guide you through installing the required components.
+We'll develop the query in **Visual Studio Code** using the **ASA Tools** extension. The first steps of this [tutorial](./quick-create-visual-studio-code.md) will guide you through installing the required components.
-In VS Code, we'll use [local runs](/azure/stream-analytics/visual-studio-code-local-run-all) with **local** input/output to not incur any cost, and speed up the debugging loop. **We won't need** to set up an event hub or an Azure SQL Database.
+In VS Code, we'll use [local runs](./visual-studio-code-local-run-all.md) with **local** input/output to not incur any cost, and speed up the debugging loop. **We won't need** to set up an event hub or an Azure SQL Database.
## Base query

Let's start with a basic implementation, with **no input validation**. We'll add it in the next section.
-In VS Code, we'll [create a new ASA project](/azure/stream-analytics/quick-create-visual-studio-code#create-a-stream-analytics-project)
+In VS Code, we'll [create a new ASA project](./quick-create-visual-studio-code.md#create-a-stream-analytics-project)
In the `input` folder, we'll create a new JSON file called `data_readings.json` and add the following records to it:
In the `input` folder, we'll create a new JSON file called `data_readings.json`
]
```
-Then we'll [define a local input](/azure/stream-analytics/visual-studio-code-local-run#define-a-local-input), called `readings`, referencing the JSON file we created above.
+Then we'll [define a local input](./visual-studio-code-local-run.md#define-a-local-input), called `readings`, referencing the JSON file we created above.
Once configured it should look like this:
function main(arg1) {
}
```
-In [local runs](/azure/stream-analytics/visual-studio-code-local-run-all), we don't need to define outputs. We don't even need to use `INTO` unless there are more than one output. In the `.asaql` file, we can replace the existing query by:
+In [local runs](./visual-studio-code-local-run-all.md), we don't need to define outputs. We don't even need to use `INTO` unless there are more than one output. In the `.asaql` file, we can replace the existing query by:
```SQL
SELECT
GROUP BY
Let's quickly go through the query we submitted:

-- To count the number of records in each array, we first need to unpack them. We'll use **[CROSS APPLY](/stream-analytics-query/apply-azure-stream-analytics)** and [GetArrayElements()](/stream-analytics-query/getarrayelements-azure-stream-analytics) (more [samples here](/azure/stream-analytics/stream-analytics-parsing-json))
+- To count the number of records in each array, we first need to unpack them. We'll use **[CROSS APPLY](/stream-analytics-query/apply-azure-stream-analytics)** and [GetArrayElements()](/stream-analytics-query/getarrayelements-azure-stream-analytics) (more [samples here](./stream-analytics-parsing-json.md))
- Doing so, we surface two data sets in the query: the original input and the array values. To make sure we don't mix up fields, we define aliases (`AS r`) and use them everywhere
- Then to actually `COUNT` the array values, we need to aggregate with **[GROUP BY](/stream-analytics-query/group-by-azure-stream-analytics)**
- - For that we must define a [time window](/azure/stream-analytics/stream-analytics-window-functions). Here since we don't need one for our logic, the [snapshot window](/stream-analytics-query/snapshot-window-azure-stream-analytics) is the right choice
+ - For that we must define a [time window](./stream-analytics-window-functions.md). Here since we don't need one for our logic, the [snapshot window](/stream-analytics-query/snapshot-window-azure-stream-analytics) is the right choice
- We also have to `GROUP BY` all the fields, and project them all in the `SELECT`. Explicitly projecting fields is a good practice, as `SELECT *` will let errors flow through from the input to the output
- If we define a time window, we may want to define a timestamp with **[TIMESTAMP BY](/stream-analytics-query/timestamp-by-azure-stream-analytics)**. Here it's not necessary for our logic to work. For local runs, without `TIMESTAMP BY` all records are loaded on a single timestamp, the run start time.
- We use the UDF to filter readings where `readingStr` has fewer than two characters. We should have used [LEN](/stream-analytics-query/len-azure-stream-analytics) here. We're using a UDF for demonstration purposes only
-We can [start a run](/azure/stream-analytics/visual-studio-code-local-run#run-queries-locally) and observe the data being processed:
+We can [start a run](./visual-studio-code-local-run.md#run-queries-locally) and observe the data being processed:
|deviceId|readingTimestamp|readingStr|readingNum|arrayCount|
|-|-|-|-|-|
Let's extend our query to validate the input.
The first step of input validation is to define the schema expectations of the core business logic. Looking back at original requirement, our main logic is to: -- Pass `readingStr` to a [JavaScript UDF](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions) to measure its length
+- Pass `readingStr` to a [JavaScript UDF](./stream-analytics-javascript-user-defined-functions.md) to measure its length
- Count the number of records in the array - Round `readingNum` to the second decimal place - Insert the data into a SQL table
FROM readingsToBeRejected
[GetType](/stream-analytics-query/gettype-azure-stream-analytics) can be used to explicitly check for a type. It works well with [CASE](/stream-analytics-query/case-azure-stream-analytics) in the projection, or [WHERE](/stream-analytics-query/where-azure-stream-analytics) at the set level. `GetType` can also be used to dynamically check the incoming schema against a metadata repository. The repository can be loaded via a reference data set.
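As a hedged illustration of the set-level usage, a query like the following could route records whose `readingNum` is not numeric to the rejection output (`readingsToBeRejected` appears elsewhere in this article; the input name is assumed):

```SQL
-- Hedged sketch: GetType used in WHERE to reject records whose readingNum
-- did not arrive as a numeric type
SELECT *
INTO [readingsToBeRejected]
FROM [readings]
WHERE GetType(readingNum) != 'bigint'
  AND GetType(readingNum) != 'float'
```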
-[Unit-testing](/azure/stream-analytics/cicd-tools?tabs=visual-studio-code#automated-test) is a good practice to ensure our query is resilient. We'll build a series of tests that consist of input files and their expected output. Our query will have to match the output it generates to pass. In ASA, unit-testing is done via the [asa-streamanalytics-cicd](/azure/stream-analytics/cicd-tools?tabs=visual-studio-code#installation) npm module. Test cases with various malformed events should be created and tested in the deployment pipeline.
+[Unit-testing](./cicd-tools.md?tabs=visual-studio-code#automated-test) is a good practice to ensure our query is resilient. We'll build a series of tests that consist of input files and their expected output. Our query will have to match the output it generates to pass. In ASA, unit-testing is done via the [asa-streamanalytics-cicd](./cicd-tools.md?tabs=visual-studio-code#installation) npm module. Test cases with various malformed events should be created and tested in the deployment pipeline.
-Finally, we can do some light integration testing in VS Code. We can insert records into the SQL table via a [local run to a live output](/azure/stream-analytics/visual-studio-code-local-run-all).
+Finally, we can do some light integration testing in VS Code. We can insert records into the SQL table via a [local run to a live output](./visual-studio-code-local-run-all.md).
## Get support
stream-analytics Sql Database Upsert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/sql-database-upsert.md
Last updated 12/03/2021
# Update or merge records in Azure SQL Database with Azure Functions
-Currently, [Azure Stream Analytics](/azure/stream-analytics/) (ASA) only supports inserting (appending) rows to SQL outputs ([Azure SQL Databases](/azure/stream-analytics/sql-database-output), and [Azure Synapse Analytics](/azure/stream-analytics/azure-synapse-analytics-output)). This article discusses workarounds to enable UPDATE, UPSERT, or MERGE on SQL databases, with Azure Functions as the intermediary layer.
+Currently, [Azure Stream Analytics](./index.yml) (ASA) only supports inserting (appending) rows to SQL outputs ([Azure SQL Databases](./sql-database-output.md), and [Azure Synapse Analytics](./azure-synapse-analytics-output.md)). This article discusses workarounds to enable UPDATE, UPSERT, or MERGE on SQL databases, with Azure Functions as the intermediary layer.
Alternative options to Azure Functions are presented at the end.
This article shows how to use Azure Functions to implement Replace and Accumulat
## Azure Functions Output
-In our job, we'll replace the ASA SQL output by the [ASA Azure Functions output](/azure/stream-analytics/azure-functions-output). The UPDATE, UPSERT, or MERGE capabilities will be implemented in the function.
+In our job, we'll replace the ASA SQL output by the [ASA Azure Functions output](./azure-functions-output.md). The UPDATE, UPSERT, or MERGE capabilities will be implemented in the function.
-There are currently two options to access a SQL Database in a function. First is the [Azure SQL output binding](/azure/azure-functions/functions-bindings-azure-sql). It's currently limited to C#, and only offers replace mode. Second is to compose a SQL query to be submitted via the appropriate [SQL driver](/sql/connect/sql-connection-libraries) ([Microsoft.Data.SqlClient](https://github.com/dotnet/SqlClient) for .NET).
+There are currently two options to access a SQL Database in a function. The first is the [Azure SQL output binding](../azure-functions/functions-bindings-azure-sql.md). It's currently limited to C#, and offers only replace mode. The second is to compose a SQL query to be submitted via the appropriate [SQL driver](/sql/connect/sql-connection-libraries) ([Microsoft.Data.SqlClient](https://github.com/dotnet/SqlClient) for .NET).
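As a hedged illustration of the second option, the function could compose an upsert statement like the following and submit it through the driver. The table and column names are assumptions inferred from the sample schema and payload used in this article, and `@DeviceId`, `@Value`, `@Timestamp` stand for bound `SqlParameter` values:

```SQL
-- Hedged sketch: upsert composed in the function and submitted via the SQL driver;
-- table/column names are assumed, parameters would be bound client-side
MERGE [dbo].[device_updated] AS t
USING (VALUES (@DeviceId, @Value, @Timestamp)) AS s ([DeviceId], [Value], [Timestamp])
    ON t.[DeviceId] = s.[DeviceId]
WHEN MATCHED THEN
    UPDATE SET t.[Value] = s.[Value], t.[Timestamp] = s.[Timestamp]
WHEN NOT MATCHED THEN
    INSERT ([DeviceId], [Value], [Timestamp])
    VALUES (s.[DeviceId], s.[Value], s.[Timestamp]);
```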
For both samples below, we'll assume the following table schema. The binding option requires **a primary key** to be set on the target table. It's not necessary, but recommended, when using a SQL driver.
CONSTRAINT [PK_device_updated] PRIMARY KEY CLUSTERED
); ```
-A function has to meet the following expectations to be used as an [output from ASA](/azure/stream-analytics/azure-functions-output):
+A function has to meet the following expectations to be used as an [output from ASA](./azure-functions-output.md):
- Azure Stream Analytics expects HTTP status 200 from the Functions app for batches that were processed successfully - When Azure Stream Analytics receives a 413 ("HTTP Request Entity Too Large") exception from an Azure function, it reduces the size of the batches that it sends to the Azure function
A function has to meet the following expectations to be used as an [output from
## Option 1: Update by key with the Azure Function SQL Binding
-This option uses the [Azure Function SQL Output Binding](/azure/azure-functions/functions-bindings-azure-sql). This extension can replace an object in a table, without having to write a SQL statement. At this time, it doesn't support compound assignment operators (accumulations).
+This option uses the [Azure Function SQL Output Binding](../azure-functions/functions-bindings-azure-sql.md). This extension can replace an object in a table, without having to write a SQL statement. At this time, it doesn't support compound assignment operators (accumulations).
This sample was built on: -- Azure Functions runtime [version 4](/azure/azure-functions/functions-versions?tabs=in-process%2Cv4&pivots=programming-language-csharp)
+- Azure Functions runtime [version 4](../azure-functions/functions-versions.md?pivots=programming-language-csharp&tabs=in-process%2cv4)
- [.NET 6.0](/dotnet/core/whats-new/dotnet-6) - Microsoft.Azure.WebJobs.Extensions.Sql [0.1.131-preview](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql/0.1.131-preview) To better understand the binding approach, it's recommended to follow [this tutorial](https://github.com/Azure/azure-functions-sql-extension#quick-start).
-First, create a default HttpTrigger function app by following this [tutorial](/azure/azure-functions/create-first-function-vs-code-csharp?tabs=in-process). The following information will be used:
+First, create a default HttpTrigger function app by following this [tutorial](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process). The following information will be used:
- Language: `C#` - Runtime: `.NET 6` (under function/runtime v4)
You can now test the wiring between the local function and the database by debug
[{"DeviceId":3,"Value":13.4,"Timestamp":"2021-11-30T03:22:12.991Z"},{"DeviceId":4,"Value":41.4,"Timestamp":"2021-11-30T03:22:12.991Z"}] ```
-The function can now be [published](/azure/azure-functions/create-first-function-vs-code-csharp#publish-the-project-to-azure) to Azure. An [application setting](/azure/azure-functions/functions-how-to-use-azure-function-app-settings?tabs=portal#settings) should be set for `SqlConnectionString`. The Azure SQL **Server** firewall should [allow Azure services](/azure/azure-sql/database/network-access-controls-overview) in for the live function to reach it.
+The function can now be [published](../azure-functions/create-first-function-vs-code-csharp.md#publish-the-project-to-azure) to Azure. An [application setting](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) should be set for `SqlConnectionString`. The Azure SQL **Server** firewall should [allow Azure services](../azure-sql/database/network-access-controls-overview.md) so that the live function can reach it.
The function can then be defined as an output in the ASA job, and used to replace records instead of inserting them.
This option uses [Microsoft.Data.SqlClient](https://github.com/dotnet/SqlClient)
This sample was built on: -- Azure Functions runtime [version 4](/azure/azure-functions/functions-versions?tabs=in-process%2Cv4&pivots=programming-language-csharp)
+- Azure Functions runtime [version 4](../azure-functions/functions-versions.md?pivots=programming-language-csharp&tabs=in-process%2cv4)
- [.NET 6.0](/dotnet/core/whats-new/dotnet-6) - Microsoft.Data.SqlClient [4.0.0](https://www.nuget.org/packages/Microsoft.Data.SqlClient/)
-First, create a default HttpTrigger function app by following this [tutorial](/azure/azure-functions/create-first-function-vs-code-csharp?tabs=in-process). The following information will be used:
+First, create a default HttpTrigger function app by following this [tutorial](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process). The following information will be used:
- Language: `C#` - Runtime: `.NET 6` (under function/runtime v4)
You can now test the wiring between the local function and the database by debug
[{"DeviceId":3,"Value":13.4,"Timestamp":"2021-11-30T03:22:12.991Z"},{"DeviceId":4,"Value":41.4,"Timestamp":"2021-11-30T03:22:12.991Z"}] ```
-The function can now be [published](/azure/azure-functions/create-first-function-vs-code-csharp#publish-the-project-to-azure) to Azure. An [application setting](/azure/azure-functions/functions-how-to-use-azure-function-app-settings?tabs=portal#settings) should be set for `SqlConnectionString`. The Azure SQL **Server** firewall should [allow Azure services](/azure/azure-sql/database/network-access-controls-overview) in for the live function to reach it.
+The function can now be [published](../azure-functions/create-first-function-vs-code-csharp.md#publish-the-project-to-azure) to Azure. An [application setting](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) should be set for `SqlConnectionString`. The Azure SQL **Server** firewall should [allow Azure services](../azure-sql/database/network-access-controls-overview.md) so that the live function can reach it.
The function can then be defined as an output in the ASA job, and used to replace records instead of inserting them.
A background task will operate once the data is inserted in the database via the
For Azure SQL, `INSTEAD OF` [DML triggers](/sql/relational-databases/triggers/dml-triggers?view=azuresqldb-current&preserve-view=true) can be used to intercept the INSERT commands issued by ASA and replace them with UPDATEs.
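A hedged sketch of such a trigger, assuming a `device_updated` table keyed on `DeviceId` (all names here are assumptions for illustration). Note that an `INSTEAD OF` trigger is not fired recursively by its own `MERGE`, so the insert branch executes directly:

```SQL
-- Hedged sketch: intercept ASA's INSERTs and turn them into upserts
CREATE TRIGGER [dbo].[tr_device_updated_upsert]
ON [dbo].[device_updated]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    MERGE [dbo].[device_updated] AS t
    USING inserted AS s
        ON t.[DeviceId] = s.[DeviceId]
    WHEN MATCHED THEN
        UPDATE SET t.[Value] = s.[Value], t.[Timestamp] = s.[Timestamp]
    WHEN NOT MATCHED THEN
        INSERT ([DeviceId], [Value], [Timestamp])
        VALUES (s.[DeviceId], s.[Value], s.[Timestamp]);
END;
```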
-For Synapse SQL, ASA can insert into a [staging table](/azure/synapse-analytics/sql/data-loading-best-practices#load-to-a-staging-table). A recurring task can then transform the data as needed into an intermediary table. Finally the [data is moved](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition#partition-switching) to the production table.
+For Synapse SQL, ASA can insert into a [staging table](../synapse-analytics/sql/data-loading-best-practices.md#load-to-a-staging-table). A recurring task can then transform the data as needed into an intermediary table. Finally, the [data is moved](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition.md#partition-switching) to the production table.
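The final move can be a metadata-only partition switch, sketched here with assumed table names; both tables must share compatible partition schemes for the switch to succeed:

```SQL
-- Hedged sketch: metadata-only move of a fully loaded staging partition
-- into the production table (table names assumed)
ALTER TABLE [stg].[device_readings] SWITCH PARTITION 2
    TO [prod].[device_readings] PARTITION 2;
```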
### Pre-processing in Azure Cosmos DB
-Azure Cosmos DB [supports UPSERT natively](/azure/stream-analytics/stream-analytics-documentdb-output#upserts-from-stream-analytics). Here only append/replace is possible. Accumulations must be managed client-side in Cosmos DB.
+Azure Cosmos DB [supports UPSERT natively](./stream-analytics-documentdb-output.md#upserts-from-stream-analytics). Here, only append/replace is possible; accumulations must be managed client-side in Cosmos DB.
If the requirements match, an option is to replace the target SQL database by an Azure Cosmos DB instance. Doing so requires an important change in the overall solution architecture.
-For Synapse SQL, Cosmos DB can be used as an intermediary layer via [Azure Synapse Link for Azure Cosmos DB](/azure/cosmos-db/synapse-link). Synapse Link can be used to create an [analytical store](/azure/cosmos-db/analytical-store-introduction). This data store can then be queried directly in Synapse SQL.
+For Synapse SQL, Cosmos DB can be used as an intermediary layer via [Azure Synapse Link for Azure Cosmos DB](../cosmos-db/synapse-link.md). Synapse Link can be used to create an [analytical store](../cosmos-db/analytical-store-introduction.md). This data store can then be queried directly in Synapse SQL.
### Comparison of the alternatives
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
* [Use managed identities to access Azure SQL Database or Azure Synapse Analytics from an Azure Stream Analytics job](sql-database-output-managed-identity.md) * [Use reference data from a SQL Database for an Azure Stream Analytics job](sql-reference-data.md) * [Run Azure Functions in Azure Stream Analytics jobs - Tutorial for Redis output](stream-analytics-with-azure-functions.md)
-* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics Stream Analytics With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-with-azure-functions.md
Follow the [Real-time fraud detection](stream-analytics-real-time-fraud-detectio
1. See the [Create a function app](../azure-functions/functions-get-started.md) section of the Functions documentation. This sample was built on:
- - Azure Functions runtime [version 4](/azure/azure-functions/functions-versions?tabs=in-process%2Cv4&pivots=programming-language-csharp)
+ - Azure Functions runtime [version 4](../azure-functions/functions-versions.md?pivots=programming-language-csharp&tabs=in-process%2cv4)
- [.NET 6.0](/dotnet/core/whats-new/dotnet-6) - StackExchange.Redis [2.2.8](https://www.nuget.org/packages/StackExchange.Redis/)
-2. Create a default HttpTrigger function app in **Visual Studio Code** by following this [tutorial](/azure/azure-functions/create-first-function-vs-code-csharp?tabs=in-process). The following information will be used: language: `C#`, runtime: `.NET 6` (under function v4), template: `HTTP trigger`.
+2. Create a default HttpTrigger function app in **Visual Studio Code** by following this [tutorial](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process). The following information will be used: language: `C#`, runtime: `.NET 6` (under function v4), template: `HTTP trigger`.
3. Install the Redis client library by running the following command in a terminal located in the project folder:
Follow the [Real-time fraud detection](stream-analytics-real-time-fraud-detectio
When Stream Analytics receives the "HTTP Request Entity Too Large" exception from the function, it reduces the size of the batches it sends to functions. The following code ensures that Stream Analytics doesn't send oversized batches. Make sure that the maximum batch count and size values used in the function are consistent with the values entered in the Stream Analytics portal.
-5. The function can now be [published](/azure/azure-functions/create-first-function-vs-code-csharp#publish-the-project-to-azure) to Azure.
+5. The function can now be [published](../azure-functions/create-first-function-vs-code-csharp.md#publish-the-project-to-azure) to Azure.
-6. Open the function on the Azure Portal, and set [application settings](/azure/azure-functions/functions-how-to-use-azure-function-app-settings?tabs=portal#settings) for `RedisConnectionString` and `RedisDatabaseIndex`.
+6. Open the function in the Azure portal, and set [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) for `RedisConnectionString` and `RedisDatabaseIndex`.
## Update the Stream Analytics job with the function as output
In this tutorial, you have created a simple Stream Analytics job, that runs an A
> [!div class="nextstepaction"] > [Update or merge records in Azure SQL Database with Azure Functions](sql-database-upsert.md)
-> [Run JavaScript user-defined functions within Stream Analytics jobs](stream-analytics-javascript-user-defined-functions.md)
+> [Run JavaScript user-defined functions within Stream Analytics jobs](stream-analytics-javascript-user-defined-functions.md)
synapse-analytics Security White Paper Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-data-protection.md
Data is encrypted at rest and in transit.
By default, Azure Storage [automatically encrypts all data](../../storage/common/storage-service-encryption.md) using 256-bit Advanced Encryption Standard encryption (AES 256). It's one of the strongest block ciphers available and is FIPS 140-2 compliant. The platform manages the encryption key, and it forms the *first layer* of data encryption. This encryption applies to both user and system databases, including the **master** database.
-Enabling [Transparent Data Encryption](../../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) can add a *second layer* of data encryption. It performs real-time I/O encryption and decryption of database files, transaction logs files, and backups at rest without requiring any changes to the application. By default, it uses AES 256.
+Enabling [Transparent Data Encryption](../../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) can add a *second layer* of data encryption for dedicated SQL pools. It performs real-time I/O encryption and decryption of database files, transaction log files, and backups at rest without requiring any changes to the application. By default, it uses AES 256.
By default, TDE protects the database encryption key (DEK) with a built-in server certificate (service managed). There's an option to bring your own key (BYOK) that can be securely stored in [Azure Key Vault](../../key-vault/general/basic-concepts.md).
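Where TDE is not already on, it can be enabled per database with a single statement run against the logical server's `master` database (the database name below is an assumed example):

```SQL
-- Hedged sketch: enable TDE on a dedicated SQL pool database (database name assumed)
ALTER DATABASE [mySampleDataWarehouse] SET ENCRYPTION ON;
```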
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Consumption models in Synapse SQL enable you to use different database objects.
| | Dedicated | Serverless | | | | |
-| **Tables** | [Yes](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | No, the in-database tables are not supported. Serverless SQL pool can query only [external tables](develop-tables-external-tables.md?tabs=native) that reference data placed on [Azure Storage](#data-access) |
-| **Views** | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true). Views can use [query language elements](#query-language) that are available in dedicated model. | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true), you can create views over [external tables](develop-tables-external-tables.md?tabs=native) and other views. Views can use [query language elements](#query-language) that are available in serverless model. |
-| **Schemas** | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true), schemas are supported. |
-| **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | Temporary tables might be used just to store some information from the system views, literals, or other temp tables. UPDATE/DELETE on temp table is also supported. You can join temp tables with the system views. You cannot select data from an external table to insert it into temp table or join temp table with external table - these operations will fail because external data and temp-tables cannot be mixed in the same query. |
+| **Tables** | [Yes](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | No, the in-database tables are not supported. Serverless SQL pool can query only [external tables](develop-tables-external-tables.md?tabs=native) that reference data stored in [Azure Data Lake storage or Dataverse](#data-access). |
+| **Views** | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true). Views can use [query language elements](#query-language) that are available in the dedicated model. | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true), you can create views over [external tables](develop-tables-external-tables.md?tabs=native), queries that use the OPENROWSET function, and other views. Views can use [query language elements](#query-language) that are available in the serverless model. |
+| **Schemas** | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true), schemas are supported. Use schemas to isolate different tenants by placing each tenant's tables in a separate schema. |
+| **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | Temporary tables can be used to store information from the system views, literals, or other temporary tables. UPDATE/DELETE on temporary tables is also supported. You can join temporary tables with the system views. You cannot select data from an external table to insert into a temporary table, or join a temporary table with an external table - these operations will fail because external data and temporary tables cannot be mixed in the same query. |
| **User defined procedures** | [Yes](/sql/t-sql/statements/create-procedure-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, stored procedures can be placed in any user databases (not `master` database). Procedures can just read external data and use [query language elements](#query-language) that are available in serverless pool. | | **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions. Scalar user-defined functions are not supported. | | **Triggers** | No | No, serverless SQL pools do not allow changing data, so the triggers cannot react on data changes. |
-| **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | Yes, [external tables](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true) are available. See the supported [data formats](#data-formats). |
-| **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, resultset caching). In addition, Materialized View are supported | No, only the file statistics are cached. |
+| **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | Yes, [external tables](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true) are available and can be used to read data from [Azure Data Lake storage or Dataverse](#data-access). See the supported [data formats](#data-formats). |
+| **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, [resultset caching](../sql-data-warehouse/performance-tuning-result-set-caching.md)). In addition, materialized views are supported. | No, only the file statistics are cached. |
+| **Result set caching** | [Yes](../sql-data-warehouse/performance-tuning-result-set-caching.md) | No, the query results are not cached. Only the file statistics are cached. |
+| **Materialized views** | Yes | No, materialized views are not supported in serverless SQL pools. |
| **Table variables** | [No](/sql/t-sql/data-types/table-transact-sql?view=azure-sqldw-latest&preserve-view=true), use temporary tables | No, table variables are not supported. | | **[Table distribution](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md?context=/azure/synapse-analytics/context/context)** | Yes | No, table distributions are not supported. | | **[Table indexes](../sql-data-warehouse/sql-data-warehouse-tables-index.md?context=/azure/synapse-analytics/context/context)** | Yes | No, indexes are not supported. |
-| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | External tables do not support partitioning. You can partition files using Hive-partition folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) on folder partition structure, but external tables cannot be created on partitioned folders. |
+| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | External tables do not support partitioning. You can partition files using a Hive-partitioned folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) on the folder partition structure, but external tables cannot be created on partitioned folders. |
| **[Statistics](develop-tables-statistics.md)** | Yes | Yes, statistics are [created on external files](develop-tables-statistics.md#statistics-in-serverless-sql-pool). | | **Workload management, resource classes, and concurrency control** | Yes, see [workload management, resource classes, and concurrency control](../sql-data-warehouse/resource-classes-for-workload-management.md?context=/azure/synapse-analytics/context/context). | No, serverless SQL pool automatically manages the resources. | | **Cost control** | Yes, using scale-up and scale-down actions. | Yes, using [the Azure portal or T-SQL procedure](./data-processed.md#cost-control). |
Query languages used in Synapse SQL can have different supported features depend
| **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads. | | **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads.| | **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, merge Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. |
-| **CTAS statement** | Yes | No |
-| **CETAS statement** | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
-| **[Transactions](develop-transactions.md)** | Yes | Yes, applicable only on the meta-data objects. |
-| **[Labels](develop-label.md)** | Yes | No |
-| **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you can initially load data into an external table using CETAS statement. |
-| **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
+| **CTAS statement** | Yes | No, [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) statement is not supported in serverless SQL pool. |
+| **CETAS statement** | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). CETAS supports Parquet and CSV output formats. |
+| **[Transactions](develop-transactions.md)** | Yes | Yes, transactions apply only to the metadata objects. |
+| **[Labels](develop-label.md)** | Yes | No, labels are not supported. |
+| **Data load** | Yes. The preferred utility is the [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you cannot load data into the serverless SQL pool because data is stored on external storage. You can initially load data into an external table using the CETAS statement. |
+| **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. You can export data from external storage (Azure data lake, Dataverse, Cosmos DB) into Azure data lake using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
| **Types** | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL types are supported, except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Table type. See how to [map Parquet column types to SQL types here](develop-openrowset.md#type-mapping-for-parquet). |
-| **Cross-database queries** | No | Yes, 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. The queries can reference the serverless SQL databases or the Lake databases in the same workspace. |
-| **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions. |
-| **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions. |
+| **Cross-database queries** | No | Yes, 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. The queries can reference the serverless SQL databases or the Lake databases in the same workspace. Cross-workspace queries are not supported. |
+| **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, and [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions are supported. |
+| **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions are supported. |
| **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | `HASHBYTES` is the only supported cryptographic function in serverless SQL pools. |
-| **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions) are supported, except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true) |
-| **Built-in/system aggregates** | Transact-SQL built-in aggregates except, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15) are supported. |
-| **Operators** | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) except [!>](/sql/t-sql/language-elements/not-greater-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [!<](/sql/t-sql/language-elements/not-less-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) |
-| **Control of flow** | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) except [CONTINUE](/sql/t-sql/language-elements/continue-transact-sql?view=azure-sqldw-latest&preserve-view=true), [GOTO](/sql/t-sql/language-elements/goto-transact-sql?view=azure-sqldw-latest&preserve-view=true), [RETURN](/sql/t-sql/language-elements/return-transact-sql?view=azure-sqldw-latest&preserve-view=true), [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [WAITFOR](/sql/t-sql/language-elements/waitfor-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) SELECT query in `WHILE (...)` condition |
+| **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions) are supported, except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
+| **Built-in/system aggregates** | Transact-SQL built-in aggregates, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15&preserve-view=true) are supported. |
+| **Operators** | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) except [!>](/sql/t-sql/language-elements/not-greater-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [!<](/sql/t-sql/language-elements/not-less-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) are supported. |
+| **Control of flow** | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) except [CONTINUE](/sql/t-sql/language-elements/continue-transact-sql?view=azure-sqldw-latest&preserve-view=true), [GOTO](/sql/t-sql/language-elements/goto-transact-sql?view=azure-sqldw-latest&preserve-view=true), [RETURN](/sql/t-sql/language-elements/return-transact-sql?view=azure-sqldw-latest&preserve-view=true), [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [WAITFOR](/sql/t-sql/language-elements/waitfor-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All [Transact-SQL Control-of-flow statements](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) are supported. SELECT query in `WHILE (...)` condition is not supported. |
| **DDL statements (CREATE, ALTER, DROP)** | Yes. All Transact-SQL DDL statements applicable to the supported object types | Yes, all Transact-SQL DDL statements applicable to the supported object types are supported. |
## Security
Synapse SQL pools enable you to use built-in security features to secure your data.
| **SQL username/password authentication**| Yes | Yes, users can access serverless SQL pool using their usernames and passwords. |
| **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users can access serverless SQL pools using their Azure AD identities. |
| **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes, [Azure AD passthrough authentication](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) is applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for SQL users. |
-| **Storage SAS token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
-| **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, use SAS token instead of storage access key. |
+| **Storage shared access signature (SAS) token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential). |
+| **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, [use SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) instead of storage access key. |
| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, the query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. |
| **Storage Application identity/Service principal (SPN) authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. |
| **Server roles** | No | Yes, sysadmin, public, and other server roles are supported. |
-| **SERVER SCOPED CREDENTIAL** | No | Yes, the server scoped credentials are used by the `OPENROWSET` function that do not uses explicit data source. |
+| **SERVER SCOPED CREDENTIAL** | No | Yes, the [server scoped credentials](develop-storage-files-storage-access-control.md?tabs=user-identity#server-scoped-credential) are used by the `OPENROWSET` function when it does not use an explicit data source. |
| **Permissions - [Server-level](/sql/relational-databases/security/authentication-access/server-level-roles)** | No | Yes, for example, `CONNECT ANY DATABASE` and `SELECT ALL USER SECURABLES` enable a user to read data from any database. |
| **Database roles** | Yes | Yes, you can use `db_owner`, `db_datareader` and `db_ddladmin` roles. |
-| **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, used in external data sources. |
-| **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes |
-| **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema |
-| **Permissions - Object-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users | Yes, you can GRANT, DENY, and REVOKE permissions to users/logins on the system objects that are supported |
+| **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, database scoped credentials can be used in external data sources to [define storage authentication method](develop-storage-files-storage-access-control.md?tabs=user-identity#database-scoped-credential). |
+| **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes, you can grant, deny, or revoke permissions on the database objects. |
+| **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema. |
+| **Permissions - Object-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users | Yes, you can GRANT, DENY, and REVOKE permissions to users/logins on the system objects that are supported. |
| **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes, column-level security is supported in serverless SQL pools. |
-| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER', 'IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in variable that can be used in the query). |
| **Row-level security** | [Yes](/sql/relational-databases/security/row-level-security?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) | No built-in support. Use custom views as a [workaround](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). |
-| **Transparent Data Encryption (TDE)** | [Yes](../../azure-sql/database/transparent-data-encryption-tde-overview.md) | No |
-| **Data Discovery & Classification** | [Yes](../../azure-sql/database/data-discovery-and-classification-overview.md) | No |
-| **Vulnerability Assessment** | [Yes](../../azure-sql/database/sql-vulnerability-assessment.md) | No |
-| **Advanced Threat Protection** | [Yes](../../azure-sql/database/threat-detection-overview.md) | No |
+| **Data masking** | [Yes](../guidance/security-white-paper-access-control.md#dynamic-data-masking) | No, use wrapper SQL views that explicitly mask some columns as a workaround. |
+| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER`, `IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in a variable that can be used in the query). |
+| **Transparent Data Encryption (TDE)** | [Yes](../../azure-sql/database/transparent-data-encryption-tde-overview.md) | No, Transparent Data Encryption is not supported. |
+| **Data Discovery & Classification** | [Yes](../../azure-sql/database/data-discovery-and-classification-overview.md) | No, Data Discovery & Classification is not supported. |
+| **Vulnerability Assessment** | [Yes](../../azure-sql/database/sql-vulnerability-assessment.md) | No, Vulnerability Assessment is not available. |
+| **Advanced Threat Protection** | [Yes](../../azure-sql/database/threat-detection-overview.md) | No, Advanced Threat Protection is not supported. |
| **Auditing** | [Yes](../../azure-sql/database/auditing-overview.md) | Yes, [auditing is supported](../../azure-sql/database/auditing-overview.md) in serverless SQL pools. |
| **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes, the firewall rules can be set on the serverless SQL endpoint. |
| **[Private endpoint](../security/synapse-workspace-managed-private-endpoints.md)**| Yes | Yes, the private endpoint can be set on the serverless SQL pool. |
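The shared access signature and credential rows above can be sketched in T-SQL. This is a minimal sketch, not a definitive setup: the credential name, storage account, container, and SAS token below are all placeholders.

```sql
-- Sketch only: the storage account, container, and SAS token are placeholders.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Database scoped credential that holds a shared access signature token.
CREATE DATABASE SCOPED CREDENTIAL SasCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2021-06-08&ss=b&srt=co&sp=rl&sig=<signature>';

-- External data source that authenticates with the credential above.
CREATE EXTERNAL DATA SOURCE MyDataLake
WITH (
    LOCATION = 'https://<storage-account>.dfs.core.windows.net/<container>',
    CREDENTIAL = SasCredential
);
```

Queries and external tables that reference `MyDataLake` then authenticate with the SAS token instead of the caller's identity.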
You can use various tools to connect to Synapse SQL to query data.
| | | |
| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts can be used in Synapse Studio. Use SSMS or ADS instead of Synapse Studio if you are returning a large amount of data as a result. |
| **Power BI** | Yes | Yes, you can [use Power BI](tutorial-connect-power-bi-desktop.md) to create reports on serverless SQL pool. Import mode is recommended for reporting.|
-| **Azure Analysis Service** | Yes | Yes |
-| **Azure Data Studio (ADS)** | Yes | Yes, you can [use ADS](get-started-azure-data-studio.md)(version 1.18.0 or higher) to query serverless SQL pool. SQL scripts and SQL Notebooks are supported. |
-| **SQL Server Management Studio (SSMS)** | Yes | Yes, you can [use SSMS](get-started-ssms.md)(version 18.5 or higher) to query serverless SQL pool. |
+| **Azure Analysis Service** | Yes | Yes, you can load data into Azure Analysis Services using the serverless SQL pool. |
+| **Azure Data Studio (ADS)** | Yes | Yes, you can [use Azure Data Studio](get-started-azure-data-studio.md) (version 1.18.0 or higher) to query serverless SQL pool. SQL scripts and SQL Notebooks are supported. |
+| **SQL Server Management Studio (SSMS)** | Yes | Yes, you can [use SQL Server Management Studio](get-started-ssms.md) (version 18.5 or higher) to query serverless SQL pool. SSMS shows only the objects that are available in the serverless SQL pools. |
> [!NOTE]
> You can use SSMS to connect to serverless SQL pool and query. It is partially supported starting from version 18.5; you can use it only to connect and query.
Data that is analyzed can be stored on various storage types. The following table lists the supported storage types.
| | Dedicated | Serverless |
| | | |
-| **Internal storage** | Yes | No, data is placed in Azure Data Lake or Cosmos DB analytical storage. |
+| **Internal storage** | Yes | No, data is placed in Azure Data Lake or [Cosmos DB analytical storage](query-cosmos-db-analytical-store.md). |
| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. |
| **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. |
| **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
| **Dataverse** | No | Yes, you can read Dataverse tables using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
-| **Azure CosmosDB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use Spark pools to update the Cosmos DB transactional storage. |
-| **Azure CosmosDB analytical storage** | No | Yes, you can access Cosmos DB analytical storage using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) |
+| **Azure Cosmos DB transactional storage** | No | No, you cannot access Cosmos DB containers to update data or read data from the Cosmos DB transactional storage. Use [Spark pools to update the Cosmos DB](../synapse-link/how-to-query-analytical-store-spark.md) transactional storage. |
+| **Azure Cosmos DB analytical storage** | No | Yes, you can [query Cosmos DB analytical storage](query-cosmos-db-analytical-store.md) using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
| **Apache Spark tables (in workspace)** | No | Yes, serverless pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md). |
| **Apache Spark tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). |
| **Databricks tables (remote)** | No | No, serverless pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md). |
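As the table above notes, serverless SQL pool reads files in Azure Data Lake through external tables or the `OPENROWSET` function. A minimal `OPENROWSET` sketch, assuming a hypothetical storage account and folder layout (the URL below is a placeholder):

```sql
-- Sketch only: the storage URL below is a placeholder path.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
```

Without an explicit data source or credential, the query authenticates to storage with the caller's Azure AD identity (passthrough), as described in the security table above.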
Data that is analyzed can be stored in various storage formats. The following table lists the supported storage formats.
| | Dedicated | Serverless |
| | | |
| **Delimited** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can [query delimited files](query-single-csv-file.md). |
-| **CSV** | Yes (multi-character delimiters not supported) | Yes, you can [query CSV files](query-single-csv-file.md). |
-| **Parquet** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can [query CSV files](query-parquet-files.md), including the files with [nested types](query-parquet-nested-types.md) |
-| **Hive ORC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
-| **Hive RC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
-| **JSON** | Yes | Yes, you can [query JSON files](query-json-files.md) using delimited text format and JSON functions. |
-| **Avro** | No | No |
-| **[Delta Lake](https://delta.io/)** | No | [Yes](query-delta-lake-format.md), including files with [nested types](query-parquet-nested-types.md) |
-| **[CDM](/common-data-model/)** | No | No |
+| **CSV** | Yes (multi-character delimiters not supported) | Yes, you can [query CSV files](query-single-csv-file.md). For better performance, use PARSER_VERSION 2.0, which provides [faster parsing](develop-openrowset.md#fast-delimited-text-parsing). If you are appending rows to your CSV files, make sure that you [query the files as appendable](query-single-csv-file.md#querying-appendable-files). |
+| **Parquet** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can [query Parquet files](query-parquet-files.md), including the files with [nested types](query-parquet-nested-types.md). |
+| **Hive ORC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, serverless SQL pools cannot read Hive ORC format. |
+| **Hive RC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, serverless SQL pools cannot read Hive RC format. |
+| **JSON** | Yes | Yes, you can [query JSON files](query-json-files.md) using delimited text format and the T-SQL [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions. |
+| **Avro** | No | No, serverless SQL pools cannot read Avro format. |
+| **[Delta Lake](https://delta.io/)** | No | Yes, you can [query Delta Lake files](query-delta-lake-format.md), including the files with [nested types](query-parquet-nested-types.md). |
+| **[Common Data Model (CDM)](/common-data-model/)** | No | No, serverless SQL pool cannot read data stored using Common Data Model. |
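The CSV row above recommends PARSER_VERSION 2.0 for faster delimited-text parsing. A minimal sketch of the option in an `OPENROWSET` query, again with a placeholder storage URL:

```sql
-- Sketch only: the storage URL below is a placeholder path.
SELECT *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',  -- faster delimited-text parser
    HEADER_ROW = TRUE        -- treat the first row as column names
) AS rows;
```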
## Next steps
Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles:
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/whats-new.md
The following updates are new to Azure Synapse Analytics this month.
* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
-* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](/azure/synapse-analytics/database-designer/overview-map-data)
+* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](./database-designer/overview-map-data.md)
* Quick reuse of Spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live)
* External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md)
* Flowlets (Public Preview) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF10) [article](../data-factory/concepts-data-flow-flowlet.md)
## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
virtual-desktop Set Up Golden Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/set-up-golden-image.md
This article will walk you through how to use the Azure portal to create a custom image to use for your Azure Virtual Desktop session hosts. This custom image, which we'll call a "golden image," contains all apps and configuration settings you want to apply to your deployment. There are other approaches to customizing your session hosts, such as using device management tools like [Microsoft Endpoint Manager](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) or automating your image build using tools like [Azure Image Builder](../virtual-machines/windows/image-builder-virtual-desktop.md) with [Azure DevOps](/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops&preserve-view=true). Which strategy works best depends on the complexity and size of your planned Azure Virtual Desktop environment and your current application deployment processes.
## Create an image from an Azure VM
-When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](overview.md#supported-virtual-machine-os-images). We recommend using a Windows 10 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](/azure/virtual-machines/generation-2).
+When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](overview.md#supported-virtual-machine-os-images). We recommend using a Windows 10 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
### Take your first snapshot
First, [create the base VM](../virtual-machines/windows/quick-create-portal.md) for your chosen image. After you've deployed the image, take a snapshot of the disk of your image VM. Snapshots are save states that will let you roll back any changes if you run into problems while building the image. Since you'll be taking many snapshots throughout the build process, make sure to give the snapshot a name you can easily identify.
### Customize your VM
Some optional things you can do before running Sysprep:
- Clean up temp files in system storage
- Optimize drivers (defrag)
- Remove any user profiles
-Generalize the VM by running [sysprep](/azure/virtual-machines/generalize#windows.md).
+Generalize the VM by running [sysprep](../virtual-machines/generalize.md).
## Capture the VM
After you've completed sysprep and shut down your machine in the Azure portal, open the **VM** tab and select the **Capture** button to save the image for later use. When you capture a VM, you can either add the image to a shared image gallery or capture it as a managed image.
-The [Shared Image Gallery](/azure/virtual-machines/shared-image-galleries) lets you add features and use existing images in other deployments. Images from a Shared Image Gallery are highly-available, ensure easy versioning, and you can deploy them at scale. However, if you have a simpler deployment, you may want to use a standalone managed image instead.
+The [Shared Image Gallery](../virtual-machines/shared-image-galleries.md) lets you add features and use existing images in other deployments. Images from a Shared Image Gallery are highly-available, ensure easy versioning, and you can deploy them at scale. However, if you have a simpler deployment, you may want to use a standalone managed image instead.
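The portal flow above can also be scripted. A hedged Azure CLI sketch of both capture options, using hypothetical resource names (`myResourceGroup`, `myGoldenVM`, `myGallery`, and so on are placeholders, and the gallery image definition is assumed to already exist):

```shell
# Deallocate and mark the VM as generalized after Sysprep has run.
az vm deallocate --resource-group myResourceGroup --name myGoldenVM
az vm generalize --resource-group myResourceGroup --name myGoldenVM

# Option 1: capture a standalone managed image (V2 = Generation 2).
az image create --resource-group myResourceGroup --name myGoldenImage \
    --source myGoldenVM --hyper-v-generation V2

# Option 2: publish a version of that image into a Shared Image Gallery
# so it can be replicated, versioned, and deployed at scale.
az sig image-version create --resource-group myResourceGroup \
    --gallery-name myGallery --gallery-image-definition myImageDef \
    --gallery-image-version 1.0.0 --managed-image myGoldenImage
```

Either way, the source VM can't be used after capture, which is why the article recommends keeping snapshots to rebuild from.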
> [!IMPORTANT]
> We recommend using Shared Image Gallery images for production environments because of their enhanced capabilities, such as replication and image versioning.

When you create a capture, you'll need to delete the VM afterwards, as you'll no longer be able to use it after the capture process is finished. Don't try to capture the same VM twice, even if there's an issue with the capture. Instead, create a new VM from your latest snapshot, then run sysprep again.
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-overview.md
The portal collects the following input:
- **Data Collection**: Determines if the extension will collect telemetry. For more information, see [Azure DSC extension data collection](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/).
-- **Version**: Specifies the version of the DSC extension to install. For information about versions, see [DSC extension version history](/azure/automation/automation-dsc-extension-history).
+- **Version**: Specifies the version of the DSC extension to install. For information about versions, see [DSC extension version history](../../automation/automation-dsc-extension-history.md).
- **Auto Upgrade Minor Version**: This field maps to the **AutoUpdate** switch in the cmdlets and enables the extension to automatically update to the latest version during installation. **Yes** will instruct the extension handler to use the latest available version and **No** will force the **Version** specified to be installed. Selecting neither **Yes** nor **No** is the same as selecting **No**.
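The same inputs the portal collects can be supplied from the Azure CLI. A sketch under assumptions: the resource names and the configuration package URL are hypothetical, and the version number is only an example (check the version history linked above for current releases):

```shell
# Install the DSC extension (publisher Microsoft.Powershell, name DSC).
# --version pins the major.minor version to install; the CLI enables
# automatic minor-version upgrades by default, mirroring the portal's
# "Auto Upgrade Minor Version" = Yes behavior.
az vm extension set --resource-group myResourceGroup --vm-name myVM \
    --publisher Microsoft.Powershell --name DSC --version 2.77 \
    --settings '{"configuration": {"url": "https://example.com/MyConfig.ps1.zip", "script": "MyConfig.ps1", "function": "MyConfig"}}'
```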
Logs for the extension are stored in the following location: `C:\WindowsAzure\Lo
- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview).
- Examine the [Resource Manager template for the DSC extension](dsc-template.md).
- For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
-- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](dsc-credentials.md).
+- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](dsc-credentials.md).
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generation-2.md
For more information, see [Trusted launch](trusted-launch.md).
## Creating a generation 2 VM
### Azure Resource Manager Template
-To create a simple Windows Generation 2 VM, see [Create a Windows virtual machine from a Resource Manager template](https://docs.microsoft.com/azure/virtual-machines/windows/ps-template)
-To create a simple Linux Generation 2 VM, see [How to create a Linux virtual machine with Azure Resource Manager templates](https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-secured-vm-from-template)
+To create a simple Windows Generation 2 VM, see [Create a Windows virtual machine from a Resource Manager template](./windows/ps-template.md)
+To create a simple Linux Generation 2 VM, see [How to create a Linux virtual machine with Azure Resource Manager templates](./linux/create-ssh-secured-vm-from-template.md)
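Besides templates, a generation 2 VM can be created directly from the CLI by picking a Gen 2 marketplace SKU. A minimal sketch with hypothetical resource names; the image URN below is one real example of a Gen 2 SKU (note the `-gensecond` suffix), but verify availability in your region:

```shell
# Create a Windows Server 2019 generation 2 VM from a marketplace image.
az vm create --resource-group myResourceGroup --name myGen2VM \
    --image MicrosoftWindowsServer:WindowsServer:2019-datacenter-gensecond:latest \
    --admin-username azureuser
```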
### Marketplace image
You can also create generation 2 VMs by using virtual machine scale sets. In the
Learn more about [trusted launch](trusted-launch-portal.md) with Gen 2 VMs.
-Learn about [generation 2 virtual machines in Hyper-V](/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v).
+Learn about [generation 2 virtual machines in Hyper-V](/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v).