Updates from: 04/20/2022 01:12:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
To test the user flow:
### Create a password reset policy
-Custom policies are a set of XML files that you upload to your Azure AD B2C tenant to define user journeys. We provide starter packs that have several pre-built policies, including sign-up and sign-in, password reset, and profile editing policies. For more information, see [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+Custom policies are a set of XML files that you upload to your Azure AD B2C tenant to define user journeys. We provide [starter packs](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack) that have several pre-built policies, including sign-up and sign-in, password reset, and profile editing policies. For more information, see [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy).
::: zone-end
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
  * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Azure AD DS resources.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Azure AD DS resources.
Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
  * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Azure AD DS resources.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Azure AD DS resources.
* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain.

Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 07/26/2021 Last updated : 04/13/2022
# Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory
-As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD (AAD). This article describes how to build a SCIM endpoint and integrate with the AAD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
+As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD. This article describes how to build a SCIM endpoint and integrate with the Azure AD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
![Provisioning from Azure AD to an app with SCIM](media/use-scim-to-provision-users-and-groups/scim-provisioning-overview.png)
Automating provisioning to an application requires building and integrating
1. Design your user and group schema
- Identify the application's objects and attributes to determine how they map to the user and group schema supported by the AAD SCIM implementation.
+ Identify the application's objects and attributes to determine how they map to the user and group schema supported by the Azure AD SCIM implementation.
-1. Understand the AAD SCIM implementation
+1. Understand the Azure AD SCIM implementation
- Understand how the AAD SCIM client is implemented to model your SCIM protocol request handling and responses.
+ Understand how the Azure AD SCIM client is implemented to model your SCIM protocol request handling and responses.
1. Build a SCIM endpoint
- An endpoint must be SCIM 2.0-compatible to integrate with the AAD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
+ An endpoint must be SCIM 2.0-compatible to integrate with the Azure AD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
-1. Integrate your SCIM endpoint with the AAD SCIM client
+1. Integrate your SCIM endpoint with the Azure AD SCIM client
- If your organization uses a third-party application to implement a profile of SCIM 2.0 that AAD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
+ If your organization uses a third-party application to implement a profile of SCIM 2.0 that Azure AD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
-1. Publish your application to the AAD application gallery
+1. Publish your application to the Azure AD application gallery
Make it easy for customers to discover your application and configure provisioning.
There are several endpoints defined in the SCIM RFC. You can start with the `/Users` endpoint.
> [!NOTE]
> Use the `/Schemas` endpoint to support custom attributes or if your schema changes frequently as it enables a client to retrieve the most up-to-date schema automatically. Use the `/Bulk` endpoint to support groups.
-## Understand the AAD SCIM implementation
+## Understand the Azure AD SCIM implementation
-To support a SCIM 2.0 user management API, this section describes how the AAD SCIM client is implemented and shows how to model your SCIM protocol request handling and responses.
+To support a SCIM 2.0 user management API, this section describes how the Azure AD SCIM client is implemented and shows how to model your SCIM protocol request handling and responses.
> [!IMPORTANT]
> The behavior of the Azure AD SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md).
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specif
|Retrieve a known resource for a user or group created earlier|[section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
|Query users or groups|[section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
|The filter [excludedAttributes=members](#get-group) when querying the group resource|section 3.4.2.5|
-|Accept a single bearer token for authentication and authorization of AAD to your application.||
+|Accept a single bearer token for authentication and authorization of Azure AD to your application.||
|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user should not be returned is when it is hard deleted from the application.|
|Support the /Schemas endpoint|[section 7](https://tools.ietf.org/html/rfc7643#page-30) The schema discovery endpoint will be used to discover additional attributes.|
+|Support listing users and paginating|[section 3.4.2.4](https://datatracker.ietf.org/doc/html/rfc7644#section-3.4.2.4).|
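The pagination row added above maps to [RFC 7644 section 3.4.2.4](https://datatracker.ietf.org/doc/html/rfc7644#section-3.4.2.4). As a minimal sketch (the host, counts, and totals below are illustrative, not from the article), a paged list exchange looks like:
```
GET /scim/Users?startIndex=1&count=100 HTTP/1.1
Authorization: Bearer <token>

HTTP/1.1 200 OK
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
  "totalResults": 250,
  "startIndex": 1,
  "itemsPerPage": 100,
  "Resources": [ ... ]
}
```
The client advances `startIndex` by `itemsPerPage` until `totalResults` is exhausted.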
-Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with AAD:
+Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with Azure AD:
##### General:
* `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero members.
-* Values sent should be stored in the same format as what the were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data should not happen between data being sent by Azure AD and data being stored in the SCIM application. (e.g. A phone number sent as 55555555555 should not be saved/returned as +5 (555) 555-5555)
+* Values should be stored in the same format they were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data should not happen between data being sent by Azure AD and data being stored in the SCIM application. (For example, a phone number sent as 55555555555 should not be saved or returned as +5 (555) 555-5555.)
* It isn't necessary to include the entire resource in the **PATCH** response.
-* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). AAD emits the values of `op` as **Add**, **Replace**, and **Remove**.
-* Microsoft AAD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
+* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Azure AD emits the values of `op` as **Add**, **Replace**, and **Remove**.
+* Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
* Support HTTPS on your SCIM endpoint.
-* Custom complex and multivalued attributes are supported but AAD does not have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes are not well supported at this time.
-* The "type" sub-attribute values of multivalued complex attributes must be unique. For example, there can not be two different email addresses with the "work" sub-type.
+* Custom complex and multivalued attributes are supported but Azure AD does not have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes is not well supported at this time.
+* The "type" sub-attribute values of multivalued complex attributes must be unique. For example, there cannot be two different email addresses with the "work" sub-type.
##### Retrieving Resources:
* Response to a query/filter request should always be a `ListResponse`.
-* Microsoft AAD only uses the following operators: `eq`, `and`
+* Azure AD only uses the following operators: `eq` and `and`.
* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com); see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md).
##### /Users:
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
* If a value is not present, do not send null values.
* Property values should be camel cased (e.g. readWrite).
* Must return a list response.
-* The /schemas request will be made by the Azure AD SCIM client every time someone saves the provisioning configuration in the Azure Portal or every time a user lands on the edit provisioning page in the Azure Portal. Any additional attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to additional target attributes being added. It will not result in attributes being removed.
+* The /schemas request will be made by the Azure AD SCIM client every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Any additional attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to additional target attributes being added. It will not result in attributes being removed.
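As a rough sketch of that discovery exchange (the response is heavily abbreviated; a real `/Schemas` document per [RFC 7643 section 7](https://tools.ietf.org/html/rfc7643#page-30) enumerates every attribute):
```
GET /scim/Schemas HTTP/1.1
Authorization: Bearer <token>

HTTP/1.1 200 OK
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
  "totalResults": 1,
  "Resources": [
    {
      "id": "urn:ietf:params:scim:schemas:core:2.0:User",
      "name": "User",
      "attributes": [
        { "name": "userName", "type": "string", "multiValued": false, "required": true }
      ]
    }
  ]
}
```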
### User provisioning and deprovisioning
-The following illustration shows the messages that AAD sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
+The following illustration shows the messages that Azure AD sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
![Shows the user provisioning and deprovisioning sequence](media/use-scim-to-provision-users-and-groups/scim-figure-4.png)<br/>
*User provisioning and deprovisioning sequence*
### Group provisioning and deprovisioning
-Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that AAD sends to a SCIM service to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
+Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that Azure AD sends to a SCIM service to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
* Requests to retrieve groups specify that the members attribute is to be excluded from any resource provided in response to the request.
* Requests to determine whether a reference attribute has a certain value are requests about the members attribute.
Group provisioning and deprovisioning are optional. When implemented and enabled
*Group provisioning and deprovisioning sequence*
### SCIM protocol requests and responses
-This section provides example SCIM requests emitted by the AAD SCIM client and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
+This section provides example SCIM requests emitted by the Azure AD SCIM client and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
> [!IMPORTANT]
-> To understand how and when the AAD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
+> To understand how and when the Azure AD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
[User Operations](#user-operations) - [Create User](#create-user) ([Request](#request) / [Response](#response))
TLS 1.2 Cipher Suites minimum bar:
### IP Ranges
The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. Note that you will need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
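As a hedged sketch of that programmatic retrieval (the subscription ID and location are placeholders, and the `api-version` value may differ; check the linked API reference):
```
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/locations/{location}/serviceTags?api-version=2021-02-01
Authorization: Bearer <ARM token>
```
Filter the returned tag list for the `AzureActiveDirectory` entry to get the address prefixes to allow.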
-Azure AD also supports an agent based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening an inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md).
+Azure AD also supports an agent-based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md).
## Build a SCIM endpoint
private string GenerateJSONWebToken()
***Example 1. Query the service for a matching user***
-Azure Active Directory (AAD) queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in AAD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, wherein jyoung is a sample of a mailNickname of a user in Azure Active Directory.
+Azure Active Directory queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in Azure AD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, where `jyoung` is a sample mailNickname of a user in Azure Active Directory.
>[!NOTE]
-> This is an example only. Not all users will have a mailNickname attribute, and the value a user has may not be unique in the directory. Also, the attribute used for matching (which in this case is `externalId`) is configurable in the [AAD attribute mappings](customize-application-attributes.md).
+> This is an example only. Not all users will have a mailNickname attribute, and the value a user has may not be unique in the directory. Also, the attribute used for matching (which in this case is `externalId`) is configurable in the [Azure AD attribute mappings](customize-application-attributes.md).
```
GET https://.../scim/Users?filter=externalId eq jyoung HTTP/1.1
```
In the sample query, for a user with a given value for the `externalId` attribut
***Example 2. Provision a user***
-If the response to a query to the web service for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then AAD requests that the service provision a user corresponding to the one in AAD. Here is an example of such a request:
+If the response to a query to the web service for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then Azure AD requests that the service provision a user corresponding to the one in Azure AD. Here is an example of such a request:
```
POST https://.../scim/Users HTTP/1.1
```
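The request body is elided in this digest; as a rough sketch, a provisioning payload built from the SCIM core user schema looks like the following (all attribute values are illustrative, reusing the `jyoung` sample; the real request mirrors whatever attribute mappings are configured):
```
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "externalId": "jyoung",
  "userName": "jyoung@contoso.com",
  "active": true,
  "name": {
    "givenName": "Jane",
    "familyName": "Young"
  },
  "emails": [
    { "type": "work", "value": "jyoung@contoso.com", "primary": true }
  ]
}
```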
In the example of a request to update a user, the object provided as the value o
***Example 6. Deprovision a user***
-To deprovision a user from an identity store fronted by an SCIM service, AAD sends a request such as:
+To deprovision a user from an identity store fronted by a SCIM service, Azure AD sends a request such as:
```
DELETE ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
```
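Per [RFC 7644](https://tools.ietf.org/html/rfc7644), a successful delete conventionally returns an empty response (the article's response example is elided in this digest):
```
HTTP/1.1 204 No Content
```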
The object provided as the value of the resourceIdentifier argument has these pr
* ResourceIdentifier.Identifier: "54D382A4-2050-4C03-94D1-E769F1D15682"
* ResourceIdentifier.SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
-## Integrate your SCIM endpoint with the AAD SCIM client
+## Integrate your SCIM endpoint with the Azure AD SCIM client
-Azure AD can be configured to automatically provision assigned users and groups to applications that implement a specific profile of the [SCIM 2.0 protocol](https://tools.ietf.org/html/rfc7644). The specifics of the profile are documented in [Understand the Azure AD SCIM implementation](#understand-the-aad-scim-implementation).
+Azure AD can be configured to automatically provision assigned users and groups to applications that implement a specific profile of the [SCIM 2.0 protocol](https://tools.ietf.org/html/rfc7644). The specifics of the profile are documented in [Understand the Azure AD SCIM implementation](#understand-the-azure-ad-scim-implementation).
Check with your application provider, or your application provider's documentation, for statements of compatibility with these requirements.
Applications that support the SCIM profile described in this article can be conn
**To connect an application that supports SCIM:**
-1. Sign in to the [AAD portal](https://aad.portal.azure.com). Note that you can get access a free trial for Azure Active Directory with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program)
+1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). Note that you can get a free trial of Azure Active Directory with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program).
1. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
1. Select **+ New application** > **+ Create your own application**.
1. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
Once the initial cycle has started, you can select **Provisioning logs** in the
> [!NOTE]
> The initial cycle takes longer to perform than later syncs, which occur approximately every 40 minutes as long as the service is running.
-## Publish your application to the AAD application gallery
+## Publish your application to the Azure AD application gallery
If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. This will make it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
### Gallery onboarding checklist
Use the checklist to onboard your application quickly and ensure customers have a smooth deployment experience. The information will be gathered from you when onboarding to the gallery.
> [!div class="checklist"]
-> * Support a [SCIM 2.0](#understand-the-aad-scim-implementation) user and group endpoint (Only one is required but both are recommended)
+> * Support a [SCIM 2.0](#understand-the-azure-ad-scim-implementation) user and group endpoint (Only one is required but both are recommended)
> * Support at least 25 requests per second per tenant to ensure that users and groups are provisioned and deprovisioned without delay (Required)
> * Establish engineering and support contacts to guide customers post gallery onboarding (Required)
> * 3 non-expiring test credentials for your application (Required)
The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho
|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Not supported for gallery and non-gallery apps. Support is in our backlog.|
> [!NOTE]
-> It's not recommended to leave the token field blank in the AAD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
+> It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
### OAuth code grant flow
Best practices (recommended, but not required):
1. When the provisioning cycle begins, the service checks if the current access token is valid and exchanges it for a new token if needed. The access token is provided in each request made to the app, and its validity is checked before each request.
> [!NOTE]
-> While it's not possible to setup OAuth on the non-gallery applications, you can manually generate an access token from your authorization server and input it as the secret token to a non-gallery application. This allows you to verify compatibility of your SCIM server with the AAD SCIM client before onboarding to the app gallery, which does support the OAuth code grant.
+> While it's not possible to set up OAuth on the non-gallery applications, you can manually generate an access token from your authorization server and input it as the secret token to a non-gallery application. This allows you to verify compatibility of your SCIM server with the Azure AD SCIM client before onboarding to the app gallery, which does support the OAuth code grant.
**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long-lived OAuth bearer token that an administrator can use to set up the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
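However the token is obtained, the provisioning service presents it on every SCIM call as a standard bearer header; a minimal sketch (the token and filter values are truncated placeholders):
```
GET https://.../scim/Users?filter=userName eq "jyoung@contoso.com" HTTP/1.1
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```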
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
You must also meet the following system requirements:
- [Windows Server 2016](https://support.microsoft.com/help/4534307/windows-10-update-kb4534307)
- [Windows Server 2019](https://support.microsoft.com/help/4534321/windows-10-update-kb4534321)
-- AES256_HMAC_SHA1 must be enabled when **Network security: Configure encryption types allowed for Kerberos** policy is [configured](https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) on domain controllers.
+- AES256_HMAC_SHA1 must be enabled when **Network security: Configure encryption types allowed for Kerberos** policy is [configured](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) on domain controllers.
- Have the credentials required to complete the steps in the scenario:
  - An Active Directory user who is a member of the Domain Admins group for a domain and a member of the Enterprise Admins group for a forest. Referred to as **$domainCred**.
A FIDO2 Windows login looks for a writable DC to exchange the user TGT. As long
## Next steps
-[Learn more about passwordless authentication](concept-authentication-passwordless.md)
+[Learn more about passwordless authentication](concept-authentication-passwordless.md)
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Previously updated : 02/08/2022 Last updated : 04/19/2022
Administrators can select published authentication contexts in their Conditional
For more information about authentication context use in applications, see the following articles.
-- [Microsoft Information Protection sensitivity labels to protect SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
+- [Use sensitivity labels to protect content in Microsoft Teams, Microsoft 365 groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites)
- [Microsoft Defender for Cloud Apps](/cloud-app-security/session-policy-aad#require-step-up-authentication-authentication-context)
- [Custom applications](../develop/developer-guide-conditional-access-authentication-context.md)
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Previously updated : 11/05/2021 Last updated : 03/21/2022
# Conditional Access: User risk-based Conditional Access
-Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md#user-linked-detections).
+Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md).
There are two locations where this policy may be configured: Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method, providing more context, including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to use other Conditional Access attributes in the policy.
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
-1. Under **Conditions** > **User risk**, set **Configure** to **Yes**. Under **Configure user risk levels needed for policy to be enforced** select **High**, then select **Done**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require password change**, and select **Select**.
-1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Under **Conditions** > **User risk**, set **Configure** to **Yes**.
+ 1. Under **Configure user risk levels needed for policy to be enforced**, select **High**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**.
+ 1. Select **Grant access**, **Require password change**.
+ 1. Select **Select**.
+1. Confirm your settings, and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.

After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
-## Enable through Identity Protection
-
-1. Sign in to the **Azure portal**.
-1. Select **All services**, then browse to **Azure AD Identity Protection**.
-1. Select **User risk policy**.
-1. Under **Assignments**, select **Users**.
- 1. Under **Include**, select **All users**.
- 1. Under **Exclude**, select **Select excluded users**, choose your organization's emergency access or break-glass accounts, and select **Select**.
- 1. Select **Done**.
-1. Under **Conditions**, select **User risk**, then choose **High**.
- 1. Select **Select**, then **Done**.
-1. Under **Controls** > **Access**, choose **Allow access**, and then select **Require password change**.
- 1. Select **Select**.
-1. Set **Enforce Policy** to **On**.
-1. Select **Save**.
## Next steps
[Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Previously updated : 11/05/2021 Last updated : 03/21/2022
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
-1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**
+1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**.
 1. Select **High** and **Medium**.
 1. Select **Done**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Under **Access controls** > **Grant**.
+ 1. Select **Grant access**, **Require multi-factor authentication**.
+ 1. Select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.

After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
-## Enable through Identity Protection
-
-1. Sign in to the **Azure portal**.
-1. Select **All services**, then browse to **Azure AD Identity Protection**.
-1. Select **Sign-in risk policy**.
-1. Under **Assignments**, select **Users**.
- 1. Under **Include**, select **All users**.
- 1. Under **Exclude**, select **Select excluded users**, choose your organization's emergency access or break-glass accounts, and select **Select**.
- 1. Select **Done**.
-1. Under **Conditions**, select **Sign-in risk**, then choose **Medium and above**.
- 1. Select **Select**, then **Done**.
-1. Under **Controls** > **Access**, choose **Allow access**, and then select **Require multi-factor authentication**.
- 1. Select **Select**.
-1. Set **Enforce Policy** to **On**.
-1. Select **Save**.
## Next steps
[Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
For examples, see [Configure an app to trust a GitHub repo](workload-identity-fe
Run the following command to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters:
-- *issuer* is your service account issuer URL (the [OIDC issuer URL](/azure/aks/cluster-configuration#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
- *name* is the name of the federated credential, which cannot be changed later.
- *audiences* lists the audiences that can appear in the 'aud' claim of the external token. This field is mandatory, and defaults to "api://AzureADTokenExchange".
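The command itself is truncated in this digest; one way to express the same configuration is a direct Microsoft Graph call — shown here as a sketch with placeholder IDs and illustrative issuer/subject values, assuming the beta `federatedIdentityCredentials` endpoint rather than whatever exact command the article uses:
```
POST https://graph.microsoft.com/beta/applications/{object-id}/federatedIdentityCredentials
Content-Type: application/json

{
  "name": "kubernetes-federated-credential",
  "issuer": "https://oidc.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/",
  "subject": "system:serviceaccount:my-namespace:my-service-account",
  "audiences": ["api://AzureADTokenExchange"]
}
```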
Select the **Kubernetes accessing Azure resources** scenario from the dropdown m
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields:
-- **Cluster issuer URL** is the [OIDC issuer URL](/azure/aks/cluster-configuration#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
- **Namespace** is the service account namespace.
- **Name** is the name of the federated credential, which cannot be changed later.
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
You can put a subscription into the **Deprovisioned** state to be deleted in thr
If you have an active or cancelled Azure subscription associated with your Azure AD tenant, you can't delete the Azure AD tenant. After you cancel, billing is stopped immediately. However, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data.
-- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The Delete subscription option isn't available until three days after you cancel your subscription. For more details please read through [Delete free trial or pay-as-you-go subscriptions](https://docs.microsoft.com/azure/cost-management-billing/manage/cancel-azure-subscription#delete-free-trial-or-pay-as-you-go-subscriptions).
-- All other subscription types are deleted only through the [subscription cancellation](https://docs.microsoft.com/azure/cost-management-billing/manage/cancel-azure-subscription#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.
-- Alternatively, you can also move/transfer the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Additionally, perfoming Switch Directory on the subscription would not help as the billing would still be aligned with Azure AD Tenant which was used to sign up for the subscription. For more information review [Transfer a subscription to another Azure AD tenant account](https://docs.microsoft.com/azure/cost-management-billing/manage/billing-subscription-transfer#transfer-a-subscription-to-another-azure-ad-tenant-account)
+- If you have a free trial or pay-as-you-go subscription, you don't have to wait 90 days for the subscription to automatically delete. You can delete your subscription three days after you cancel it. The Delete subscription option isn't available until three days after you cancel your subscription. For more details please read through [Delete free trial or pay-as-you-go subscriptions](../../cost-management-billing/manage/cancel-azure-subscription.md#delete-free-trial-or-pay-as-you-go-subscriptions).
+- All other subscription types are deleted only through the [subscription cancellation](../../cost-management-billing/manage/cancel-azure-subscription.md#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.
+- Alternatively, you can also move/transfer the Azure subscription to another Azure AD tenant account. When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. Additionally, performing Switch Directory on the subscription would not help as the billing would still be aligned with the Azure AD tenant which was used to sign up for the subscription. For more information, review [Transfer a subscription to another Azure AD tenant account](../../cost-management-billing/manage/billing-subscription-transfer.md#transfer-a-subscription-to-another-azure-ad-tenant-account).
Once you have cancelled and deleted all the Azure and Office/Microsoft 365 subscriptions, you can proceed with cleaning up the remaining resources within the Azure AD tenant before actually deleting it.
You can put a self-service sign-up product like Microsoft Power BI or Azure Righ
## Next steps
-[Azure Active Directory documentation](../index.yml)
+[Azure Active Directory documentation](../index.yml)
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Previously updated : 11/19/2021 Last updated : 04/19/2022
# Assign sensitivity labels to Microsoft 365 groups in Azure Active Directory
-Azure Active Directory (Azure AD) supports applying sensitivity labels published by the [Microsoft 365 compliance center](https://sip.protection.office.com/homepage) to Microsoft 365 groups. Sensitivity labels apply to group across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
+Azure Active Directory (Azure AD) supports applying sensitivity labels published by the [Microsoft Purview compliance portal](https://compliance.microsoft.com) to Microsoft 365 groups. Sensitivity labels apply to groups across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
> [!IMPORTANT]
> To configure this feature, there must be at least one active Azure Active Directory Premium P1 license in your Azure AD organization.
After you enable this feature, the “classic” classifications for groups will
The sensitivity label option is only displayed for groups when all the following conditions are met:
-1. Labels are published in the Microsoft 365 Compliance Center for this Azure AD organization.
+1. Labels are published in the Microsoft Purview compliance portal for this Azure AD organization.
1. The feature is enabled, and EnableMIPLabels is set to True in the Azure AD PowerShell module.
1. Labels are synchronized to Azure AD with the Execute-AzureAdLabelSync cmdlet in the Security & Compliance PowerShell module. It can take up to 24 hours after synchronization for the label to be available to Azure AD.
1. The group is a Microsoft 365 group.
Please make sure all the conditions are met in order to assign labels to a group
If the label you are looking for is not in the list, this could be the case for one of the following reasons:
-- The label might not be published in the Microsoft 365 Compliance Center. This could also apply to labels that are no longer published. Please check with your administrator for more information.
+- The label might not be published in the Microsoft Purview compliance portal. This could also apply to labels that are no longer published. Please check with your administrator for more information.
- The label may be published; however, it is not available to the signed-in user. Please check with your administrator for more information on how to get access to the label.
### How to change the label on a group
Labels can be swapped at any time using the same steps as assigning a label to a
### Group setting changes to published labels aren't updated on the groups
-When you make changes to group settings for a published label in [Microsoft 365 compliance center](https://sip.protection.office.com/homepage), those policy changes aren't automatically applied on the labeled groups. Once the sensitivity label is published and applied to groups, Microsoft recommend that you not change the group settings for the label in Microsoft 365 Compliance Center.
+When you make changes to group settings for a published label in the [Microsoft Purview compliance portal](https://compliance.microsoft.com), those policy changes aren't automatically applied on the labeled groups. Once the sensitivity label is published and applied to groups, Microsoft recommends that you not change the group settings for the label in the Microsoft Purview compliance portal.
If you must make a change, use an [Azure AD PowerShell script](https://github.com/microsoftgraph/powershell-aad-samples/blob/master/ReassignSensitivityLabelToO365Groups.ps1) to manually apply updates to the impacted groups. This method makes sure that all existing groups enforce the new setting.
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Before you sign up for Active Directory Premium 1 or Premium 2, you must first d
Signing up using your Azure subscription with previously purchased and activated Azure AD licenses automatically activates the licenses in the same directory. If that's not the case, you must still activate your license plan and your Azure AD access. For more information about activating your license plan, see [Activate your new license plan](#activate-your-new-license-plan). For more information about activating your Azure AD access, see [Activate your Azure AD access](#activate-your-azure-ad-access).
## Sign up using your existing Azure or Microsoft 365 subscription
-As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [Buy or remove licenses](https://docs.microsoft.com/microsoft-365/commerce/licenses/buy-licenses?view=o365-worldwide).
+As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [Buy or remove licenses](/microsoft-365/commerce/licenses/buy-licenses?view=o365-worldwide).
## Sign up using your Enterprise Mobility + Security licensing plan
Enterprise Mobility + Security is a suite composed of Azure AD Premium, Azure Information Protection, and Microsoft Intune. If you already have an EMS license, you can get started with Azure AD, using one of these licensing options:
After your purchased licenses are provisioned in your directory, you'll receive
The activation process typically takes only a few minutes and then you can use your Azure AD tenant.
## Next steps
-Now that you have Azure AD Premium, you can [customize your domain](add-custom-domain.md), add your [corporate branding](customize-branding.md), [create a tenant](active-directory-access-create-new-tenant.md), and [add groups](active-directory-groups-create-azure-portal.md) and [users](add-users-azure-active-directory.md).
+Now that you have Azure AD Premium, you can [customize your domain](add-custom-domain.md), add your [corporate branding](customize-branding.md), [create a tenant](active-directory-access-create-new-tenant.md), and [add groups](active-directory-groups-create-azure-portal.md) and [users](add-users-azure-active-directory.md).
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
Azure Active Directory (Azure AD) is a cloud-based identity and access management service. This service helps your employees access external resources, such as Microsoft 365, the Azure portal, and thousands of other SaaS applications. Azure Active Directory also helps them access internal resources like apps on your corporate intranet network, along with any cloud apps developed for your own organization. For more information about creating a tenant for your organization, see [Quickstart: Create a new tenant in Azure Active Directory](active-directory-access-create-new-tenant.md).
-To learn the differences between Azure Active Directory and Azure Active Directory, see [Compare Active Directory to Azure Active Directory](active-directory-compare-azure-ad-to-ad.md). You can also refer [Microsoft Cloud for Enterprise Architects Series](/microsoft-365/solutions/cloud-architecture-models) posters to better understand the core identity services in Azure like Azure AD and Microsoft-365.
+To learn the differences between Active Directory and Azure Active Directory, see [Compare Active Directory to Azure Active Directory](active-directory-compare-azure-ad-to-ad.md). You can also refer to the [Microsoft Cloud for Enterprise Architects Series](/microsoft-365/solutions/cloud-architecture-models) posters to better understand the core identity services in Azure like Azure AD and Microsoft 365.
## Who uses Azure AD?
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
You can restore the issuance transform rules using the suggested steps below
## Best practice for securing and monitoring the AD FS trust with Azure AD
When you federate your AD FS with Azure AD, it is critical that the federation configuration (trust relationship configured between AD FS and Azure AD) is monitored closely, and any unusual or suspicious activity is captured. To do so, we recommend setting up alerts and getting notified whenever any changes are made to the federation configuration. To learn how to set up alerts, see [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md).
-If you are using cloud Azure MFA, for multi factor authentication, with federated users, we highly recommend enabling additional security protection. This security protection prevents bypassing of cloud Azure MFA when federated with Azure AD. When enabled, for a federated domain in your Azure AD tenant, it ensures that a bad actor cannot bypass Azure MFA by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, `federatedIdpMfaBehavior`.For additional information see [Best practices for securing Active Directory Federation Services](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-mfa-when-federated-with-azure-ad)
+If you are using cloud Azure MFA for multi-factor authentication with federated users, we highly recommend enabling additional security protection. This security protection prevents bypassing of cloud Azure MFA when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a bad actor cannot bypass Azure MFA by imitating that a multi-factor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting `federatedIdpMfaBehavior`. For additional information, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-mfa-when-federated-with-azure-ad).
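As a hedged sketch of enabling that setting (assuming the Microsoft Graph beta `internalDomainFederation` resource; the domain name and configuration ID are placeholders, and value names should be confirmed against the linked guidance):
```
PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/{configuration-id}
Content-Type: application/json

{
  "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"
}
```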
## Next steps
-* [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
+* [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
active-directory Plan Hybrid Identity Design Considerations Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md
Previously updated : 04/07/2020 Last updated : 04/18/2022
Over the years the directory integration tools have grown and evolved.
-- [FIM](/previous-versions/windows/desktop/forefront-2010/ff182370(v=vs.100)) and [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016) are still supported and primarily enable synchronization between on-premises systems. The [FIM Windows Azure AD Connector](/previous-versions/mim/dn511001(v=ws.10)) is supported in both FIM and MIM, but not recommended for new deployments - customers with on-premises sources such as Notes or SAP HCM should use MIM to populate Active Directory Domain Services (AD DS) and then also use either Azure AD Connect sync or Azure AD Connect cloud provisioning to synchronize from AD DS to Azure AD.
+- [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016) is still supported, and primarily enables synchronization from or between on-premises systems. The [FIM Windows Azure AD Connector](/previous-versions/mim/dn511001(v=ws.10)) is deprecated. Customers with on-premises sources such as Notes or SAP HCM should use MIM in one of two topologies.
+ - If users and groups are needed in Active Directory Domain Services (AD DS), then use MIM to populate users and groups into AD DS, and use either Azure AD Connect sync or Azure AD Connect cloud provisioning to synchronize those users and groups from AD DS to Azure AD.
+ - If users and groups are not needed in AD DS, then use MIM to populate users and groups into Azure AD through the [MIM Graph connector](/microsoft-identity-manager/microsoft-identity-manager-2016-connector-graph).
- [Azure AD Connect sync](how-to-connect-sync-whatis.md) incorporates the components and functionality previously released in DirSync and Azure AD Sync, for synchronizing between AD DS forests and Azure AD. - [Azure AD Connect cloud provisioning](../cloud-sync/what-is-cloud-sync.md) is a new Microsoft agent for syncing from AD DS to Azure AD, useful for scenarios such as mergers and acquisitions where the acquired company's AD forests are isolated from the parent company's AD forests.
-To learn more about the differences between Azure AD Connect sync and Azure AD Connect cloud provisioning, see the article [What is Azure AD Connect cloud provisioning?](../cloud-sync/what-is-cloud-sync.md)
+To learn more about the differences between Azure AD Connect sync and Azure AD Connect cloud provisioning, see the article [What is Azure AD Connect cloud provisioning?](../cloud-sync/what-is-cloud-sync.md). For more information on deployment options with multiple HR sources or directories, see the article [parallel and combined identity infrastructure options](../fundamentals/azure-active-directory-parallel-identity-options.md).
## Next steps Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Bug fixes - Fixed an issue where some sync rule functions were not parsing surrogate pairs properly.
+ - Fixed an issue where, under certain circumstances, the sync service would not start due to a model db corruption. You can read more about the model db corruption issue in [this article](/troubleshoot/azure/active-directory/resolve-model-database-corruption-sqllocaldb).
## 2.0.91.0
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 01/24/2022 Last updated : 04/15/2022 -+
Identity Protection provides organizations access to powerful resources to see a
## Risk types and detection
-Risk can be detected at the **User** and **Sign-in** level and two types of detection or calculation **Real-time** and **Offline**.
+Risk can be detected at the **User** and **Sign-in** levels, with two types of detection or calculation: **Real-time** and **Offline**. Some risks are considered premium and are available to Azure AD Premium P2 customers only, while others are available to Free and Azure AD Premium P1 customers.
+
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Risky activity that isn't linked to a specific malicious sign-in, but to the user account itself, can also be detected.
Real-time detections may not show up in reporting for five to 10 minutes. Offline detections may not show up in reporting for 48 hours. > [!NOTE]
-> Our system may detect that the risk event that contributed to the risk user risk score was a false positives or the user risk was remediated with policy enforcement such as completing an MFA prompt or secure password change. Therefore our system will dismiss the risk state and a risk detail of "AI confirmed sign-in safe" will surface and it will no longer contribute to the user's risk.
-
-### User-linked detections
+> Our system may detect that the risk event that contributed to the user risk score was a false positive, or that the user risk was remediated with policy enforcement such as completing multi-factor authentication or a secure password change. In that case, our system will dismiss the risk state, a risk detail of "AI confirmed sign-in safe" will surface, and the event will no longer contribute to the user's risk.
-Risky activity can be detected for a user that is not linked to a specific malicious sign-in but to the user itself.
+### Premium detections
-These risks are calculated offline using Microsoft's internal and external threat intelligence sources including security researchers, law enforcement professionals, security teams at Microsoft, and other trusted sources.
+Premium detections are visible only to Azure AD Premium P2 customers. Customers without Azure AD Premium P2 licenses still receive the detections, but they'll be titled "additional risk detected".
-| Risk detection | Description |
-| | |
-| Leaked credentials | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they are checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
-| Azure AD threat intelligence | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
-| Possible attempt to access Primary Refresh Token (PRT)| This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This is a low-volume detection that will be infrequently seen by most organizations. However, when it does occur it is high risk and users should be remediated.|
### Sign-in risk
-A sign-in risk represents the probability that a given authentication request is not authorized by the identity owner.
-
-These risks can be calculated in real-time or calculated offline using Microsoft's internal and external threat intelligence sources including security researchers, law enforcement professionals, security teams at Microsoft, and other trusted sources.
+#### Premium sign-in risk detections
| Risk detection | Detection type | Description |
| --- | --- | --- |
-| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, and so on) for potentially malicious intent. |
| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. |
-| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. ***NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there is a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this as an indicator of potential token replay*. |
+| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. <br><br> **NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this as an indicator of potential token replay. |
| Token Issuer Anomaly | Offline | This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |
| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached. |
| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. |
-| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that is not already in the list of familiar locations. Newly created users will be in "learning mode" for a while where unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity and after a secure password reset. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
-| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
+| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history to look for anomalous sign-ins. The system stores information about previous sign-ins, and triggers a risk detection when a sign-in occurs with properties that are unfamiliar to the user. These properties can include IP, ASN, location, device, browser, and tenant IP subnet. Newly created users will be in a "learning mode" period where the unfamiliar sign-in properties risk detection is turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols don't have modern properties such as client ID, there's limited telemetry to reduce false positives. We recommend that customers move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection occurs on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
| Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |
| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
| Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been performed. |
These risks can be calculated in real-time or calculated offline using Microsoft
| New country | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization. |
| Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. |
| Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. |
-| Azure AD threat intelligence | Offline | This risk detection type indicates sign-in activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection profiles your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information. |
+#### Nonpremium sign-in risk detections
+
+| Risk detection | Detection type | Description |
+| --- | --- | --- |
+| Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
+| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, and so on) for potentially malicious intent. |
+| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
+| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
+
+### User-linked detections
+
+#### Premium user risk detections
+
+| Risk detection | Detection type | Description |
+| --- | --- | --- |
+| Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |
-### Other risk detections
+#### Nonpremium user risk detections
-| Risk detection | Detection type | Description |
+| Risk detection | Detection type | Description |
| --- | --- | --- |
-| Additional risk detected | Real-time or Offline | This detection indicates that one of the above premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they are titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
+| Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
+| Leaked credentials | Offline | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they're checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
+| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
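For reference, the detections described above can also be enumerated programmatically through the Microsoft Graph `riskDetections` API. A minimal sketch, assuming the Microsoft.Graph PowerShell module and a tenant licensed for the detections being queried; the filter value is only an example:

```powershell
# Sketch: list risk detections surfaced by Identity Protection.
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

# Pull, for example, offline leaked-credentials detections
Get-MgRiskDetection -Filter "riskEventType eq 'leakedCredentials'" |
    Select-Object UserPrincipalName, RiskEventType, RiskLevel, RiskState, DetectedDateTime
```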
## Common questions ### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there is no active indication that the user's identity has been compromised.
+Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity Protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure them to trigger upon the **No risk** level. No risk means there's no active indication that the user's identity has been compromised.
-While Microsoft does not provide specific details about how risk is calculated, we will say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
+While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
### Password hash synchronization
Microsoft finds leaked credentials in various places, including:
#### Why am I not seeing any leaked credentials?
-Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs is not done.
+Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs isn't done.
#### Why haven't I seen any leaked credential risk events for quite some time?
-If you have not seen any leaked credential risk events, it is because of the following reasons:
+If you haven't seen any leaked credential risk events, it's because of one of the following reasons:
-- You do not have PHS enabled for your tenant.
+- You don't have PHS enabled for your tenant.
- Microsoft has not found any leaked credential pairs that match your users. #### How often does Microsoft process new credentials?
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 01/24/2022 Last updated : 03/18/2022
Both policies work to automate the response to risk detections in your environme
## Choosing acceptable risk levels
-Organizations must decide the level of risk they are willing to accept balancing user experience and security posture.
+Organizations must decide the level of risk they're willing to accept, balancing user experience and security posture.
Microsoft's recommendation is to set the user risk policy threshold to **High** and the sign-in risk policy to **Medium and above**, and to allow self-remediation options. Choosing to block access rather than allowing self-remediation options, like password change and multi-factor authentication, will impact your users and administrators. Weigh this choice when configuring your policies.
Organizations can choose to block access when risk is detected. Blocking sometim
- When a user risk policy triggers: - Administrators can require a secure password reset, requiring Azure AD MFA be done before the user creates a new password with SSPR, resetting the user risk. - When a sign-in risk policy triggers:
- - Azure AD MFA can be triggered, allowing to user to prove it is them by using one of their registered authentication methods, resetting the sign-in risk.
+ - Azure AD MFA can be triggered, allowing the user to prove it's them by using one of their registered authentication methods, resetting the sign-in risk.
> [!WARNING] > Users must register for Azure AD MFA and SSPR before they face a situation requiring remediation. Users not registered are blocked and require administrator intervention.
Organizations can choose to block access when risk is detected. Blocking sometim
## Exclusions
-Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they are still applicable.
+Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they're still applicable.
## Enable policies
-There are two locations where these policies may be configured, Conditional Access and Identity Protection. Configuration using Conditional Access policies is the preferred method, providing more context including:
+There are two locations where these policies may be configured, Conditional Access and Identity Protection. Configuration using Conditional Access policies is the preferred method, providing more context including:
- Enhanced diagnostic data - Report-only mode integration - Graph API support - Use more Conditional Access attributes in policy
+Organizations can choose to deploy policies using the steps outlined below or using the [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates-preview).
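Because these policies support the Graph API, they can also be scripted. A minimal sketch of the sign-in risk policy described in this article, created in report-only mode; it assumes the Microsoft.Graph PowerShell module, and the display name and excluded break-glass account ID are placeholders:

```powershell
# Sketch: a sign-in risk Conditional Access policy in report-only mode.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Sign-in risk - require MFA (report-only)"
    state       = "enabledForReportingButNotEnforced"   # report-only while assessing impact
    conditions  = @{
        users            = @{
            includeUsers = @("All")
            excludeUsers = @("<break-glass-account-object-id>")   # placeholder exclusion
        }
        applications     = @{ includeApplications = @("All") }
        signInRiskLevels = @("high", "medium")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```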
+ > [!VIDEO https://www.youtube.com/embed/zEsbbik-BTE]
-Before enabling remediation policies, organizations may want to [investigate](howto-identity-protection-investigate-risk.md) and [remediate](howto-identity-protection-remediate-unblock.md) any active risks.
+Before organizations enable remediation policies, they may want to [investigate](howto-identity-protection-investigate-risk.md) and [remediate](howto-identity-protection-remediate-unblock.md) any active risks.
### User risk with Conditional Access
active-directory How To Managed Identity Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-managed-identity-regional-move.md
+
+ Title: Move managed identities to another region - Azure AD
+description: Steps involved in getting a managed identity recreated in another region
+
+documentationcenter:
++
+editor:
++++
+ na
+ Last updated : 04/13/2022+++++
+# Move managed identity for Azure resources across regions
+
+There are situations in which you'd want to move your existing user-assigned managed identities from one region to another. For example, you may need to move a solution that uses user-assigned managed identities to another region. You may also want to move an existing identity to another region as part of disaster recovery planning and testing.
+
+Moving user-assigned managed identities across Azure regions is not supported. You can, however, recreate a user-assigned managed identity in the target region.
+
+## Prerequisites
+
+- Permissions to list the permissions granted to the existing user-assigned managed identity.
+- Permissions to grant a new user-assigned managed identity the required permissions.
+- Permissions to assign a new user-assigned identity to the Azure resources.
+- Permissions to edit Group membership, if your user-assigned managed identity is a member of one or more groups.
+
+## Prepare and move
+
+1. Copy the permissions assigned to the existing user-assigned managed identity. You can list [Azure role assignments](../../role-based-access-control/role-assignments-list-powershell.md), but that may not be enough depending on how permissions were granted to the user-assigned managed identity. You should confirm that your solution doesn't depend on permissions granted using a service-specific option. A scripted sketch of steps 1 through 3 follows this list.
+1. Create a [new user-assigned managed identity](how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#create-a-user-assigned-managed-identity-2) at the target region.
+1. Grant the managed identity the same permissions as the original identity that it's replacing, including Group membership. You can review [Assign Azure roles to a managed identity](../../role-based-access-control/role-assignments-portal-managed-identity.md), and [Group membership](../../active-directory/fundamentals/active-directory-groups-view-azure-portal.md).
+1. Specify the newly created user-assigned managed identity in the properties of the resource instance that will use it.
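A scripted sketch of steps 1 through 3, assuming the Az PowerShell modules (Az.ManagedServiceIdentity and Az.Resources); the resource groups, identity name, and region are placeholders. Note this only copies Azure role assignments; group membership and any service-specific grants still need to be recreated separately:

```powershell
# Sketch: recreate a user-assigned managed identity in a target region
# and re-grant its Azure role assignments. Names below are placeholders.
$old = Get-AzUserAssignedIdentity -ResourceGroupName "rg-source" -Name "id-app"

# Step 1: capture the Azure role assignments held by the existing identity
$assignments = Get-AzRoleAssignment -ObjectId $old.PrincipalId

# Step 2: create the replacement identity in the target region
$new = New-AzUserAssignedIdentity -ResourceGroupName "rg-target" -Name "id-app" -Location "westus2"

# Step 3: re-grant each role at the same scope to the new identity
foreach ($a in $assignments) {
    New-AzRoleAssignment -ObjectId $new.PrincipalId `
        -RoleDefinitionName $a.RoleDefinitionName -Scope $a.Scope
}
```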
+
+## Verify
+
+After reconfiguring your service to use your new managed identities in the target region, you need to confirm that all operations have been restored.
+
+## Clean up
+
+Once you confirm your service is back online, you can proceed to delete any resources in the source region that you no longer use.
+
+## Next steps
+
+In this tutorial, you took the steps needed to recreate a user-assigned managed identity in a new region.
+
+- [Manage user-assigned managed identities](how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-powershell#delete-a-user-assigned-managed-identity-2)
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Media services | [Managed identities](/azure/media-services/latest/concept-managed-identities) |
| Azure Monitor | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md?tabs=portal) |
| Azure Policy | [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) |
-| Azure Purview | [Credentials for source authentication in Azure Purview](../../purview/manage-credentials.md) |
+| Microsoft Purview | [Credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md) |
| Azure Resource Mover | [Move resources across regions (from resource group)](../../resource-mover/move-region-within-resource-group.md) |
| Azure Site Recovery | [Replicate machines with private endpoints](../../site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md#enable-the-managed-identity-for-the-vault) |
| Azure Search | [Set up an indexer connection to a data source using a managed identity](../../search/search-howto-managed-identities-data-sources.md) |
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | --- | --- | --- |
-> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator)<br>[Groups Administrator](../roles/permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor)| |
+> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator)<br>[Groups Administrator](../roles/permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor)| |
> | Perform all Azure AD Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | |
> | Read all configuration | Reader on Azure subscription containing AD DS service | |
You can further restrict permissions by assigning roles at smaller scopes or by
- [Assign Azure AD roles to users](manage-roles-portal.md) - [Assign Azure AD roles at different scopes](assign-roles-different-scopes.md) - [Create and assign a custom role in Azure Active Directory](custom-create.md)-- [Azure AD built-in roles](permissions-reference.md)
+- [Azure AD built-in roles](permissions-reference.md)
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md
keywords:
Previously updated : 11/04/2021 Last updated : 04/19/2022
Evaluate the accounts that are assigned or eligible for the Global Administrator
#### Turn on multi-factor authentication and register all other highly privileged single-user non-federated administrator accounts
-Require Azure AD Multi-Factor Authentication (MFA) at sign-in for all individual users who are permanently assigned to one or more of the Azure AD administrator roles: Global Administrator, Privileged Role Administrator, Exchange Administrator, and SharePoint Administrator. Use the guide to enable [Multi-factor Authentication (MFA) for your administrator accounts](../authentication/howto-mfa-userstates.md) and ensure that all those users have registered at [https://aka.ms/mfasetup](https://aka.ms/mfasetup). More information can be found under step 2 and step 3 of the guide [Protect access to data and services in Microsoft 365](https://support.office.com/article/Protect-access-to-data-and-services-in-Office-365-a6ef28a4-2447-4b43-aae2-f5af6d53c68e).
+Require Azure AD Multi-Factor Authentication (MFA) at sign-in for all individual users who are permanently assigned to one or more of the Azure AD administrator roles: Global Administrator, Privileged Role Administrator, Exchange Administrator, and SharePoint Administrator. Use the guidance at [Enforce multifactor authentication on your administrators](../authentication/how-to-authentication-find-coverage-gaps.md#enforce-multifactor-authentication-on-your-administrators) and ensure that all those users have registered at [https://aka.ms/mfasetup](https://aka.ms/mfasetup). More information can be found under step 2 and step 3 of the guide [Protect user and device access in Microsoft 365](/microsoft-365/compliance/protect-access-to-data-and-services).
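To see which of those privileged users still lack registration, the registration details report that the coverage-gaps guidance builds on can also be queried directly. A minimal sketch, assuming the Microsoft.Graph PowerShell module:

```powershell
# Sketch: find admins who haven't registered for MFA yet.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgReportAuthenticationMethodUserRegistrationDetail `
    -Filter "isAdmin eq true and isMfaRegistered eq false" |
    Select-Object UserPrincipalName, IsAdmin, IsMfaRegistered
```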
## Stage 2: Mitigate frequently used attacks
active-directory Cisco Webex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-tutorial.md
Previously updated : 11/01/2021 Last updated : 04/18/2022
In this tutorial, you'll learn how to integrate Cisco Webex Meetings with Azure
* Enable your users to be automatically signed-in to Cisco Webex Meetings with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. - ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Cisco Webex Meetings single sign-on (SSO) enabled subscription. * Service Provider Metadata file from Cisco Webex Meetings.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure AD built-in roles](../roles/permissions-reference.md).
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Cisco Webex Meetings supports [**Automated** user provisioning and deprovisioning](cisco-webex-provisioning-tutorial.md) (recommended). * Cisco Webex Meetings supports **Just In Time** user provisioning.
-## Adding Cisco Webex Meetings from the gallery
+## Add Cisco Webex Meetings from the gallery
To configure the integration of Cisco Webex Meetings into Azure AD, you need to add Cisco Webex Meetings from the gallery to your list of managed SaaS apps.
To configure and test Azure AD SSO with Cisco Webex Meetings, perform the follow
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-
1. **[Configure Cisco Webex Meetings SSO](#configure-cisco-webex-meetings-sso)** - to configure the single sign-on settings on application side.
- * **[Create Cisco Webex Meetings test user](#create-cisco-webex-meetings-test-user)** - to have a counterpart of B.Simon in Cisco Webex Meetings that is linked to the Azure AD representation of user.
-
+ 1. **[Create Cisco Webex Meetings test user](#create-cisco-webex-meetings-test-user)** - to have a counterpart of B.Simon in Cisco Webex Meetings that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
> You will get the Service Provider Metadata file from **Configure Cisco Webex Meetings SSO** section, which is explained later in the tutorial. 1. If you wish to configure the application in **SP** initiated mode, perform the following steps:
- 1. On the **Basic SAML Configuration** section, click the edit/pen icon.
+ 1. On the **Basic SAML Configuration** section, click the pencil icon.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Sign in to Cisco Webex Meetings with your administrator credentials. 1. Go to **Common Site Settings** and navigate to **SSO Configuration**.
- ![Screenshot shows Cisco Webex Administration with Common Site Settings and S S O Configuration selected.](./media/cisco-webex-tutorial/tutorial-cisco-webex-11.png)
+ ![Screenshot shows Cisco Webex Administration with Common Site Settings and S S O Configuration selected.](./media/cisco-webex-tutorial/settings.png)
1. On the **Webex Administration** page, perform the following steps:
- ![Screenshot shows the Webex Administration page with the information described in this step.](./media/cisco-webex-tutorial/tutorial-cisco-webex-10.png)
+ ![Screenshot shows the Webex Administration page with the information described in this step.](./media/cisco-webex-tutorial/metadata.png)
- 1. select **SAML 2.0** as **Federation Protocol**.
+ 1. Select **SAML 2.0** as **Federation Protocol**.
1. Click on **Import SAML Metadata** link to upload the metadata file, which you have downloaded from Azure portal. 1. Select **SSO Profile** as **IDP initiated** and click on **Export** button to download the Service Provider Metadata file and upload it in the **Basic SAML Configuration** section on Azure portal.
- 1. In the **AuthContextClassRef** textbox, type one of the following values:
- * `urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified`
- * `urn:oasis:names:tc:SAML:2.0:ac:classes:Password`
-
- To enable the MFA by using Azure AD, enter the two values like this:
- `urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport;urn:oasis:names:tc:SAML:2.0:ac:classes:X509`
- 1. Select **Auto Account Creation**. > [!NOTE]
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Cisco Webex Meetings you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Cisco Webex Meetings you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Integrating a BIG-IP with Azure AD provides many benefits, including:
* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](../manage-apps/f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
Prior BIG-IP experience isn't necessary, but you need: * An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+* An [SSL Web certificate](../manage-apps/f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing Oracle EBS suite including Oracle AccessGate and an LDAP-enabled OID (Oracle Internet Directory)
Along with this the SAML federation metadata for the published application is al
If the BIG-IP webtop portal is used to access published applications, then a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used; the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](/azure/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button#peoplesoft-single-logout) for using BIG-IP irules to achieve this. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+If making a change to the app is a no-go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](../manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details on using BIG-IP iRules are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
For increased security, organizations using this pattern could also consider blo
## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](/azure/active-directory/manage-apps/f5-big-ip-header-advanced). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](../manage-apps/f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
The following command from a bash shell validates the APM service account used f
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub \
  -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
active-directory F5 Big Ip Oracle Jd Edwards Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-jd-edwards-easy-button.md
Integrating a BIG-IP with Azure AD provides many benefits, including:
* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](../manage-apps/f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
Prior BIG-IP experience isn't necessary, but you need: * An Azure AD free subscription or above
* An Azure AD free subscription or above
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-bigip-deployment-guide)
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](../manage-apps/f5-bigip-deployment-guide.md)
* Any of the following F5 BIG-IP license SKUs
Prior BIG-IP experience isn't necessary, but you need: * An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+* An [SSL Web certificate](../manage-apps/f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing Oracle JDE environment
Along with this the SAML federation metadata for the published application is al
If the BIG-IP webtop portal is used to access published applications, then a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used; the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this would be to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
-If making a change to the app is a no go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](/azure/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button#peoplesoft-single-logout) for using BIG-IP irules to achieve this. More details on using BIG-IP iRules to achieve this is available in the F5 knowledge article [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+If making a change to the app is a no-go, then consider having the BIG-IP listen for the application's sign-out call, and upon detecting the request have it trigger SLO. Refer to our [Oracle PeopleSoft SLO guidance](../manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details on using BIG-IP iRules are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
For increased security, organizations using this pattern could also consider blo
## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](/azure/active-directory/manage-apps/f5-big-ip-header-advanced). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](../manage-apps/f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-sap-erp-easy-button.md
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefi
* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](../manage-apps/f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
Prior BIG-IP experience isn't necessary, but you will need: * An Azure AD free subscription or above
* An Azure AD free subscription or above
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-bigip-deployment-guide)
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](../manage-apps/f5-bigip-deployment-guide.md)
* Any of the following F5 BIG-IP license offers
Prior BIG-IP experience isn't necessary, but you will need: * An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+* An [SSL Web certificate](../manage-apps/f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
* An existing SAP ERP environment configured for Kerberos authentication
Easy Button provides a set of pre-defined application templates for Oracle Peopl
When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
-As our example AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced) for cases where you have multiple domains or users log in using an alternate suffix.
+As our example AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](../manage-apps/f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users log in using an alternate suffix.
![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
For increased security, organizations using this pattern could also consider blo
## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced).
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for kerberos-based SSO](../manage-apps/f5-big-ip-kerberos-advanced.md).
Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help you determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Fortisase Sia Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortisase-sia-tutorial.md
Previously updated : 03/25/2022 Last updated : 04/13/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/metadata`
+ a. In the **Identifier (Entity ID)** text box, type a URL using one of the following patterns:
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/login`
+ | User | URL |
+ ||--|
+ | For FortiSASE VPN User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/metadata` |
+ | For FortiSASE SWG User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com:7831/XX/YY/ZZ/saml/metadata` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | User | URL |
+ ||--|
+ | For FortiSASE VPN User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/login` |
+ | For FortiSASE SWG User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com:7831/XX/YY/ZZ/saml/login` |
- c. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/login`
+ c. In the **Sign on URL** text box, type a URL using one of the following patterns:
+
+ | User | URL |
+ ||--|
+ | For FortiSASE VPN User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/login` |
+ | For FortiSASE SWG User SSO | `https://<TENANTHOSTNAME>.edge.prod.fortisase.com:7831/XX/YY/ZZ/login` |
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [FortiSASE Client support team](mailto:fgc@fortinet.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. On the FortiSASE portal, go to **Configuration > VPN User SSO** or **Configuration > SWG User SSO** to find the service provider URLs. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
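If you prefer to script these values rather than set them in the portal, the following Azure CLI sketch shows one way to do it. The app object ID and `<TENANTHOSTNAME>` are placeholders, and the exact parameter names (for example, `--reply-urls` versus `--web-redirect-uris`) depend on your CLI version.

```azurecli
# Sketch only: set the SAML Identifier and Reply URL on the app registration.
# <app-object-id> and <TENANTHOSTNAME> are placeholders for your own values.
az ad app update --id <app-object-id> \
    --identifier-uris "https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/metadata" \
    --reply-urls "https://<TENANTHOSTNAME>.edge.prod.fortisase.com/remote/saml/login"
```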
1. FortiSASE application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure FortiSASE SSO
-To configure single sign-on on **FortiSASE** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [FortiSASE support team](mailto:fgc@fortinet.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to your FortiSASE company site as an administrator.
+
+1. Go to **Configuration > VPN User SSO** or **Configuration > SWG User SSO** depending on the FortiSASE mode used.
+
+1. In the **Configure Identity Provider** section, copy the following URLs and paste them into the **Basic SAML Configuration** section in the Azure portal.
+
+ ![Screenshot that shows the Configuration](./media/fortisase-tutorial/general.png "Configuration")
+
+1. In the **Configure Service Provider** section, perform the following steps:
+
+ ![Screenshot that shows Service Provider configuration](./media/fortisase-tutorial/certificate.png "Service Provider")
+
+ a. In the **IdP Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ b. In the **IdP Single Sign-On URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **IdP Single Log-Out URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ d. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and copy its content into the **IdP Certificate** textbox.
+
+1. Review and submit the configuration.
### Create FortiSASE test user
-In this section, a user called Britta Simon is created in FortiSASE. FortiSASE supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in FortiSASE, a new one is created after authentication.
+FortiSASE supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section.
## Test SSO
active-directory Per Angusta Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/per-angusta-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Per Angusta'
+description: Learn how to configure single sign-on between Azure Active Directory and Per Angusta.
+ Last updated : 04/06/2022
+# Tutorial: Azure AD SSO integration with Per Angusta
+
+In this tutorial, you'll learn how to integrate Per Angusta with Azure Active Directory (Azure AD). When you integrate Per Angusta with Azure AD, you can:
+
+* Control in Azure AD who has access to Per Angusta.
+* Enable your users to be automatically signed-in to Per Angusta with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Per Angusta single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Per Angusta supports **SP** initiated SSO.
+
+## Add Per Angusta from the gallery
+
+To configure the integration of Per Angusta into Azure AD, you need to add Per Angusta from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Per Angusta** in the search box.
+1. Select **Per Angusta** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Per Angusta
+
+Configure and test Azure AD SSO with Per Angusta using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Per Angusta.
+
+To configure and test Azure AD SSO with Per Angusta, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Per Angusta SSO](#configure-per-angusta-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Per Angusta test user](#create-per-angusta-test-user)** - to have a counterpart of B.Simon in Per Angusta that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Per Angusta** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `<SUBDOMAIN>.per-angusta.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.per-angusta.com/saml/consume`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.per-angusta.com/saml/init`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Per Angusta Client support team](mailto:support@per-angusta.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Per Angusta.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Per Angusta**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Per Angusta SSO
+
+1. Log in to your Per Angusta company site as an administrator.
+
+1. Go to the **Administration** tab.
+
+ ![Screenshot that shows the Admin Account](./media/per-angusta-tutorial/users.png "Account")
+
+1. In the left-side menu under **CONFIGURATION**, click **SSO SAML**.
+
+ ![Screenshot that shows the Configuration](./media/per-angusta-tutorial/general.png "Configuration")
+
+1. Perform the following steps in the configuration page:
+
+ ![Screenshot that shows the metadata](./media/per-angusta-tutorial/certificate.png "Metadata")
+
+ ![Screenshot that shows the SSO SAML Certificate](./media/per-angusta-tutorial/claims.png "SAML Certificate")
+
+ 1. Copy the **Reply URL** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy the **Entity ID** value and paste it into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy the **SAML initialization URL** value and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Select the **Active** checkbox to enable SSO before you test the connection.
+
+ 1. In the **XML URL** textbox, paste the **App Federation Metadata Url** value which you have copied from the Azure portal.
+
+ 1. In the **Claim** dropdown, select **Email**.
+
+ 1. In the **NameID Format** dropdown, select `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified`.
+
+ 1. Click **Save**.
+
+### Create Per Angusta test user
+
+In this section, you create a user called Britta Simon in Per Angusta. Work with the [Per Angusta support team](mailto:support@per-angusta.com) to add the users in the Per Angusta platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Per Angusta Sign-on URL where you can initiate the login flow.
+
+* Go to Per Angusta Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Per Angusta tile in the My Apps, this will redirect to Per Angusta Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Per Angusta you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Stor
We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
-Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](/azure/advisor/advisor-cost-recommendations#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](./advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
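As a rough illustration of acting on this recommendation with the Azure CLI, the resource group, VM name, and target size below are placeholders, not values from the recommendation itself:

```azurecli
# Resize an underutilized VM to a smaller size (placeholder names and size).
az vm resize --resource-group myResourceGroup --name myLowUsageVm --size Standard_B2s

# Or deallocate it entirely so that compute charges stop accruing.
az vm deallocate --resource-group myResourceGroup --name myLowUsageVm
```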
### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.
Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance
## Next steps
-Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
+Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Batch account - OldPool (Recreate your pool to get the latest
Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
-Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](/azure/batch/best-practices#pool-lifetime-and-billing).
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](../batch/best-practices.md#pool-lifetime-and-billing).
### Upgrade to the latest API version to ensure your Batch account remains operational.
Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your po
Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
-Learn more about [Batch account - EolImage (Recreate your pool with a new image)](/azure/batch/batch-pool-vm-sizes#supported-vm-images).
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](../batch/batch-pool-vm-sizes.md#supported-vm-images).
## Cognitive Service
Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's
Monitoring addon workspace is deleted. Correct issues to setup monitoring addon.
-Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](/azure/azure-monitor/containers/container-insights-optout#azure-cli).
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](../azure-monitor/containers/container-insights-optout.md#azure-cli).
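One possible way to re-establish the monitoring addon with the Azure CLI is sketched below; the cluster, resource group, and workspace resource ID are placeholders:

```azurecli
# Re-enable the monitoring addon against an existing Log Analytics workspace (placeholder values).
az aks enable-addons --addons monitoring \
    --name myAKSCluster --resource-group myResourceGroup \
    --workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```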
### Deprecated Kubernetes API in 1.16 is found
Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve
We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
-Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](../azure-monitor/alerts/alerts-troubleshoot-log.md#query-used-in-a-log-alert-isnt-valid).
### Log alert rule was disabled

The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
-Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](../azure-monitor/alerts/alerts-troubleshoot-log.md#query-used-in-a-log-alert-isnt-valid).
## Key Vault
Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should
A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
-Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](/azure/storage/blobs/storage-performance-checklist#what-to-do-when-approaching-a-scalability-target).
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](../storage/blobs/storage-performance-checklist.md).
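To gauge how close a subscription is to the per-region limit, a quick count along these lines may help; the region name is an assumption:

```azurecli
# Count storage accounts in one region for the current subscription (placeholder region).
az storage account list --query "length([?location=='southcentralus'])" --output tsv
```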
### Update to newer releases of the Storage Java v12 SDK for better reliability.
Learn more about [App service - AzureAppService-StagingEnv (Set up staging envir
## Next steps
-Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
+Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization ha
Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](/azure/azure-cache-for-redis/cache-troubleshoot-server#server-side-bandwidth-limitation).
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](../azure-cache-for-redis/cache-troubleshoot-server.md#server-side-bandwidth-limitation).
### Improve your Cache and application performance when running with many connected clients
Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your
Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](/azure/azure-cache-for-redis/cache-troubleshoot-client#high-client-cpu-usage).
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](../azure-cache-for-redis/cache-troubleshoot-client.md#high-client-cpu-usage).
### Improve your Cache and application performance when running with high memory pressure

Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](/azure/azure-cache-for-redis/cache-troubleshoot-client#memory-pressure-on-redis-client).
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](../azure-cache-for-redis/cache-troubleshoot-client.md#memory-pressure-on-redis-client).
## Cognitive Service
Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables
Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](/azure/traffic-manager/traffic-manager-monitoring#endpoint-failover-and-recovery).
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](../traffic-manager/traffic-manager-monitoring.md#endpoint-failover-and-recovery).
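Applied with the Azure CLI, the change might look like the following sketch; the profile and resource group names are placeholders:

```azurecli
# Lower the DNS TTL on a Traffic Manager profile to 20 seconds (placeholder names).
az network traffic-manager profile update \
    --name myTrafficManagerProfile --resource-group myResourceGroup --ttl 20
```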
### Configure DNS Time to Live to 60 seconds
Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statis
We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
-Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute#how-to-tell-if-your-distribution-column-is-a-good-choice).
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-column-is-a-good-choice).
### Update statistics on table columns
Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to o
We have detected that you had high tempdb utilization which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-tempdb).
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md#monitor-tempdb).
### Convert tables to replicated tables with SQL Data Warehouse
Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to re
We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](../synapse-analytics/sql/data-loading-best-practices.md#prepare-data-in-azure-storage).
### Increase batch size when loading to maximize load throughput, data compression, and query performance

We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K to 1M rows.
-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](/azure/synapse-analytics/sql/data-loading-best-practices#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](../synapse-analytics/sql/data-loading-best-practices.md#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
### Co-locate the storage account within the same region to minimize latency when loading

We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](../synapse-analytics/sql/data-loading-best-practices.md#prepare-data-in-azure-storage).
## Storage
Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service
Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](/azure/app-service/app-service-best-practices#socketresources).
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](../app-service/app-service-best-practices.md#socketresources).
## Next steps
-Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
+Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiatio
Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the memory reserved for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting available in the advanced settings blade.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](/azure/azure-cache-for-redis/cache-configure#memory-policies).
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](../azure-cache-for-redis/cache-configure.md#memory-policies).
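A sketch of raising the reservation with the Azure CLI follows; the cache name and the value are placeholders, and the right value depends on your cache size and workload:

```azurecli
# Reserve more memory for fragmentation on an existing cache (placeholder name and value, in MB).
az redis update --name myCache --resource-group myResourceGroup \
    --set "redisConfiguration.maxfragmentationmemory-reserved"="125"
```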
## Compute
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
-Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](/azure/virtual-machines/disks-types#premium-ssd).
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](../virtual-machines/disks-types.md#premium-ssds).
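Because a managed disk's SKU can only be changed while the VM is stopped, a minimal upgrade sketch looks like this; the VM and disk names are placeholders:

```azurecli
# Stop the VM, switch its disk to Premium SSD, then start it again (placeholder names).
az vm deallocate --resource-group myResourceGroup --name myVM
az disk update --resource-group myResourceGroup --name myOsDisk --sku Premium_LRS
az vm start --resource-group myResourceGroup --name myVM
```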
### Enable virtual machine replication to protect your applications from regional outage
Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost
Using IP address-based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. It is advised to use Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags to allow connectivity to Azure Site Recovery services for the machines.
-Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](/azure/site-recovery/azure-to-azure-about-networking#outbound-connectivity-using-service-tags).
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](../site-recovery/azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags).
### Use Managed Disks to improve data reliability
Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a se
We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
-Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors).
+Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](../cosmos-db/cassandra/prevent-rate-limiting-errors.md).
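A hedged sketch of enabling SSR with the Azure CLI follows; the account name is a placeholder, and because `--capabilities` replaces the whole list, include any capabilities already set on the account:

```azurecli
# Enable Server Side Retry by adding the DisableRateLimitingResponses capability
# (placeholder account name; keep existing capabilities such as EnableMongo in the list).
az cosmosdb update --name myMongoAccount --resource-group myResourceGroup \
    --capabilities EnableMongo DisableRateLimitingResponses
```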
### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more
The VPN gateway Basic SKU is designed for development or testing scenarios. Please move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability.
-Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsku).
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku).
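A Basic gateway can't be resized in place to the VpnGw SKUs, so the move involves deleting and recreating the gateway. A minimal sketch, with all names as placeholders and the SKU chosen only as an example:

```azurecli
# Recreate the gateway on a production SKU (placeholder names; deployment can take 30-45 minutes).
az network vnet-gateway create --name myVpnGateway --resource-group myResourceGroup \
    --vnet myVnet --public-ip-address myGatewayIp \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2
```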
### Add at least one more endpoint to the profile, preferably in another Azure region
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect URLs being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Please make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
-Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](/azure/application-gateway/troubleshoot-app-service-redirection-app-service-url#alternate-solution-use-a-custom-domain-name).
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](../application-gateway/troubleshoot-app-service-redirection-app-service-url.md).
### Use ExpressRoute Global Reach to improve your design for disaster recovery
Learn more about [Search service - StandardServiceStorageQuota90percent (You are
After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
-Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
+Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](../storage/blobs/soft-delete-blob-overview.md).
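Enabled through the Azure CLI, this could look like the following; the account name and retention period are placeholders:

```azurecli
# Turn on blob soft delete with a 7-day retention window (placeholder values).
az storage blob service-properties delete-policy update \
    --account-name mystorageaccount --enable true --days-retained 7
```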
### Use Managed Disks for storage accounts reaching capacity limit

We have identified that you are using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that do not have account capacity limit. This migration can be done through the portal in less than 5 minutes.
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](/azure/storage/common/scalability-targets-standard-account#premium-performance-page-blob-storage).
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](../storage/common/scalability-targets-standard-account.md).
## Web
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Di
Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this, you could scale out your app.
-Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](/azure/app-service/app-service-best-practices#CPUresources).
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](../app-service/app-service-best-practices.md#CPUresources).
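Scaling out with the Azure CLI might look like this sketch; the plan name and worker count are assumptions:

```azurecli
# Scale the App Service plan out to three instances (placeholder names).
az appservice plan update --name myAppServicePlan \
    --resource-group myResourceGroup --number-of-workers 3
```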
### Fix the backup database settings of your App Service resource

Your app's backups are consistently failing due to invalid DB configuration. You can find more details in the backup history.
-Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup.).
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](../app-service/app-service-best-practices.md#appbackup).
### Consider scaling up your App Service Plan SKU to avoid memory exhaustion

The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
-Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](/azure/app-service/app-service-best-practices#memoryresources).
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](../app-service/app-service-best-practices.md#memoryresources).
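Scaling up is the same command with a SKU change; the target tier below is only an example:

```azurecli
# Move the plan to a larger SKU with more memory (placeholder names; example tier).
az appservice plan update --name myAppServicePlan \
    --resource-group myResourceGroup --sku P2V2
```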
### Scale up your App Service resource to remove the quota limit
Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slo
Your app's backups are consistently failing due to invalid storage settings. You can find more details in the backup history.
-Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup.).
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](../app-service/app-service-best-practices.md#appbackup).
### Move your App Service resource to Standard or higher and use deployment slots
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
You need the Azure CLI version 2.0.65 or later installed and configured. Run `a
In many environments, you have defined virtual networks and subnets with allocated IP address ranges. These virtual network resources are used to support multiple services and applications. To provide network connectivity, AKS clusters can use *kubenet* (basic networking) or Azure CNI (*advanced networking*).
-With *kubenet*, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other. Instead, User Defined Routing (UDR) and IP forwarding is used for connectivity between pods across nodes. By default, UDRs and IP forwarding configuration is created and maintained by the AKS service, but you have to the option to [bring your own route table for custom route management][byo-subnet-route-table]. You could also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:
+With *kubenet*, only the nodes receive an IP address in the virtual network subnet. Pods can't communicate directly with each other. Instead, User Defined Routing (UDR) and IP forwarding is used for connectivity between pods across nodes. By default, UDRs and IP forwarding configuration is created and maintained by the AKS service, but you have the option to [bring your own route table for custom route management][byo-subnet-route-table]. You could also deploy pods behind a service that receives an assigned IP address and load balances traffic for the application. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:
![Kubenet network model with an AKS cluster](media/use-kubenet/kubenet-overview.png)
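As a hedged sketch of this setup, creating a kubenet cluster into an existing subnet might look like the following; the subnet resource ID and CIDR ranges are placeholders:

```azurecli
# Create an AKS cluster that uses kubenet in an existing subnet (placeholder values).
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --network-plugin kubenet \
    --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/myAKSSubnet" \
    --pod-cidr 10.244.0.0/16 \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10
```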
With an AKS cluster deployed into your existing virtual network subnet, you can
[express-route]: ../expressroute/expressroute-introduction.md
[network-comparisons]: concepts-network.md#compare-network-models
[custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
+[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-mi
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
https://github.com/MicrosoftDocs/pipelines-javascript-docker
## Create the Azure resources
-Sign in to the [Azure portal](https://portal.azure.com/), and then select the [Cloud Shell](/azure/cloud-shell/overview) button in the upper-right corner.
+Sign in to the [Azure portal](https://portal.azure.com/), and then select the [Cloud Shell](../cloud-shell/overview.md) button in the upper-right corner.
### Create a container registry
https://github.com/MicrosoftDocs/pipelines-javascript-docker
## Create the Azure resources
-Sign in to the [Azure portal](https://portal.azure.com/), and then select the [Cloud Shell](/azure/cloud-shell/overview) button in the upper-right corner.
+Sign in to the [Azure portal](https://portal.azure.com/), and then select the [Cloud Shell](../cloud-shell/overview.md) button in the upper-right corner.
### Create a container registry
You're now ready to create a release, which means to start the process of runnin
1. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
az group create --name myresourcegroup --location southcentralus
```azurecli-interactive
az aks create \
- --resource-group myresourcegroup \
+ --resource-group myResourceGroup \
--name natcluster \
--node-count 3 \
- --outbound-type managedNATGateway \
+ --outbound-type managedNATGateway \
--nat-gateway-managed-outbound-ip-count 2 \
--nat-gateway-idle-timeout 30
```
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
After successful deployment, you should see your API Management service's **priv
### Enable connectivity using a Resource Manager template (`stv2` platform)
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet-publicip) (API version 2021-01-01-preview )
+* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-internal-vnet-publicip) (API version 2021-08-01 )
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-internal-vnet-publicip%2Fazuredeploy.json)
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
It can take 15 to 45 minutes to update the API Management instance. The Develope
### Enable connectivity using a Resource Manager template (`stv2` compute platform)
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip) (API version 2021-01-01-preview)
+* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip) (API version 2021-08-01)
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-external-vnet-publicip%2Fazuredeploy.json)
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Follow these guidelines when using API Management to reach a backend API that im
## Next steps
-* Learn more about [configuring policies](/azure/api-management/api-management-howto-policies) in API Management.
-* Learn about API Management [capacity](api-management-capacity.md).
+* Learn more about [configuring policies](./api-management-howto-policies.md) in API Management.
+* Learn about API Management [capacity](api-management-capacity.md).
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md).
* Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs.
* 200 active connections limit per unit.
-* Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinayrMessage, UTF8Fragment, and UTF8Message.
+* Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
### Unsupported policies
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
There's no cost to migrate your App Service Environment. You'll stop being charg
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Using an App Service Environment v3](using.md)
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
cd agoncal-application-petstore-ee7
## Configure the Maven plugin
-The deployment process to Azure App Service will use your Azure credentials from the Azure CLI automatically. If the Azure CLI isn't installed locally, then the Maven plugin will authenticate with Oauth or device sign in. For more information, see [authentication with Maven plugins](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication).
+> [!TIP]
+> The Maven plugin supports **Java 17** and **Tomcat 10.0**. For more information about latest support, see [Java 17 and Tomcat 10.0 are available on Azure App Service](https://devblogs.microsoft.com/java/java-17-and-tomcat-10-0-available-on-azure-app-service/).
++
+The deployment process to Azure App Service will use your Azure credentials from the Azure CLI automatically. If the Azure CLI is not installed locally, then the Maven plugin will authenticate with OAuth or device login. For more information, see [authentication with Maven plugins](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication).
Run the Maven command below to configure the deployment. This command will help you to set up the App Service operating system, Java version, and Tomcat version.
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
+
+ Title: HTTP response codes - Azure Application Gateway
+description: 'Learn how to troubleshoot Application Gateway HTTP response codes'
+ Last updated : 04/19/2022
+# HTTP response codes in Application Gateway
+
+This article lists some HTTP response codes that can be returned by Azure Application Gateway. Common causes and troubleshooting steps are provided to help you determine the root cause. HTTP response codes can be returned to a client request whether or not a connection was initiated to a backend target.
+
+## 3XX response codes (redirection)
+
+300-399 responses are presented when a client request matches an application gateway rule that has redirects configured. Redirects can be configured on a rule as-is or via a path map rule. For more information about redirects, see [Application Gateway redirect overview](redirect-overview.md).
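For instance, attaching an HTTP-to-HTTPS redirect to a gateway from the CLI could be sketched as follows; the gateway, listener, and rule names are placeholders:

```azurecli
# Create a permanent redirect to an HTTPS listener, then point a request routing rule at it
# (placeholder names throughout).
az network application-gateway redirect-config create \
    --gateway-name myAppGateway --resource-group myResourceGroup \
    --name httpToHttps --type Permanent --target-listener httpsListener \
    --include-path true --include-query-string true

az network application-gateway rule update \
    --gateway-name myAppGateway --resource-group myResourceGroup \
    --name httpRule --redirect-config httpToHttps
```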
+
+#### 301 Permanent Redirect
+
+HTTP 301 responses are presented when a redirection rule is specified with the **Permanent** value.
+
+#### 302 Found
+
+HTTP 302 responses are presented when a redirection rule is specified with the **Found** value.
+
+#### 303 See Other
+
+HTTP 303 responses are presented when a redirection rule is specified with the **See Other** value.
+
+#### 307 Temporary Redirect
+
+HTTP 307 responses are presented when a redirection rule is specified with the **Temporary** value.
++
+## 4XX response codes (client error)
+
+400-499 response codes indicate an issue that is initiated from the client. These issues can range from requests sent to an unmatched hostname to request timeouts, unauthenticated requests, malicious requests, and more.
+
+#### 400 – Bad Request
+
+HTTP 400 response codes are commonly observed when:
+- Non-HTTP / HTTPS traffic is initiated to an application gateway with an HTTP or HTTPS listener.
+- HTTP traffic is initiated to a listener with HTTPS, with no redirection configured.
+- Mutual authentication is configured and unable to properly negotiate.
+
+For cases when mutual authentication is configured, several scenarios can lead to an HTTP 400 response being returned to the client, such as:
+- Client certificate isn't presented, but mutual authentication is enabled.
+- DN validation is enabled and the DN of the client certificate doesn't match the DN of the specified certificate chain.
+- Client certificate chain doesn't match certificate chain configured in the defined SSL Policy.
+- Client certificate is expired.
+- OCSP Client Revocation check is enabled and the certificate is revoked.
+- OCSP Client Revocation check is enabled, but unable to be contacted.
+- OCSP Client Revocation check is enabled, but OCSP responder isn't provided in the certificate.
+
+For more information about troubleshooting mutual authentication, see [Error code troubleshooting](mutual-authentication-troubleshooting.md#solution-2).
+
+#### 403 – Forbidden
+
+HTTP 403 Forbidden is presented when customers are utilizing WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client is presented with a 403 Forbidden response.
+
+#### 404 – Page not found
+
+An HTTP 404 response can be returned if a request is sent to an application gateway that is:
+- Using a [v2 SKU](overview-v2.md).
+- Without a hostname match defined in any [multi-site listeners](multiple-site-overview.md).
+- Not configured with a [basic listener](application-gateway-components.md#types-of-listeners).
+
+#### 408 – Request Timeout
+
+An HTTP 408 response can be observed when client requests to the frontend listener of the application gateway aren't completed within 60 seconds. This error can be observed due to traffic congestion between on-premises networks and Azure, when traffic is inspected by virtual appliances, or when the client itself becomes overwhelmed.
+
+#### 499 – Client closed the connection
+
+An HTTP 499 response is presented if a client request sent to an application gateway using the v2 SKU is closed before the server finished responding. This error can be observed when a large response is returned to the client, but the client may have closed or refreshed their browser/application before the server had a chance to finish responding.
++
+## 5XX response codes (server error)
+
+500-599 response codes indicate a problem has occurred with application gateway or the backend server while performing the request.
+
+#### 500 – Internal Server Error
+
+Azure Application Gateway shouldn't exhibit 500 response codes. Please open a support request if you see this code, because this issue is internal to the service. For information on how to open a support case, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+#### 502 – Bad Gateway
+
+HTTP 502 errors can have several root causes, for example:
+- NSG, UDR, or custom DNS is blocking access to backend pool members.
+- Back-end VMs or instances of [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) aren't responding to the default health probe.
+- Invalid or improper configuration of custom health probes.
+- Azure Application Gateway's [back-end pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool).
+- None of the VMs or instances in [virtual machine scale set are healthy](application-gateway-troubleshooting-502.md#unhealthy-instances-in-backendaddresspool).
+- [Request time-out or connectivity issues](application-gateway-troubleshooting-502.md#request-time-out) with user requests.
+
+For information about scenarios where 502 errors occur, and how to troubleshoot them, see [Troubleshoot Bad Gateway errors](application-gateway-troubleshooting-502.md).
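+
+A practical first step when investigating a 502 is to check what the gateway itself reports about its backend targets. A sketch with the Azure CLI (names are placeholders):
+
+```bash
+# Show per-server backend health as evaluated by the gateway's health probes.
+az network application-gateway show-backend-health \
+  --name MyAppGateway \
+  --resource-group MyResourceGroup
+```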
+
+#### 504 – Request timeout
+
+HTTP 504 errors are presented if a request is sent to an application gateway using the v2 SKU and the backend response exceeds the time-out value associated with the listener's rule. This value is defined in the HTTP setting.
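+
+If the backend legitimately needs more time, the time-out can be raised on the relevant HTTP setting. A sketch with the Azure CLI; the names and the 120-second value are placeholders:
+
+```bash
+# Raise the backend request time-out on an HTTP setting to 120 seconds.
+az network application-gateway http-settings update \
+  --gateway-name MyAppGateway \
+  --resource-group MyResourceGroup \
+  --name MyHttpSetting \
+  --timeout 120
+```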
+
+## Next steps
+
+If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
Title: Redirect overview for Azure Application Gateway
description: Learn about the redirect capability in Azure Application Gateway to redirect traffic received on one listener to another listener or to an external site.
-Previously updated : 11/16/2019
+Last updated : 04/19/2022
# Application Gateway redirect overview
You can use application gateway to redirect traffic. It has a generic redirecti
A common redirection scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all communication between application and its users occurs over an encrypted path. In the past, customers have used techniques such as creating a dedicated backend pool whose sole purpose is to redirect requests it receives on HTTP to HTTPS. With redirection support in Application Gateway, you can accomplish this simply by adding a new redirect configuration to a routing rule, and specifying another listener with HTTPS protocol as the target listener.
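As a rough sketch, the redirect configuration described above might be created with the Azure CLI as follows; the gateway, listener, and configuration names are placeholders:

```bash
# Create a permanent (301) redirect that sends traffic from an HTTP listener's
# routing rule to the HTTPS listener, preserving path and query string.
az network application-gateway redirect-config create \
  --gateway-name MyAppGateway \
  --resource-group MyResourceGroup \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true
```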
-The following types of redirection are supported:
+## Redirection types
+A redirect type sets the response status code that tells clients the purpose of the redirect. The following types of redirection are supported:
-- 301 Permanent Redirect-- 302 Found-- 303 See Other-- 307 Temporary Redirect-
-Application Gateway redirection support offers the following capabilities:
--- **Global redirection**-
- Redirects from one listener to another listener on the gateway. This enables HTTP to HTTPS redirection on a site.
- When configuring redirects with a multi-site target listener, it is required that all the host names (with or without wildcard characters) defined as part of the source listener are also part of the destination listener. This ensures that no traffic is dropped due to missing host names on the destination listener while setting up HTTP to HTTPS redirection.
+- 301 (Moved permanently): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
+- 302 (Found): Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests.
+- 307 (Temporary redirect): Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.
+- 308 (Permanent redirect): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs.
+## Redirection capabilities
+- **Listener redirection**
+
+ Redirects from one listener to another listener. Listener redirection is commonly used to enable HTTP to HTTPS redirection.
+
- **Path-based redirection**
- This type of redirection enables HTTP to HTTPS redirection only on a specific site area, for example a shopping cart area denoted by /cart/*.
+ This type of redirection enables redirection only on a specific site area, for example, redirecting HTTP to HTTPS requests for a shopping cart area denoted by /cart/\*.
+
- **Redirect to external site** ![Diagram shows users and an App Gateway and connections between the two, including an unlocked H T T P red arrow, a not allowed 301 direct red arrow, and a locked H T T P S a green arrow.](./media/redirect-overview/redirect.png)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
You'll need a form document. You can use our [sample form document](https://raw.
* For best results, provide one clear photo or high-quality scan per document. * Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 50 MB (4 MB for the free tier).
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. > [!NOTE]
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
See how text is extracted from forms and documents using the Form Recognizer Stu
* For best results, provide one clear photo or high-quality scan per document. * Supported file formats: JPEG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 50 MB.
+* The file size must be less than 50 MB (4 MB for the free tier).
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels. ## Supported languages and locales
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
The Form Recognizer Studio provides and orchestrates all the API calls required
1. On the next step in the workflow, choose or create a Form Recognizer resource before you select continue. > [!IMPORTANT]
- > Custom neural models models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](/azure/applied-ai-services/form-recognizer/concept-custom-neural#l).
+ > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md).
:::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
def analyze_general_documents():
docUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- endpoint=endpoint, credential=AzureKeyCredential(key)
- )
+ document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-document", docUrl)
def analyze_layout():
formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(
- endpoint=endpoint, credential=AzureKeyCredential(key)
- )
+ document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-layout", formUrl)
def analyze_invoice():
invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf" # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(
- endpoint=endpoint, credential=AzureKeyCredential(key)
- )
+ document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-invoice", invoiceUrl)
attestation Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/audit-logs.md
-# Audit logs for Azure Attestation
+# Azure Attestation logging
-Audit logs are secure, immutable, timestamped records of discrete events that happened over time. These logs capture important events that may affect the functionality of your attestation instance.
+If you create one or more Azure Attestation resources, you'll want to monitor how and when your attestation instance is accessed, and by whom. You can do this by enabling logging for Microsoft Azure Attestation, which saves information in an Azure storage account you provide.
-Azure Attestation manages attestation instances and the policies associated with them. Actions associated with instance management and policy changes are audited and logged.
+Logging information will be available up to 10 minutes after the operation occurred (in most cases, it will be quicker than this). Since you provide the storage account, you can secure your logs via standard Azure access controls and delete logs you no longer want to keep in your storage account.
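+
+As a sketch, enabling logging with the Azure CLI might look like the following; the resource IDs are placeholders, and the log category names should be verified against the categories your attestation provider exposes:
+
+```bash
+# Send Azure Attestation logs to a storage account you provide.
+az monitor diagnostic-settings create \
+  --name attestation-logs \
+  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Attestation/attestationProviders/<instance>" \
+  --storage-account "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
+  --logs '[{"category": "Operational", "enabled": true}, {"category": "NotProcessed", "enabled": true}]'
+```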
-This article contains information on the events that are logged, the information collected, and the location of these logs.
+## Interpret your Azure Attestation logs
-## About Audit logs
+When logging is enabled, up to three containers may be automatically created for you in your specified storage account: **insights-logs-auditevent, insights-logs-operational, insights-logs-notprocessed**. We recommend using only **insights-logs-operational** and **insights-logs-notprocessed**. **insights-logs-auditevent** was created to provide early access to logs for customers using VBS. Future enhancements to logging will occur in the **insights-logs-operational** and **insights-logs-notprocessed** containers.
-Azure Attestation uses code to produce audit logs for events that affect the way attestation is performed. This typically boils down to how or when policy changes are made to your attestation instance as well as some admin actions.
+**Insights-logs-operational** contains generic information across all TEE types.
-### Auditable Events
-Here are some of the audit logs we collect:
+**Insights-logs-notprocessed** contains requests which the service was unable to process, typically due to malformed HTTP headers, incomplete message bodies, or similar issues.
-| Event/API | Event Description |
-|--|--|
-| Create Instance | Creates a new instance of an attestation service. |
-| Destroy Instance | Destroys an instance of an attestation service. |
-| Add Policy Certificate | Addition of a certificate to the current set of policy management certificates. |
-| Remove Policy Certificate | Remove a certificate from the current set of policy management certificates. |
-| Set Current Policy | Sets the attestation policy for a given TEE type. |
-| Reset Attestation Policy | Resets the attestation policy for a given TEE type. |
-| Prepare to Update Policy | Prepare to update attestation policy for a given TEE type. |
-| Rehydrate Tenants After Disaster | Re-seals all of the attestation tenants on this instance of the attestation service. This can only be performed by Attestation Service admins. |
+Individual blobs are stored as text, formatted as a JSON blob. Let's look at an example log entry:
-### Collected information
-For each of these events, Azure Attestation collects the following information:
-- Operation Name-- Operation Success-- Operation Caller, which could be any of the following:
- - Azure AD UPN
- - Object ID
- - Certificate
- - Azure AD Tenant ID
-- Operation Target, which could be any of the following:
- - Environment
- - Service Region
- - Service Role
- - Service Role Instance
- - Resource ID
- - Resource Region
+```json
+{
+ "Time": "2021-11-03T19:33:54.3318081Z",
+ "resourceId": "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Attestation/attestationProviders/<instance name>",
+ "region": "EastUS",
+ "operationName": "AttestSgxEnclave",
+ "category": "Operational",
+ "resultType": "Succeeded",
+ "resultSignature": "400",
+ "durationMs": 636,
+ "callerIpAddress": "::ffff:24.17.183.201",
+ "traceContext": "{\"traceId\":\"e4c24ac88f33c53f875e5141a0f4ce13\",\"parentId\":\"0000000000000000\",}",
+ "identity": "{\"callerAadUPN\":\"deschuma@microsoft.com\",\"callerAadObjectId\":\"6ab02abe-6ca2-44ac-834d-42947dbde2b2\",\"callerId\":\"deschuma@microsoft.com\"}",
+ "uri": "https://deschumatestrp.eus.test.attest.azure.net:443/attest/SgxEnclave?api-version=2018-09-01-preview",
+ "level": "Informational",
+ "location": "EastUS",
+ "properties":
+ {
+ "failureResourceId": "",
+ "failureCategory": "None",
+ "failureDetails": "",
+ "infoDataReceived":
+ {
+ "Headers":
+ {
+ "User-Agent": "PostmanRuntime/7.28.4"
+ },
+ "HeaderCount": 10,
+ "ContentType": "application/json",
+ "ContentLength": 6912,
+ "CookieCount": 0,
+ "TraceParent": ""
+ }
+ }
+ }
+```
-### Sample Audit log
+Most of these fields are documented in the [Top-level common schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
-Audit logs are provided in JSON format. Here is an example of what an audit log may look like.
+| Field Name | Description |
+||--|
+| traceContext | JSON blob representing the W3C trace-context |
+| uri | Request URI |
-```json
-{
- "operationName": "SetCurrentPolicy",
- "resultType": "Success",
- "resultDescription": null,
- "auditEventCategory": [
- "ApplicationManagement"
- ],
- "nCloud": null,
- "requestId": null,
- "callerIpAddress": null,
- "callerDisplayName": null,
- "callerIdentities": [
- {
- "callerIdentityType": "ObjectID",
- "callerIdentity": "<some object ID>"
- },
- {
- "callerIdentityType": "TenantId",
- "callerIdentity": "<some tenant ID>"
- }
- ],
- "targetResources": [
- {
- "targetResourceType": "Environment",
- "targetResourceName": "PublicCloud"
- },
- {
- "targetResourceType": "ServiceRegion",
- "targetResourceName": "EastUS2"
- },
- {
- "targetResourceType": "ServiceRole",
- "targetResourceName": "AttestationRpType"
- },
- {
- "targetResourceType": "ServiceRoleInstance",
- "targetResourceName": "<some service role instance>"
- },
- {
- "targetResourceType": "ResourceId",
- "targetResourceName": "/subscriptions/<some subscription ID>/resourceGroups/<some resource group name>/providers/Microsoft.Attestation/attestationProviders/<some instance name>"
- },
- {
- "targetResourceType": "ResourceRegion",
- "targetResourceName": "EastUS2"
- }
- ],
- "ifxAuditFormat": "Json",
- "env_ver": "2.1",
- "env_name": "#Ifx.AuditSchema",
- "env_time": "2020-11-23T18:23:29.9427158Z",
- "env_epoch": "MKZ6G",
- "env_seqNum": 1277,
- "env_popSample": 0.0,
- "env_iKey": null,
- "env_flags": 257,
- "env_cv": "##00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000",
- "env_os": null,
- "env_osVer": null,
- "env_appId": null,
- "env_appVer": null,
- "env_cloud_ver": "1.0",
- "env_cloud_name": null,
- "env_cloud_role": null,
- "env_cloud_roleVer": null,
- "env_cloud_roleInstance": null,
- "env_cloud_environment": null,
- "env_cloud_location": null,
- "env_cloud_deploymentUnit": null
-}
-```
+The properties contain additional Azure Attestation-specific context:
+
+| Field Name | Description |
+||--|
+| failureResourceId | Resource ID of component which resulted in request failure |
+| failureCategory | Broad category of a request failure, such as AzureNetworkingPhysical or AzureAuthorization |
+| failureDetails | Detailed information about a request failure, if available |
+| infoDataReceived | Information about the request received from the client. Includes some HTTP headers, the number of headers received, the content type and content length |
+
+## Next steps
+- [How to enable Microsoft Azure Attestation logging](azure-diagnostic-monitoring.md)
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
OE standardizes specific requirements for verification of an enclave evidence. T
Client applications can be designed to take advantage of TPM attestation by delegating security-sensitive tasks to only take place after a platform has been validated to be secure. Such applications can then make use of Azure Attestation to routinely establish trust in the platform and its ability to access sensitive data.
-### Azure Confidential VM attestation
+### AMD SEV-SNP attestation
Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-solutions-amd.md) and aims to improve VM security posture by removing trust in host, hypervisor and Cloud Service Provider (CSP). To achieve this, CVM offers VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
automanage Windows Server Azure Edition Vnext https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/windows-server-azure-edition-vnext.md
> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-[Windows Server: Azure Edition (WSAE)](https://aka.ms/wsae) is a new edition of Windows Server focused on innovation and efficiency. Featuring an annual release cadence and optimized to run on Azure properties, WSAE brings new functionality to Windows Server users faster than the traditional Long-Term Servicing Channel (LTSC) editions of Windows Server (2016,2019,2022, etc.) the first version of this new variant is Windows Server 2022 Datacenter: Azure Edition, announced at Microsoft Ignite in November 2021.
+[Windows Server: Azure Edition (WSAE)](./automanage-windows-server-services-overview.md) is a new edition of Windows Server focused on innovation and efficiency. Featuring an annual release cadence and optimized to run on Azure properties, WSAE brings new functionality to Windows Server users faster than the traditional Long-Term Servicing Channel (LTSC) editions of Windows Server (2016, 2019, 2022, etc.). The first version of this new variant is Windows Server 2022 Datacenter: Azure Edition, announced at Microsoft Ignite in November 2021.
The annual WSAE releases are delivered using Windows Update, rather than a full OS upgrade. As part of this annual release cadence, the WSAE Insider preview program will spin up each spring with the opportunity to access early builds of the next release - leading to general availability in the fall. Install the preview to get early access to all the new features and functionality prior to general availability. If you are a registered Microsoft Server Insider, you have access to create and use virtual machine images from this preview. For more information and to manage your Insider membership, visit the [Windows Insider home page](https://insider.windows.com/) or [Windows Insiders for Business home page.](https://insider.windows.com/for-business/)
automation Automation Manage Send Joblogs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md
Azure Automation can send runbook job status and job streams to your Log Analyti
- Trigger an email or alert based on your runbook job status (for example, failed or suspended). - Write advanced queries across your job streams. - Correlate jobs across Automation accounts.
- - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](/azure/azure-portal/azure-portal-dashboards).
+ - Use customized views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics through an [Azure dashboard](../azure-portal/azure-portal-dashboards.md).
- Get the audit logs related to Automation accounts, runbooks, and other asset create, modify and delete operations.
-Using Azure Monitor logs, you can consolidate logs from different resources in the same workspace where it can be analyzed with [queries](/azure/azure-monitor/logs/log-query-overview) to quickly retrieve, consolidate, and analyze the collected data. You can create and test queries using [Log Analytics](/azure/azure-monitor/logs/log-query-overview) in the Azure portal and then either directly analyze the data using these tools or save queries for use with [visualization](/azure/azure-monitor/best-practices-analysis) or [alert rules](/azure/azure-monitor/alerts/alerts-overview).
+Using Azure Monitor logs, you can consolidate logs from different resources in the same workspace where it can be analyzed with [queries](../azure-monitor/logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze the collected data. You can create and test queries using [Log Analytics](../azure-monitor/logs/log-query-overview.md) in the Azure portal and then either directly analyze the data using these tools or save queries for use with [visualization](../azure-monitor/best-practices-analysis.md) or [alert rules](../azure-monitor/alerts/alerts-overview.md).
-Azure Monitor uses a version of the [Kusto query language (KQL)](/azure/kusto/query/) used by Azure Data Explorer that is suitable for simple log queries. It also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language using [multiple lessons](/azure/azure-monitor/logs/get-started-queries).
+Azure Monitor uses a version of the [Kusto query language (KQL)](/azure/kusto/query/) used by Azure Data Explorer that is suitable for simple log queries. It also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language using [multiple lessons](../azure-monitor/logs/get-started-queries.md).
## Azure Automation diagnostic settings
You can configure diagnostic settings in the Azure portal from the menu for the
:::image type="content" source="media/automation-manage-send-joblogs-log-analytics/destination-details-options-inline.png" alt-text="Screenshot showing selections in destination details section." lightbox="media/automation-manage-send-joblogs-log-analytics/destination-details-options-expanded.png":::
- - **Log Analytics** : Enter the Subscription ID and workspace name. If you don't have a workspace, you must [create one before proceeding](/azure/azure-monitor/logs/quick-create-workspace).
+ - **Log Analytics** : Enter the Subscription ID and workspace name. If you don't have a workspace, you must [create one before proceeding](../azure-monitor/logs/quick-create-workspace.md).
- **Event Hubs**: Specify the following criteria: - Subscription: The same subscription as that of the Event Hub.
- - Event Hub namespace: [Create Event Hub](/azure/event-hubs/event-hubs-create) if you don't have one yet.
- - Event Hub name (optional): If you don't specify a name, an event hub is created for each log category. If you are sending multiple categories, specify a name to limit the number of Event Hubs created. See [Azure Event Hubs quotas and limits](/azure/event-hubs/event-hubs-quotas) for details.
- - Event Hub policy (optional): A policy defines the permissions that the streaming mechanism has. See [Event Hubs feature](/azure/event-hubs/event-hubs-features#publisher-policy).
+ - Event Hub namespace: [Create Event Hub](../event-hubs/event-hubs-create.md) if you don't have one yet.
+ - Event Hub name (optional): If you don't specify a name, an event hub is created for each log category. If you are sending multiple categories, specify a name to limit the number of Event Hubs created. See [Azure Event Hubs quotas and limits](../event-hubs/event-hubs-quotas.md) for details.
+ - Event Hub policy (optional): A policy defines the permissions that the streaming mechanism has. See [Event Hubs feature](../event-hubs/event-hubs-features.md#publisher-policy).
- **Storage**: Choose the subscription, storage account, and retention policy. :::image type="content" source="media/automation-manage-send-joblogs-log-analytics/storage-account-details-inline.png" alt-text="Screenshot showing the storage account." lightbox="media/automation-manage-send-joblogs-log-analytics/storage-account-details-expanded.png":::
- - **Partner integration**: You must first install a partner integration into your subscription. Configuration options will vary by partner. For more information, see [Azure Monitor integration](/azure/partner-solutions/overview).
+ - **Partner integration**: You must first install a partner integration into your subscription. Configuration options will vary by partner. For more information, see [Azure Monitor integration](../partner-solutions/overview.md).
1. Click **Save**.
-After a few moments, the new setting appears in your list of settings for this resource, and logs are streamed to the specified destinations as new event data is generated. There can be 15 minutes time difference between the event emitted and its appearance in [Log Analytics workspace](/azure/azure-monitor/logs/data-ingestion-time).
+After a few moments, the new setting appears in your list of settings for this resource, and logs are streamed to the specified destinations as new event data is generated. There can be 15 minutes time difference between the event emitted and its appearance in [Log Analytics workspace](../azure-monitor/logs/data-ingestion-time.md).
## Query the logs
To create an alert rule, create a log search for the runbook job records that sh
```kusto AzureDiagnostics | where ResourceProvider == "MICROSOFT.AUTOMATION" and Category == "JobLogs" and (ResultType == "Failed" or ResultType == "Suspended") | summarize AggregatedValue = count() by RunbookName_s ```
- 1. To open the **Create alert rule** screen, click **+New alert rule** on the top of the page. For more information on the options to configure the alerts, see [Log alerts in Azure](/azure/azure-monitor/alerts/alerts-log#create-a-log-alert-rule-in-the-azure-portal)
+ 1. To open the **Create alert rule** screen, click **+New alert rule** on the top of the page. For more information on the options to configure the alerts, see [Log alerts in Azure](../azure-monitor/alerts/alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal)
## Azure Automation diagnostic audit logs
You can now send audit logs also to the Azure Monitor workspace. This allows ent
## Difference between activity logs and audit logs
-Activity log is aΓÇ»[platform log](/azure/azure-monitor/essentials/platform-logs-overview)in Azure that provides insight into subscription-level events. The activity log for Automation account includes information about when an automation resource is modified or created or deleted. However, it does not capture the name or ID of the resource.
+Activity log is a [platform log](../azure-monitor/essentials/platform-logs-overview.md) in Azure that provides insight into subscription-level events. The activity log for an Automation account includes information about when an automation resource is modified, created, or deleted. However, it does not capture the name or ID of the resource.
Audit logs for Automation accounts capture the name and ID of the resource, such as an automation variable, credential, or connection, along with the type of operation performed on the resource. Azure Automation scrubs some details, such as client IP data, to conform with GDPR compliance.
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Review the Azure Policy recommendations for Azure Automation and act as appropri
## Next steps
-* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/azure/automation/automation-role-based-access-control).
-* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](/azure/automation/automation-managing-data).
-* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/azure/automation/automation-secure-asset-encryption).
+* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](./automation-role-based-access-control.md).
+* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](./automation-managing-data.md).
+* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](./automation-secure-asset-encryption.md).
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Multiple Azure services can fulfill the above requirements. Each service has its
### Azure Resource Manager (ARM) template
-Azure Resource Manager provides a language to develop repeatable and consistent deployment templates for Azure resources. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. It uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. [Learn more](/azure/azure-resource-manager/templates/overview).
+Azure Resource Manager provides a language to develop repeatable and consistent deployment templates for Azure resources. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. It uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. [Learn more](../azure-resource-manager/templates/overview.md).
### Bicep
-We've introduced a new language named [Bicep](/azure/azure-resource-manager/bicep/overview) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend Bicep. For more information, see [What is Bicep?](/azure/azure-resource-manager/bicep/overview)
+We've introduced a new language named [Bicep](../azure-resource-manager/bicep/overview.md) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend Bicep. For more information, see [What is Bicep?](../azure-resource-manager/bicep/overview.md)
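+
+Deployment works the same way for both formats. For example, a Bicep file can be deployed to a resource group with the Azure CLI (file and group names are placeholders):
+
+```bash
+# Deploy a Bicep file; the CLI converts it to an ARM template during deployment.
+az deployment group create \
+  --resource-group MyResourceGroup \
+  --template-file main.bicep
+```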
The following table describes the scenarios and users for ARM template and Bicep:
The following table describes the scenarios and users for ARM template and Bicep
### Azure Blueprints (Preview)
- Azure Blueprints (Preview) define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts such as, Role assignments, Policy assignments, ARM templates and Resource groups. [Learn more](/azure/governance/blueprints/overview).
 Azure Blueprints (Preview) define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as role assignments, policy assignments, ARM templates, and resource groups. [Learn more](../governance/blueprints/overview.md).
**Scenarios** | **Users** |
The following table describes the scenarios and users for ARM template and Bicep
-### [Azure Automation](/azure/automation/overview)
+### [Azure Automation](./overview.md)
-Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides a persistent shared assets including variables, connections, objects that allow orchestration of complex jobs. [Learn more](/azure/automation/automation-runbook-gallery).
+Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](./automation-runbook-gallery.md).
There are more than 3,000 modules in the PowerShell Gallery, and the PowerShell community continues to grow. Azure Automation based on PowerShell modules can work with multiple applications and vendors, both 1st party and 3rd party. As more application vendors release PowerShell modules for integration, extensibility and automation tasks, you could use an existing PowerShell script as-is to execute it as a PowerShell runbook in an automation account without making any changes. **Scenarios** | **Users** |
- | Allows to write an [Automation PowerShell runbook](/azure/automation/learn/powershell-runbook-managed-identity) that deploys an Azure resource by using an [Azure Resource Manager template](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal).</br> </br> Schedule tasks, for example ΓÇô Stop dev/test VMs or services at night and turn on during the day. </br> </br> Response to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage to automate on-premises servers such as SQL Server, Active Directory and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises.
+ | Allows you to write an [Automation PowerShell runbook](./learn/powershell-runbook-managed-identity.md) that deploys an Azure resource by using an [Azure Resource Manager template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).</br> </br> Schedule tasks, for example: stop dev/test VMs or services at night and turn them on during the day. </br> </br> Respond to alerts such as system alerts, service alerts, and high CPU/memory alerts; create ServiceNow tickets; and so on. </br> </br> Hybrid automation, where you can manage and automate on-premises servers such as SQL Server, Active Directory, and so on. </br> </br> Azure resource life-cycle management and governance, including resource provisioning, de-provisioning, adding correct tags, locks, NSGs, and so on. | IT administrators, System administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators who manage the on-premises infrastructure using scripts or execute long-running jobs such as month-end operations on servers running on-premises.
### Azure Automation based in-guest management
-**Configuration management** : Collects inventory and tracks changes in your environment. [Learn more](/azure/automation/change-tracking/overview).
-You can configure desired the state of your machines to discover and correct configuration drift. [Learn more](/azure/automation/automation-dsc-overview).
+**Configuration management** : Collects inventory and tracks changes in your environment. [Learn more](./change-tracking/overview.md).
+You can configure the desired state of your machines to discover and correct configuration drift. [Learn more](./automation-dsc-overview.md).
-**Update management** : Assess compliance of servers and can schedule update installation on your machines. [Learn more](/azure/automation/update-management/overview).
+**Update management** : Assesses compliance of servers and can schedule update installation on your machines. [Learn more](./update-management/overview.md).
**Scenarios** | **Users** |
You can configure desired the state of your machines to discover and correct con
### Azure Automanage (Preview)
-Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady-state of VM is equal to hands-free management. [Learn more](/azure/automanage/automanage-virtual-machines).
+Replaces repetitive, day-to-day operational tasks with an exception-only management model, where a healthy, steady-state of VM is equal to hands-free management. [Learn more](../automanage/automanage-virtual-machines.md).
**Linux and Windows support** - You can intelligently onboard virtual machines to select best practices Azure services.
Replaces repetitive, day-to-day operational tasks with an exception-only managem
### Azure Policy based Guest Configuration
-Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](/azure/governance/policy/concepts/guest-configuration-policy-effects).
+Azure Policy based Guest configuration is the next iteration of Azure Automation State configuration. [Learn more](../governance/policy/concepts/guest-configuration-policy-effects.md).
You can check on what is installed in:
- - The next iteration of [Azure Automation State Configuration](/azure/automation/automation-dsc-overview).
+ - The next iteration of [Azure Automation State Configuration](./automation-dsc-overview.md).
 - For known-bad apps, protocols, certificates, administrator privileges, and health of agents. - For customer-authored content. **Scenarios** | **Users** |
- | Obtain compliance data that may include: The configuration of the operating system ΓÇô files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](/azure/governance/policy/concepts/guest-configuration-policy-effects#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change.
+ | Obtain compliance data that may include: The configuration of the operating system – files, registry, and services, Application configuration or presence, Check environment settings. </br> </br> Audit or deploy settings to all machines (Set) in scope either reactively to existing machines or proactively to new machines as they are deployed. </br> </br> Respond to policy events to provide [remediation on demand or continuous remediation.](../governance/policy/concepts/guest-configuration-policy-effects.md#remediation-on-demand-applyandmonitor) | The Central IT, Infrastructure Administrators, Auditors (Cloud custodians) are working towards the regulatory requirements at scale and ensuring that servers' end state looks as desired. </br> </br> The application teams validate compliance before releasing change.
### Azure Automation - Process Automation
-Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. [Learn more](/azure/automation/automation-runbook-types?).
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. [Learn more](./automation-runbook-types.md).
- It provides persistent shared assets, including variables, connections, objects, that allows orchestration of complex jobs.
- - You can invoke a runbook on the basis of [Azure Monitor alert](/azure/automation/automation-create-alert-triggered-runbook) or through a [webhook](/azure/automation/automation-webhooks).
+ - You can invoke a runbook on the basis of [Azure Monitor alert](./automation-create-alert-triggered-runbook.md) or through a [webhook](./automation-webhooks.md).
**Scenarios** | **Users** |
Orchestrates repetitive processes using graphical, PowerShell, and Python runboo
### Azure functions
-Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, an HTTP trigger without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
+Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems, such as an HTTP trigger, without worrying about the underlying platform. [Learn more](../azure-functions/functions-overview.md).
 - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source. - You can choose the hosting plan according to your function app scaling requirements, functionality, and resources required.
- - You can orchestrate complex workflows through [durable functions](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp).
- - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](/azure/azure-functions/functions-best-practices?tabs=csharp#write-robust-functions).
- - When you write Powershell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](/azure/azure-functions/functions-reference-powershell?tabs=portal).
+ - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp).
+ - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within the Function Apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
**Scenarios** | **Users** |
Provides a serverless event-driven compute platform for automation that allows y
### Azure logic apps
-Logic Apps is a platform for creating and running complex orchestration workflows that integrate your apps, data, services, and systems. [Learn more](/azure/logic-apps/logic-apps-overview).
+Logic Apps is a platform for creating and running complex orchestration workflows that integrate your apps, data, services, and systems. [Learn more](../logic-apps/logic-apps-overview.md).
 - Allows you to build smart integrations between 1st party and 3rd party apps, services and systems running across on-premises, hybrid and cloud native. - Allows you to use managed connectors from a growing ecosystem of 450+ Azure connectors in your workflows.
Logic Apps is a platform for creating and running complex orchestration workflow
### Azure Automation - Process Automation
-Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. It provides persistent shared assets, including variables, connections, objects, that allows orchestration of complex jobs. [Learn more](/azure/automation/overview).
+Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](./overview.md).
**Scenarios** | **Users** |
Orchestrates repetitive processes using graphical, PowerShell, and Python runboo
### Azure functions
-Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems. For example, an HTTP trigger without worrying about the underlying platform [Learn more](/azure/azure-functions/functions-overview).
+Provides a serverless event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems, such as an HTTP trigger, without worrying about the underlying platform. [Learn more](../azure-functions/functions-overview.md).
 - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source. - You can choose the hosting plan according to your function app scaling requirements, functionality, and resources required.
- - You can orchestrate complex workflows through [durable functions](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp).
- - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](/azure/azure-functions/functions-best-practices?tabs=csharp#write-robust-functions).
- - When you write Powershell scripts within the Function Apps, you must tweak the scripts to define how the function behaves such as - how it's triggered, its input and output parameters. [Learn more](/azure/azure-functions/functions-reference-powershell?tabs=portal).
+ - You can orchestrate complex workflows through [durable functions](../azure-functions/durable/durable-functions-overview.md?tabs=csharp).
+ - You should avoid large, and long-running functions that can cause unexpected timeout issues. [Learn more](../azure-functions/functions-best-practices.md?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within the Function Apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](../azure-functions/functions-reference-powershell.md?tabs=portal).
**Scenarios** | **Users** |
Provides a serverless event-driven compute platform for automation that allows y
## Next steps-- To learn on how to securely execute the automation jobs, see [best practices for security in Azure Automation](/azure/automation/automation-security-guidelines).
+- To learn how to securely execute automation jobs, see [best practices for security in Azure Automation](./automation-security-guidelines.md).
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-webhooks.md
Consider the following strategies:
## Create a webhook > [!NOTE]
-> When you use the webhook with PowerShell 7 runbook, it auto-converts the webhook input parameter to an invalid JSON. For more information, see [Known issues - 7.1 (preview)](/azure/automation/automation-runbook-types#known-issues71-preview). We recommend that you use the webhook with PowerShell 5 runbook.
+> When you use a webhook with a PowerShell 7 runbook, it auto-converts the webhook input parameter to invalid JSON. For more information, see [Known issues - 7.1 (preview)](./automation-runbook-types.md#known-issues71-preview). We recommend that you use webhooks with PowerShell 5 runbooks.
1. Create PowerShell runbook with the following code:
Automation webhooks can also be created using [Azure Resource Manager](../azure-
## Next steps
-* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
+* To trigger a runbook from an alert, see [Use an alert to trigger an Azure Automation runbook](automation-create-alert-triggered-runbook.md).
availability-zones Region Types Service Categories Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/region-types-service-categories-azure.md
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Machine Learning | > | Azure Managed Instance for Apache Cassandra | > | Azure NetApp Files |
-> | Azure Purview |
+> | Microsoft Purview |
> | Azure Red Hat OpenShift | > | Azure Remote Rendering | > | Azure SignalR Service |
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
From the Azure portal, follow these steps:
| Separator | The separator is the character parsed in your imported configuration file to separate key-values which will be added to your configuration store. Select one of the following options: `.`, `,`,`:`, `;`, `/`, `-`. | : | | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | TestApp:Settings:Backgroundcolor | | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | prod |
- | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](/azure/azure-app-configuration/use-key-vault-references-dotnet-core). | JSON (application/json) |
+ | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | JSON (application/json) |
1. Select **Apply** to proceed with the import. ### [Azure CLI](#tab/azure-cli)
-Use the Azure CLI as explained below to import App Configuration data. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](/azure/cloud-shell/overview). Specify the source of the data: `appconfig`, `appservice` or `file`. Optionally specify a source label with `--src-label` and a label to apply with `--label`.
+Use the Azure CLI as explained below to import App Configuration data. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the source of the data: `appconfig`, `appservice` or `file`. Optionally specify a source label with `--src-label` and a label to apply with `--label`.
Import all keys and feature flags from a file and apply test label.
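A sketch of such an import with the Azure CLI; the store name and file path are placeholders:

```bash
# Import key-values and feature flags from a JSON file and apply the label "test".
az appconfig kv import \
  --name MyAppConfigStore \
  --source file \
  --path ./appconfig.json \
  --format json \
  --label test
```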
From the [Azure portal](https://portal.azure.com), follow these steps:
### [Azure CLI](#tab/azure-cli)
-Use the Azure CLI as explained below to export configurations from App Configuration to another place. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](/azure/cloud-shell/overview). Specify the destination of the data: `appconfig`, `appservice` or `file`. Specify a label for the data you want to export with `--label` or export data with no label by not entering a label.
+Use the Azure CLI as explained below to export configurations from App Configuration to another place. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the destination of the data: `appconfig`, `appservice` or `file`. Specify a label for the data you want to export with `--label` or export data with no label by not entering a label.
> [!IMPORTANT] > If the keys you want to export have labels, do select the corresponding labels. If you don't select a label, only keys without labels will be exported.
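For example, a sketch of exporting the key-values that carry the `test` label to a local file; the store name and path are placeholders:

```bash
# Export key-values with the label "test" to a JSON file.
az appconfig kv export \
  --name MyAppConfigStore \
  --destination file \
  --path ./exported.json \
  --format json \
  --label test
```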
For more details and examples, go to [az appconfig kv export](/cli/azure/appconf
## Next steps > [!div class="nextstepaction"]
-> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
+> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
Azure Arc-enabled data services provides you the option to connect to Azure in t
The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services may or may not be available.
-Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and Postgres Hyperscale instances that you have deployed and the details about them, but you cannot take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl.
+Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and PostgreSQL Hyperscale instances that you have deployed and the details about them, but you cannot take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl.
Additionally, Azure Active Directory and Azure Role-Based Access Control can be used in the directly connected mode only because there is a dependency on a continuous and direct connection to Azure to provide this functionality.
azure-arc View Arc Data Services Inventory In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-arc-data-services-inventory-in-azure-portal.md
You can view your Azure Arc-enabled data services in the Azure portal or in your
## View resources in Azure portal
-After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server groups in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps:
+After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your Azure Arc-enabled SQL managed instances or Azure Arc-enabled PostgreSQL Hyperscale server groups in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps:
1. Go to **All services**. 1. Search for your database instance type.
azure-arc View Data Controller In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-data-controller-in-azure-portal.md
In the **indirect** connected mode, you must export and upload at least one type
## Azure portal
-After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server resources in the [Azure portal](https://portal.azure.com).
+After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any Azure Arc-enabled SQL managed instances or Azure Arc-enabled PostgreSQL Hyperscale server resources in the [Azure portal](https://portal.azure.com).
To find your data controller, search for it by name in the search bar and then select it.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
When attempting to onboard Kubernetes clusters to the Azure Arc platform, the lo
Cannot load native module 'Crypto.Hash._MD5' ```
-Sometimes, dependent modules fail to download successfully when adding the extensions `connectedk8s` and `k8s-configuration` through Azure CLI or Azure Powershell. To fix this problem, manually remove and then add the extensions in the local environment.
+Sometimes, dependent modules fail to download successfully when adding the extensions `connectedk8s` and `k8s-configuration` through Azure CLI or Azure PowerShell. To fix this problem, manually remove and then add the extensions in the local environment.
To remove the extensions, use:
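A minimal sketch of that remove-and-add cycle for both extensions:

```azurecli-interactive
az extension remove --name connectedk8s
az extension remove --name k8s-configuration
az extension add --name connectedk8s
az extension add --name k8s-configuration
```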
Once above permissions are granted, you can now proceed to [enabling the custom
The following troubleshooting steps provide guidance on validating the deployment of all the Open Service Mesh extension components on your cluster.
-### 1. Check OSM Controller **Deployment**
+### Check OSM Controller **Deployment**
```bash kubectl get deployment -n arc-osm-system --selector app=osm-controller ```
NAME READY UP-TO-DATE AVAILABLE AGE
osm-controller 1/1 1 1 59m ```
-### 2. Check the OSM Controller **Pod**
+### Check the OSM Controller **Pod**
```bash kubectl get pods -n arc-osm-system --selector app=osm-controller ```
osm-controller-b5bd66db-wvl9w 1/1 Running 0 31m
Even though we had one controller _evicted_ at some point, we have another one which is `READY 1/1` and `Running` with `0` restarts. If the `READY` column shows anything other than `1/1`, the service mesh is in a broken state.
-Column `READY` with `0/1` indicates the control plane container is crashing - we need to get logs. See `Get OSM Controller Logs from Azure Support Center` section below.
+Column `READY` with `0/1` indicates the control plane container is crashing - we need to get logs. Use the following command to inspect controller logs:
+```bash
+kubectl logs -n arc-osm-system -l app=osm-controller
+```
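If the controller is crash-looping, the current container's logs may be empty; one option is to read the logs of the previously terminated container:

```bash
kubectl logs -n arc-osm-system -l app=osm-controller --previous
```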
Column `READY` with a number higher than 1 after the `/` would indicate that there are sidecars installed. OSM Controller would most likely not work with any sidecars attached to it.
-### 3. Check OSM Controller **Service**
+### Check OSM Controller **Service**
```bash kubectl get service -n arc-osm-system osm-controller ```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AG
osm-controller ClusterIP 10.0.31.254 <none> 15128/TCP,9092/TCP 67m ```
-> Note: The `CLUSTER-IP` would be different. The service `NAME` and `PORT(S)` must be the same as seen in the output.
+> [!NOTE]
+> The `CLUSTER-IP` would be different. The service `NAME` and `PORT(S)` must be the same as seen in the output.
-### 4. Check OSM Controller **Endpoints**:
+### Check OSM Controller **Endpoints**
```bash kubectl get endpoints -n arc-osm-system osm-controller ```
osm-controller 10.240.1.115:9092,10.240.1.115:15128 69m
If the user's cluster has no `ENDPOINTS` for `osm-controller`, the control plane is unhealthy. This may be caused by the OSM Controller pod crashing or never being deployed correctly.
-### 5. Check OSM Injector **Deployment**
+### Check OSM Injector **Deployment**
```bash kubectl get deployments -n arc-osm-system osm-injector ```
NAME READY UP-TO-DATE AVAILABLE AGE
osm-injector 1/1 1 1 73m ```
-### 6. Check OSM Injector **Pod**
+### Check OSM Injector **Pod**
```bash kubectl get pod -n arc-osm-system --selector app=osm-injector ```
osm-injector-5986c57765-vlsdk 1/1 Running 0 73m
The `READY` column must be `1/1`. Any other value would indicate an unhealthy osm-injector pod.
-### 7. Check OSM Injector **Service**
+### Check OSM Injector **Service**
```bash kubectl get service -n arc-osm-system osm-injector ```
osm-injector ClusterIP 10.0.39.54 <none> 9090/TCP 75m
Ensure the port listed for the `osm-injector` service is `9090`. There should be no `EXTERNAL-IP`.
-### 8. Check OSM Injector **Endpoints**
+### Check OSM Injector **Endpoints**
```bash kubectl get endpoints -n arc-osm-system osm-injector ```
osm-injector 10.240.1.172:9090 75m
For OSM to function, there must be at least one endpoint for `osm-injector`. The IP address of your OSM Injector endpoints will be different. The port `9090` must be the same.
-### 9. Check **Validating** and **Mutating** webhooks:
+### Check **Validating** and **Mutating** webhooks
```bash kubectl get ValidatingWebhookConfiguration --selector app=osm-controller ``` If the Validating Webhook is healthy, you will get output similar to the following: ```
-NAME WEBHOOKS AGE
-arc-osm-webhook-osm 1 81m
+NAME WEBHOOKS AGE
+osm-validator-mesh-osm 1 81m
``` ```bash
-kubectl get MutatingWebhookConfiguration --selector app=osm-controller
+kubectl get MutatingWebhookConfiguration --selector app=osm-injector
``` If the Mutating Webhook is healthy, you will get output similar to the following: ```
-NAME WEBHOOKS AGE
-arc-osm-webhook-osm 1 102m
+NAME WEBHOOKS AGE
+arc-osm-webhook-osm 1 102m
```
-Check for the service and the CA bundle of the **Validating** webhook:
-```
-kubectl get ValidatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
+Check for the service and the CA bundle of the **Validating** webhook:
+```bash
+kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service'
``` A well configured Validating Webhook Configuration would have the following output:
A well configured Validating Webhook Configuration would have the following outp
{ "name": "osm-config-validator", "namespace": "arc-osm-system",
- "path": "/validate-webhook",
+ "path": "/validate",
"port": 9093 } ```
-Check for the service and the CA bundle of the **Mutating** webhook:
+Check for the service and the CA bundle of the **Mutating** webhook:
```bash kubectl get MutatingWebhookConfiguration arc-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service' ```
A well configured Mutating Webhook Configuration would have the following output
Check whether OSM Controller has given the Validating (or Mutating) Webhook a CA Bundle by using the following command: ```bash
-kubectl get ValidatingWebhookConfiguration arc-osm-webhook-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
+kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm -o json | jq -r '.webhooks[0].clientConfig.caBundle' | wc -c
``` ```bash
Example output:
```bash 1845 ```
-The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or some number under a 1000, it would indicate that the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the ValidatingWebhook would throw an error and prohibit you from making changes to the `osm-config` ConfigMap in the `arc-osm-system` namespace.
-
-Let's look at a sample error when the CA Bundle is incorrect:
-- An attempt to change the `osm-config` ConfigMap:
- ```bash
- kubectl patch ConfigMap osm-config -n arc-osm-system --type merge --patch '{"data":{"config_resync_interval":"2m"}}'
- ```
-- Error output:
- ```bash
- Error from server (InternalError): Internal error occurred: failed calling webhook "osm-config-webhook.k8s.io": Post https://osm-config-validator.arc-osm-system.svc:9093/validate-webhook?timeout=30s: x509: certificate signed by unknown authority
- ```
-
-Use one of the following workarounds when the **Validating** Webhook Configuration has a bad certificate:
-- Option 1. Restart OSM Controller - This will restart the OSM Controller. On start, it will overwrite the CA Bundle of both the Mutating and Validating webhooks.
- ```bash
- kubectl rollout restart deployment -n arc-osm-system osm-controller
- ```
--- Option 2. Delete the Validating Webhook - Removing the Validating Webhook makes mutations of the `osm-config` ConfigMap no longer validated. Any patch will go through. The OSM Controller may have to be restarted to quickly rewrite the CA Bundle.
- ```bash
- kubectl delete ValidatingWebhookConfiguration arc-osm-webhook-osm
- ```
+The number in the output indicates the number of bytes, or the size of the CA Bundle. If this is empty, 0, or some number under 1000, the CA Bundle is not correctly provisioned. Without a correct CA Bundle, the ValidatingWebhook would throw an error.
-- Option 3. Delete and Patch: The following command will delete the validating webhook, allowing you to add any values, and will immediately try to apply a patch
- ```bash
- kubectl delete ValidatingWebhookConfiguration arc-osm-webhook-osm; kubectl patch ConfigMap osm-config -n arc-osm-system --type merge --patch '{"data":{"config_resync_interval":"15s"}}'
- ```
+### Check the `osm-mesh-config` resource
+Check that the resource exists:
-### 10. Check the `osm-config` **ConfigMap**
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n arc-osm-system
+```
->[!Note]
->The OSM Controller does not require `osm-config` ConfigMap to be present in the `arc-osm-system` namespace. The controller has reasonable default values for the config and can operate without it.
+Check the content of the OSM MeshConfig:
-Check for the existence:
-```bash
-kubectl get ConfigMap -n arc-osm-system osm-config
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n arc-osm-system -o yaml
```
-Check the content of the `osm-config` ConfigMap:
-```bash
-kubectl get ConfigMap -n arc-osm-system osm-config -o json | jq '.data'
-```
-You will get the following output:
-```json
-{
- "egress": "false",
- "enable_debug_server": "false",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "permissive_traffic_policy_mode": "true",
- "prometheus_scraping": "true",
- "service_cert_validity_duration": "24h",
- "tracing_enable": "false",
- "use_https_ingress": "false",
-}
+```yaml
+apiVersion: config.openservicemesh.io/v1alpha1
+kind: MeshConfig
+metadata:
+  creationTimestamp: "0000-00-00T00:00:00Z"
+ generation: 1
+ name: osm-mesh-config
+ namespace: arc-osm-system
+ resourceVersion: "2494"
+ uid: 6c4d67f3-c241-4aeb-bf4f-b029b08faa31
+spec:
+ certificate:
+ certKeyBitSize: 2048
+ serviceCertValidityDuration: 24h
+ featureFlags:
+ enableAsyncProxyServiceMapping: false
+ enableEgressPolicy: true
+ enableEnvoyActiveHealthChecks: false
+ enableIngressBackendPolicy: true
+ enableMulticlusterMode: false
+ enableRetryPolicy: false
+ enableSnapshotCacheMode: false
+ enableWASMStats: true
+ observability:
+ enableDebugServer: false
+ osmLogLevel: info
+ tracing:
+ enable: false
+ sidecar:
+ configResyncInterval: 0s
+ enablePrivilegedInitContainer: false
+ logLevel: error
+ resources: {}
+ traffic:
+ enableEgress: false
+ enablePermissiveTrafficPolicyMode: true
+ inboundExternalAuthorization:
+ enable: false
+ failureModeAllow: false
+ statPrefix: inboundExtAuthz
+ timeout: 1s
+ inboundPortExclusionList: []
+ outboundIPRangeExclusionList: []
+ outboundPortExclusionList: []
+kind: List
+metadata:
+ resourceVersion: ""
+ selfLink: ""
```
-Refer [OSM ConfigMap documentation](https://release-v0-8.docs.openservicemesh.io/docs/osm_config_map/) to understand `osm-config` ConfigMap values.
-
-### 11. Check Namespaces
+`osm-mesh-config` resource values:
+
+| Key | Type | Default Value | Kubectl Patch Command Examples |
+|--|--|--|--|
+| spec.traffic.enableEgress | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":false}}}' --type=merge` |
+| spec.traffic.enablePermissiveTrafficPolicyMode | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge` |
+| spec.traffic.outboundPortExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"outboundPortExclusionList":[6379,8080]}}}' --type=merge` |
+| spec.traffic.outboundIPRangeExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.0/32","1.1.1.1/24"]}}}' --type=merge` |
+| spec.traffic.inboundPortExclusionList | array | `[]` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"inboundPortExclusionList":[6379,8080]}}}' --type=merge` |
+| spec.certificate.serviceCertValidityDuration | string | `"24h"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"certificate":{"serviceCertValidityDuration":"24h"}}}' --type=merge` |
+| spec.observability.enableDebugServer | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"enableDebugServer":false}}}' --type=merge` |
+| spec.observability.osmLogLevel | string | `"info"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"osmLogLevel":"info"}}}' --type=merge` |
+| spec.observability.tracing.enable | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"observability":{"tracing":{"enable":true}}}}' --type=merge` |
+| spec.sidecar.enablePrivilegedInitContainer | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"sidecar":{"enablePrivilegedInitContainer":true}}}' --type=merge` |
+| spec.sidecar.logLevel | string | `"error"` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"sidecar":{"logLevel":"error"}}}' --type=merge` |
+| spec.featureFlags.enableWASMStats | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableWASMStats":true}}}' --type=merge` |
+| spec.featureFlags.enableEgressPolicy | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEgressPolicy":true}}}' --type=merge` |
+| spec.featureFlags.enableMulticlusterMode | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableMulticlusterMode":false}}}' --type=merge` |
+| spec.featureFlags.enableSnapshotCacheMode | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableSnapshotCacheMode":false}}}' --type=merge` |
+| spec.featureFlags.enableAsyncProxyServiceMapping | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableAsyncProxyServiceMapping":false}}}' --type=merge` |
+| spec.featureFlags.enableIngressBackendPolicy | bool | `true` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableIngressBackendPolicy":true}}}' --type=merge` |
+| spec.featureFlags.enableEnvoyActiveHealthChecks | bool | `false` | `kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"featureFlags":{"enableEnvoyActiveHealthChecks":false}}}' --type=merge` |
+
+### Check Namespaces
>[!Note] >The arc-osm-system namespace will never participate in a service mesh and will never be labeled and/or annotated with the key/values below.
When a Kubernetes namespace is part of the mesh, the following must be true:
View the annotations of the namespace `bookbuyer`: ```bash
-kc get namespace bookbuyer -o json | jq '.metadata.annotations'
+kubectl get namespace bookbuyer -o json | jq '.metadata.annotations'
``` The following annotation must be present:
The following annotation must be present:
View the labels of the namespace `bookbuyer`: ```bash
-kc get namespace bookbuyer -o json | jq '.metadata.labels'
+kubectl get namespace bookbuyer -o json | jq '.metadata.labels'
``` The following label must be present:
The following label must be present:
Note that if you are not using the `osm` CLI, you could also manually add these annotations to your namespaces. If a namespace is not annotated with `"openservicemesh.io/sidecar-injection": "enabled"` or not labeled with `"openservicemesh.io/monitored-by": "osm"`, the OSM Injector will not add Envoy sidecars. >[!Note]
->After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with `kubectl rollout restard deployment` command.
+>After `osm namespace add` is called, only **new** pods will be injected with an Envoy sidecar. Existing pods must be restarted with the `kubectl rollout restart deployment` command, as shown below.
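For example (the deployment and namespace names are placeholders):

```bash
kubectl rollout restart deployment <deployment-name> -n <namespace>
```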
-### 12. Verify the SMI CRDs
+### Verify the SMI CRDs
Check whether the cluster has the required CRDs: ```bash kubectl get crds ```
-Ensure that the CRDs correspond to the same OSM upstream version. E.g. if you are using v0.8.4, ensure that the CRDs match the ones that are available in the release branch v0.8.4 of [OSM OSS project](https://docs.openservicemesh.io/). Refer [OSM release notes](https://github.com/openservicemesh/osm/releases).
+Ensure that the CRDs correspond to the versions available in the release branch. For example, if you are using OSM-Arc v1.0.0-1, navigate to the [SMI supported versions page](https://docs.openservicemesh.io/docs/overview/smi/) and select v1.0 from the Releases dropdown to check which CRD versions are in use.
Get the versions of the CRDs installed with the following command: ```bash
for x in $(kubectl get crds --no-headers | awk '{print $1}' | grep 'smi-spec.io'
done ```
-If CRDs are missing, use the following commands to install them on the cluster. Ensure that you replace the version in the command.
+If CRDs are missing, use the following commands to install them on the cluster. If you are using a version of OSM-Arc other than v1.0, ensure that you replace the version in the command (for example, v1.1.0 would use release-v1.1).
+ ```bash
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8.2/charts/osm/crds/access.yaml
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_http_route_group.yaml
+
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_tcp_route.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8.2/charts/osm/crds/specs.yaml
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_access.yaml
-kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v0.8.2/charts/osm/crds/split.yaml
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm/release-v1.0/cmd/osm-bootstrap/crds/smi_traffic_split.yaml
```
-### 13. Troubleshoot Certificate Management
+Refer to [OSM release notes](https://github.com/openservicemesh/osm/releases) to see CRD changes between releases.
+
+### Troubleshoot certificate management
Information on how OSM issues and manages certificates to Envoy proxies running on application pods can be found on the [OSM docs site](https://docs.openservicemesh.io/docs/guides/certificates/).
-### 14. Upgrade Envoy
-When a new pod is created in a namespace monitored by the add-on, OSM will inject an [envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://release-v0-11.docs.openservicemesh.io/docs/getting_started/upgrade/#envoy) on the OSM docs site.
+### Upgrade Envoy
+When a new pod is created in a namespace monitored by the add-on, OSM will inject an [Envoy proxy sidecar](https://docs.openservicemesh.io/docs/guides/app_onboarding/sidecar_injection/) in that pod. If the envoy version needs to be updated, steps to do so can be found in the [Upgrade Guide](https://docs.openservicemesh.io/docs/guides/upgrade/#envoy) on the OSM docs site.
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh (Preview)
+ Title: Azure Arc-enabled Open Service Mesh
description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster Previously updated : 07/23/2021 Last updated : 04/07/2022
-# Azure Arc-enabled Open Service Mesh (Preview)
+# Azure Arc-enabled Open Service Mesh
[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI
### Support limitations for Azure Arc-enabled Open Service Mesh - Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster.-- Public preview is available for Open Service Mesh version v0.8.4 and above. Find out the latest version of the release [here](https://github.com/Azure/osm-azure/releases). The supported release versions are appended with notes. Ignore the tags associated with intermediate releases.
+- Support is available for Azure Arc-enabled Open Service Mesh version v1.0.0-1 and above. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.
- The following Kubernetes distributions are currently supported: - AKS Engine - AKS on HCI
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI
- OpenShift Kubernetes Distribution - Amazon Elastic Kubernetes Service - VMware Tanzu Kubernetes Grid-- Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available with [limited support](https://github.com/microsoft/Docker-Provider/blob/ci_dev/Documentation/OSMPrivatePreview/ReadMe.md).
+- Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available with [limited support](#monitoring-application-using-azure-monitor-and-applications-insights).
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)] ### Prerequisites - Ensure you have met all the common prerequisites for cluster extensions listed [here](extensions.md#prerequisites).-- Use az k8s-extension CLI version >= v0.4.0
+- Use az k8s-extension CLI version >= v1.0.4
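A quick way to check the installed `k8s-extension` CLI version and upgrade it if needed:

```azurecli-interactive
az extension show --name k8s-extension --query version
az extension add --upgrade --name k8s-extension
```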
-## Basic installation of Azure Arc-enabled OSM
+## Basic installation
+
+Arc-enabled Open Service Mesh can be deployed through the Azure portal, an ARM template, a built-in Azure policy, or the Azure CLI.
+
+### Basic installation using Azure portal
+To deploy using Azure portal, once you have an Arc connected cluster, go to the cluster's **Open Service Mesh** section.
+
+[ ![Open Service Mesh located under Settings for Arc enabled Kubernetes cluster](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-install.jpg#lightbox)
+
+Simply select the **Install extension** button to deploy the latest version of the extension.
+
+Alternatively, you can use the CLI experience captured below. For at-scale onboarding, read further in this article about deployment using [ARM template](#install-azure-arc-enabled-osm-using-arm-template) and using [Azure Policy](#install-azure-arc-enabled-osm-using-built-in-policy).
+
+### Basic installation using Azure CLI
The following steps assume that you already have a cluster with a supported Kubernetes distribution connected to Azure Arc. Ensure that your KUBECONFIG environment variable points to the kubeconfig of the Arc-enabled Kubernetes cluster.
Ensure that your KUBECONFIG environment variable points to the kubeconfig of the
Set the environment variables: ```azurecli-interactive
-export VERSION=<osm-arc-version>
export CLUSTER_NAME=<arc-cluster-name> export RESOURCE_GROUP=<resource-group-name> ```
-While Azure Arc-enabled Open Service Mesh is in preview, the `az k8s-extension create` command only accepts `pilot` for the `--release-train` flag. `--auto-upgrade-minor-version` is always set to `false` and a version must be provided. If you are using an OpenShift cluster, use the steps in the [section](#install-osm-on-an-openshift-cluster).
+If you are using an OpenShift cluster, skip to the OpenShift installation steps [below](#install-osm-on-an-openshift-cluster).
+
+Create the extension:
+> [!NOTE]
+> If you would like to pin a specific version of OSM, add the `--version x.y.z` flag to the `create` command. Note that this will set the value for `auto-upgrade-minor-version` to false.
```azurecli-interactive
-az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --release-train pilot --name osm --version $VERSION
+az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
```
-You should see output similar to the output shown below. It may take 3-5 minutes for the actual OSM helm chart to get deployed to the cluster. Until this deployment happens, you will continue to see installState as Pending.
+You should see output similar to the example below. It may take 3-5 minutes for the actual OSM helm chart to get deployed to the cluster. Until this deployment happens, you will continue to see installState as Pending.
```json {
- "autoUpgradeMinorVersion": false,
+ "autoUpgradeMinorVersion": true,
"configurationSettings": {}, "creationTime": "2021-04-29T17:50:11.4116524+00:00", "errorInfo": {
You should see output similar to the output shown below. It may take 3-5 minutes
"lastStatusTime": null, "location": null, "name": "osm",
- "releaseTrain": "pilot",
+ "releaseTrain": "stable",
"resourceGroup": "$RESOURCE_GROUP", "scope": { "cluster": {
You should see output similar to the output shown below. It may take 3-5 minutes
}, "statuses": [], "type": "Microsoft.KubernetesConfiguration/extensions",
- "version": "x.x.x"
+ "version": "x.y.z"
} ```
-## Custom installations of Azure Arc-enabled OSM
-The following sections describe certain custom installations of Azure Arc-enabled OSM. Custom installations require setting
+Next, [validate your installation](#validate-installation).
+
+## Custom installations
+The following sections describe certain custom installations of Azure Arc-enabled OSM. Custom installations require setting
values of OSM in a JSON file and passing them into the `k8s-extension create` CLI command as described below. ### Install OSM on an OpenShift cluster
values of OSM by in a JSON file and passing them into `k8s-extension create` CLI
"osm.osm.enablePrivilegedInitContainer": "true" } ```
-
+ 2. [Install OSM with custom values](#setting-values-during-osm-installation).
-
+ 3. Add the privileged [security context constraint](https://docs.openshift.com/container-platform/4.7/authentication/managing-security-context-constraints.html) to each service account for the applications in the mesh. ```azurecli-interactive oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace>
values of OSM by in a JSON file and passing them into `k8s-extension create` CLI
It may take 3-5 minutes for the actual OSM helm chart to get deployed to the cluster. Until this deployment happens, you will continue to see installState as Pending.
-To ensure that the privileged init container setting is not reverted to the default, pass in the "osm.osm.enablePrivilegedInitContainer" : "true" configuration setting to all subsequent az k8s-extension create commands.
+To ensure that the privileged init container setting is not reverted to the default, pass in the `"osm.osm.enablePrivilegedInitContainer" : "true"` configuration setting to all subsequent az k8s-extension create commands.
### Enable High Availability features on installation OSM's control plane components are built with High Availability and Fault Tolerance in mind. This section describes how to
enable Horizontal Pod Autoscaling (HPA) and Pod Disruption Budget (PDB) during i
considerations of High Availability on OSM [here](https://docs.openservicemesh.io/docs/guides/ha_scale/high_availability/). #### Horizontal Pod Autoscaling (HPA)
-HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target
+HPA automatically scales up or down control plane pods based on the average target CPU utilization (%) and average target
memory utilization (%) defined by the user. To enable HPA and set applicable values on OSM control plane pods during installation, create or
-append to your existing JSON settings file as below, repeating the key/value pairs for each control plane pod
-(`osmController`, `injector`) that you want to enable HPA on.
+append to your existing JSON settings file as below, repeating the key/value pairs for each control plane pod
+(`osmController`, `injector`) that you want to enable HPA on.
```json {
append to your existing JSON settings file as below, repeating the key/value pai
Now, [install OSM with custom values](#setting-values-during-osm-installation). #### Pod Disruption Budget (PDB)
-In order to prevent disruptions during planned outages, control plane pods `osm-controller` and `osm-injector` have a PDB
+In order to prevent disruptions during planned outages, control plane pods `osm-controller` and `osm-injector` have a PDB
that ensures there is always at least 1 pod corresponding to each control plane application.
-To enable PDB, create or append to your existing JSON settings file as follows for each desired control plane pod
+To enable PDB, create or append to your existing JSON settings file as follows for each desired control plane pod
(`osmController`, `injector`): ```json {
To enable PDB, create or append to your existing JSON settings file as follows f
Now, [install OSM with custom values](#setting-values-during-osm-installation).
-### Install OSM with cert-manager for Certificate Management
+### Install OSM with cert-manager for certificate management
[cert-manager](https://cert-manager.io/) is a provider that can be used for issuing signed certificates to OSM without
-the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://release-v0-11.docs.openservicemesh.io/docs/guides/certificates/)
+the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://docs.openservicemesh.io/docs/guides/certificates/)
and [demo](https://docs.openservicemesh.io/docs/demos/cert-manager_integration/) to learn more. > [!NOTE]
-> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system`.
-
-To install OSM with cert-manager as the certificate provider, create or append to your existing JSON settings file the `certificateProvider.kind`
+> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`.
+To install OSM with cert-manager as the certificate provider, create or append to your existing JSON settings file the `certificateProvider.kind`
value set to cert-manager as shown below. If you would like to change from default cert-manager values specified in OSM documentation, also include and update the subsequent `certmanager.issuer` lines.
also include and update the subsequent `certmanager.issuer` lines.
Now, [install OSM with custom values](#setting-values-during-osm-installation).
-### Install OSM with Contour for Ingress
+### Install OSM with Contour for ingress
OSM provides multiple options to expose mesh services externally using ingress. OSM can use [Contour](https://projectcontour.io/), which works with the ingress controller installed outside the mesh and provisioned with a certificate to participate in the mesh. Refer to [OSM's ingress documentation](https://docs.openservicemesh.io/docs/guides/traffic_management/ingress/#1-using-contour-ingress-controller-and-gateway) and [demo](https://docs.openservicemesh.io/docs/demos/ingress_contour/) to learn more. > [!NOTE]
-> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system`.
-
+> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`.
To set required values for configuring Contour during OSM installation, append the following to your JSON settings file: ```json {
Now, [install OSM with custom values](#setting-values-during-osm-installation).
Any values that need to be set during OSM installation need to be saved to a single JSON file and passed in through the Azure CLI install command.
-Once you have created a JSON file with applicable values as described in above custom installation sections, set the
+Once you have created a JSON file with applicable values as described in above custom installation sections, set the
file path as an environment variable: ```azurecli-interactive export SETTINGS_FILE=<json-file-path> ```
-Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the
-`--configuration-settings` flag:
+Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the
+`--configuration-settings-file` flag:
```azurecli-interactive
- az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --release-train pilot --name osm --version $VERSION --configuration-settings-file $SETTINGS_FILE
+ az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm --configuration-settings-file $SETTINGS_FILE
``` ## Install Azure Arc-enabled OSM using ARM template
After connecting your cluster to Azure Arc, create a JSON file with the followin
} }, "ReleaseTrain": {
- "defaultValue": "Pilot",
+ "defaultValue": "Stable",
"type": "String", "metadata": { "description": "The release train."
az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GR
You should now be able to view the OSM resources and use the OSM extension in your cluster.
+## Install Azure Arc-enabled OSM using built-in policy
+
+A built-in policy is available on Azure portal under the category of **Kubernetes** by the name of **Azure Arc-enabled Kubernetes clusters should have the Open Service Mesh extension installed**.
+This policy can be assigned at the scope of a subscription or a resource group. The default action of this policy is **Deploy if not exists**.
+However, you could choose to simply audit the clusters for extension installations by changing the parameters during assignment.
+You will also be prompted to specify the version you wish to install (v1.0.0-1 or above) as a parameter.
+ ## Validate installation Run the following command.
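A minimal sketch, assuming the extension was created with the name `osm` as in the installation steps above:

```azurecli-interactive
az k8s-extension show --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name osm
```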
You should see a JSON output similar to the output below:
```json {
- "autoUpgradeMinorVersion": false,
+ "autoUpgradeMinorVersion": true,
"configurationSettings": {}, "creationTime": "2021-04-29T19:22:00.7649729+00:00", "errorInfo": {
You should see a JSON output similar to the output below:
"lastStatusTime": "2021-04-29T19:23:27.642+00:00", "location": null, "name": "osm",
- "releaseTrain": "pilot",
+ "releaseTrain": "stable",
"resourceGroup": "$RESOURCE_GROUP", "scope": { "cluster": {
You should see a JSON output similar to the output below:
}, "statuses": [], "type": "Microsoft.KubernetesConfiguration/extensions",
- "version": "x.x.x"
+ "version": "x.y.z"
} ``` ## OSM controller configuration
kubectl describe meshconfig osm-mesh-config -n arc-osm-system
The output would show the default values: ```azurecli-interactive
-Certificate:
+ Certificate:
+ Cert Key Bit Size: 2048
Service Cert Validity Duration: 24h Feature Flags:
- Enable Egress Policy: true
- Enable Multicluster Mode: false
- Enable WASM Stats: true
+ Enable Async Proxy Service Mapping: false
+ Enable Egress Policy: true
+ Enable Envoy Active Health Checks: false
+ Enable Ingress Backend Policy: true
+ Enable Multicluster Mode: false
+ Enable Retry Policy: false
+ Enable Snapshot Cache Mode: false
+ Enable WASM Stats: true
Observability: Enable Debug Server: false Osm Log Level: info Tracing:
- Address: jaeger.osm-system.svc.cluster.local
- Enable: false
- Endpoint: /api/v2/spans
- Port: 9411
+ Enable: false
Sidecar: Config Resync Interval: 0s Enable Privileged Init Container: false
- Envoy Image: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
- Init Container Image: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
Log Level: error
- Max Data Plane Connections: 0
Resources: Traffic: Enable Egress: false
Certificate:
Failure Mode Allow: false Stat Prefix: inboundExtAuthz Timeout: 1s
- Use HTTPS Ingress: false
+ Inbound Port Exclusion List:
+ Outbound IP Range Exclusion List:
+ Outbound Port Exclusion List:
```
-Refer to the [Config API reference](https://docs.openservicemesh.io/docs/api_reference/config/v1alpha1/) for more information. Notice that **spec.traffic.enablePermissiveTrafficPolicyMode** is set to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
+Refer to the [Config API reference](https://docs.openservicemesh.io/docs/api_reference/config/v1alpha1/) for more information. Notice that `spec.traffic.enablePermissiveTrafficPolicyMode` is set to `true`. When OSM is in permissive traffic policy mode, [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
+
+`osm-mesh-config` can also be viewed on Azure portal by selecting **Edit configuration** in the cluster's Open Service Mesh section.
+
+[ ![Edit configuration button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration.jpg#lightbox)
### Making changes to OSM controller configuration > [!NOTE] > Values in the MeshConfig `osm-mesh-config` are persisted across upgrades.- Changes to `osm-mesh-config` can be made using the kubectl patch command. In the following example, the permissive traffic policy mode is changed to false. ```azurecli-interactive
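# A sketch of the patch, using the spec.traffic.enablePermissiveTrafficPolicyMode key shown in the MeshConfig values table
kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
```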
If an incorrect value is used, validations on the MeshConfig CRD will prevent th
```azurecli-interactive kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":"no"}}}' --type=merge- # Validations on the CRD will deny this change The MeshConfig "osm-mesh-config" is invalid: spec.traffic.enableEgress: Invalid value: "string": spec.traffic.enableEgress in body must be of type boolean: "string" ```
-## OSM controller configuration (version v0.8.4)
-
-Currently you can access and configure the OSM controller configuration via the ConfigMap. To view the OSM controller configuration settings, query the `osm-config` ConfigMap via `kubectl` to view its configuration settings.
-
-```azurecli-interactive
-kubectl get configmap osm-config -n arc-osm-system -o json
-```
-
-Output:
+Alternatively, to edit `osm-mesh-config` in Azure portal, select **Edit configuration** in the cluster's Open Service Mesh section.
-```json
-{
- "egress": "false",
- "enable_debug_server": "false",
- "enable_privileged_init_container": "false",
- "envoy_log_level": "error",
- "permissive_traffic_policy_mode": "true",
- "prometheus_scraping": "true",
- "service_cert_validity_duration": "24h",
- "tracing_enable": "false",
- "use_https_ingress": "false"
-}
-```
-
-Read [OSM ConfigMap documentation](https://release-v0-8.docs.openservicemesh.io/docs/osm_config_map/) to understand each of the available configurations.
-
-To make changes to the OSM ConfigMap for version v0.8.4, use the following guidance:
-
-1. Copy and save the changes you wish to make in a JSON file. In this example, we are going to change the permissive_traffic_policy_mode from true to false. Each time you make a change to `osm-config`, you will have to provide the full list of changes (compared to the default `osm-config`) in a JSON file.
- ```json
- {
- "osm.osm.enablePermissiveTrafficPolicy" : "false"
- }
- ```
-
- Set the file path as an environment variable:
-
- ```azurecli-interactive
- export SETTINGS_FILE=<json-file-path>
- ```
-
-2. Run the same `az k8s-extension create` command used to create the extension, but now pass in the configuration settings file:
- ```azurecli-interactive
- az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --release-train pilot --name osm --version $VERSION --configuration-settings-file $SETTINGS_FILE
- ```
-
- > [!NOTE]
- > To ensure that the ConfigMap changes are not reverted to the default, pass in the same configuration settings to all subsequent az k8s-extension create commands.
+[ ![Edit configuration button in the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-configuration-edit.jpg#lightbox)
## Using Azure Arc-enabled OSM
Add namespaces to the mesh by running the following command:
```azurecli-interactive osm namespace add <namespace_name> ```
+Namespaces can be onboarded from Azure portal as well by selecting **+Add** in the cluster's Open Service Mesh section.
+
+[ ![+Add button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg) ](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg#lightbox)
More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services). ### Configure OSM with Service Mesh Interface (SMI) policies
-You can start with a [demo application](https://release-v0-11.docs.openservicemesh.io/docs/getting_started/quickstart/manual_demo/#deploy-applications) or use your test environment to try out SMI policies.
-
-> [!NOTE]
-> Ensure that the version of the bookstore application you run matches the version of the OSM extension installed on your cluster. Ex: if you are using v0.8.4 of the OSM extension, use the bookstore demo from release-v0.8 branch of OSM upstream repository.
+You can start with a [sample application](https://docs.openservicemesh.io/docs/getting_started/install_apps/) or use your test environment to try out SMI policies.
+> [!NOTE]
+> If you are using sample applications, ensure that their versions match the version of the OSM extension installed on your cluster. For example, if you are using v1.0.0 of the OSM extension, use the bookstore manifest from the release-v1.0 branch of the OSM upstream repository.
### Configuring your own Jaeger, Prometheus and Grafana instances
-The OSM extension does not install add-ons like [Flagger](https://docs.flagger.app/), [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/) and [Grafana](https://grafana.com/docs/grafana/latest/installation/) so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, check the following documentation:
+The OSM extension does not install add-ons like [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/), [Grafana](https://grafana.com/docs/grafana/latest/installation/) and [Flagger](https://docs.flagger.app/) so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, refer to the following documentation:
> [!NOTE]
-> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name 'arc-osm-system' when making changes to `osm-mesh-config`.
-
+> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system` when making changes to `osm-mesh-config`.
- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/guides/observability/tracing/#byo-bring-your-own)-- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/guides/observability/metrics/#byo-prometheus)-- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/guides/observability/metrics/#importing-dashboards-on-a-byo-grafana-instance)
+- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/guides/observability/metrics/#prometheus)
+- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/guides/observability/metrics/#grafana)
- [OSM Progressive Delivery with Flagger](https://docs.flagger.app/tutorials/osm-progressive-delivery) ## Monitoring applications using Azure Monitor and Application Insights
-Both Azure Monitor and Azure Application Insights helps you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
+Both Azure Monitor and Azure Application Insights help you maximize the availability and performance of your applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
-Azure Arc-enabled Open Service Mesh will have deep integrations into both of these Azure services, and provide a seemless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. Follow the steps below to allow Azure Monitor to scrape prometheus endpoints for collecting application metrics.
+Azure Arc-enabled Open Service Mesh will have deep integrations into both of these Azure services, providing a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. Follow the steps below to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics.
-1. Ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. Follow the guidance [available here](#onboard-namespaces-to-the-service-mesh).
+1. Follow the guidance available [here](#onboard-namespaces-to-the-service-mesh) to ensure that the application namespaces that you wish to be monitored are onboarded to the mesh.
-2. Expose the prometheus endpoints for application namespaces.
+2. Expose the Prometheus endpoints for application namespaces.
```azurecli-interactive osm metrics enable --namespace <namespace1> osm metrics enable --namespace <namespace2> ```
- For v0.8.4, ensure that `prometheus_scraping` is set to `true` in the `osm-config` ConfigMap.
3. Install the Azure Monitor extension using the guidance available [here](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json).
-4. Add the namespaces you want to monitor in container-azm-ms-osmconfig ConfigMap. Download the ConfigMap from [here](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-osmconfig.yaml).
- ```azurecli-interactive
- monitor_namespaces = ["namespace1", "namespace2"]
- ```
+4. Create a ConfigMap in the `kube-system` namespace that enables Azure Monitor to monitor your namespaces. For example, create a `container-azm-ms-osmconfig.yaml` with the following to monitor `<namespace1>` and `<namespace2>`:
+
+ ```yaml
+ kind: ConfigMap
+ apiVersion: v1
+ data:
+ schema-version: v1
+ config-version: ver1
+ osm-metric-collection-configuration: |-
+ # OSM metric collection settings
+ [osm_metric_collection_configuration]
+ [osm_metric_collection_configuration.settings]
+ # Namespaces to monitor
+ monitor_namespaces = ["<namespace1>", "<namespace2>"]
+ metadata:
+ name: container-azm-ms-osmconfig
+ namespace: kube-system
+ ```
5. Run the following kubectl command ```azurecli-interactive
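# A sketch, assuming the ConfigMap from step 4 was saved as container-azm-ms-osmconfig.yaml
kubectl apply -f container-azm-ms-osmconfig.yaml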
InsightsMetrics
| extend t=parse_json(Tags) | where t.app == "namespace1" ```
-Read more about integration with Azure Monitor [here](https://github.com/microsoft/Docker-Provider/blob/ci_dev/Documentation/OSMPrivatePreview/ReadMe.md).
-
-### Navigating the OSM dashboard
-
-1. Access your Azure Arc-connected Kubernetes cluster using this [link](https://aka.ms/azmon/osmarcux).
-2. Go to Azure Monitor and navigate to the Reports tab to access the OSM workbook.
-3. Select the time-range & namespace to scope your services.
-
-![OSM workbook](./media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg)
#### Requests tab
Read more about integration with Azure Monitor [here](https://github.com/microso
#### Connections tab - This tab provides you a summary of all the connections between your services in Open Service Mesh.-- Outbound connections: Total number of connections between Source and destination services.-- Outbound active connections: Last count of active connections between source and destination in selected time range.-- Outbound failed connections: Total number of failed connections between source and destination service
+- Outbound connections: total number of connections between Source and destination services.
+- Outbound active connections: last count of active connections between source and destination in selected time range.
+- Outbound failed connections: total number of failed connections between source and destination service.
-## Upgrade the OSM extension instance to a specific version
+## Upgrade to a specific version of OSM
There may be some downtime of the control plane during upgrades. The data plane will only be affected during CRD upgrades.
-### Supported Upgrades
-
-The OSM extension can be upgraded up to the next minor version. Downgrades and major version upgrades are not supported at this time.
-
-### CRD Upgrades
-
-The OSM extension cannot be upgraded to a new version if that version contains CRD version updates without deleting the existing CRDs first. You can check if an OSM upgrade also includes CRD version updates by checking the CRD Updates section of the [OSM release notes](https://github.com/openservicemesh/osm/releases).
+### Supported upgrades
-Make sure to back up your Custom Resources prior to deleting the CRDs so that they can be easily recreated after upgrading. Afterwards, follow the upgrade instructions captured below.
+The OSM extension can be upgraded manually across minor and major versions. However, auto-upgrades (if enabled) will only work across minor versions.
-> [!NOTE]
-> Upgrading the CRDs will affect the data plane as the SMI policies won't exist between the time they're deleted and the time they're created again.
+### Upgrade to a specific OSM version manually
-### Upgrade instructions
+The following command will upgrade the OSM-Arc extension to a specific version:
-1. Delete the old CRDs and custom resources (Run from the root of the [OSM repo](https://github.com/openservicemesh/osm)). Ensure that the tag of the [OSM CRDs](https://github.com/openservicemesh/osm/tree/main/cmd/osm-bootstrap/crds) corresponds to the new version of the chart.
- ```azurecli-interactive
- kubectl delete --ignore-not-found --recursive -f ./charts/osm/crds/
+```azurecli-interactive
+az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --release-train stable --name osm --version x.y.z
+```
-2. Install the updated CRDs.
- ```azurecli-interactive
- kubectl apply -f charts/osm/crds/
- ```
+### Enable auto-upgrades
-3. Set the new chart version as an environment variable:
- ```azurecli-interactive
- export VERSION=<chart version>
- ```
-
-4. Run az k8s-extension create with the new chart version
- ```azurecli-interactive
- az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --release-train pilot --name osm --version $VERSION --configuration-settings-file $SETTINGS_FILE
- ```
+If auto-upgrades are not already enabled, run the following command to enable them. The current value of `--auto-upgrade-minor-version` can be verified by running the `az k8s-extension show` command as detailed in the [Validate installation](#validate-installation) stage.
-5. Recreate Custom Resources using new CRDs
+```azurecli-interactive
+az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --release-train stable --name osm --auto-upgrade-minor-version true
+```
## Uninstall Azure Arc-enabled OSM
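A sketch of the delete command, assuming the extension is named `osm`:

```azurecli-interactive
az k8s-extension delete --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name osm
```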
Verify that the extension instance has been deleted:
az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP ```
-This output should not include OSM. If you don't have any other extensions installed on your cluster, it will just be an empty array.
+This output should not include OSM. If you do not have any other extensions installed on your cluster, it will just be an empty array.
When you use the az k8s-extension command to delete the OSM extension, the arc-osm-system namespace is not removed, and the actual resources within the namespace (such as the mutating webhook configuration and the osm-controller pod) will take around 10 minutes to delete.
-> [!NOTE]
+> [!NOTE]
> Use the az k8s-extension CLI to uninstall OSM components managed by Arc. Using the OSM CLI to uninstall is not supported by Arc and can result in undesirable behavior.- ## Troubleshooting Refer to the troubleshooting guide [available here](troubleshooting.md#azure-arc-enabled-open-service-mesh).
+## Frequently asked questions
+
+### Is the extension of Azure Arc-enabled OSM zone redundant?
+Yes, all components of Azure Arc-enabled OSM are deployed across availability zones and are therefore zone redundant.
++ ## Next steps > **Just want to try things out?**
-> Get started quickly with an [Azure Arc Jumpstart](https://aka.ms/arc-jumpstart-osm) scenario using Cluster API.
+> Get started quickly with an [Azure Arc Jumpstart](https://aka.ms/arc-jumpstart-osm) scenario using Cluster API.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL Hyperscale (preview) services are currently available. * [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure.
-* Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](/azure/azure-arc/vmware-vsphere/overview) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
+* Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
## Key features and benefits
Some of the key scenarios that Azure Arc supports are:
* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
-* Perform virtual machine lifecycle and management operations for [VMware vSphere](/azure/azure-arc/vmware-vsphere/overview) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
+* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
* A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview). * Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines)
-* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
+* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
The following versions of the Windows and Linux operating system are officially
* Amazon Linux 2 * Oracle Linux 7 and 8
+> [!NOTE]
+> On Linux, Azure Arc-enabled servers installs several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers is not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves.
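Before installing the agent, a quick sanity check that systemd is actually the init system on a machine might look like the following (a sketch):

```
# If PID 1 is systemd, the machine can run the Arc-enabled servers daemons
ps -p 1 -o comm=
```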
> [!WARNING]
> If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
azure-arc Ssh Arc Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md
Error:
- "Failed to create ssh key file with error: \<ERROR\>." - "Failed to run ssh command with error: \<ERROR\>." - "Failed to get certificate info with error: \<ERROR\>."
+ - "Failed to create ssh key file with error: [WinError 2] The system cannot find the file specified."
+ - "Failed to create ssh key file with error: [Errno 2] No such file or directory: 'ssh-keygen'."
Resolution:
 - Provide the path to the folder that contains the SSH client executables by using the ```--ssh-client-folder``` parameter.
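For example, a connection that passes the client folder explicitly might look like the following (hypothetical resource group, machine name, user, and install path; assumes the `ssh` extension for Azure CLI):

```azurecli-interactive
# Point the CLI at the folder that contains ssh.exe and ssh-keygen.exe
az ssh arc --resource-group myResourceGroup --name myArcServer --local-user serveruser --ssh-client-folder "C:\Program Files\OpenSSH"
```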
Resolution:
## Disable SSH to Arc-enabled servers
This functionality can be disabled by completing the following actions:
 - Remove the SSH port from the allowed incoming ports: ```azcmagent config set incomingconnections.ports <other open ports,...>```
- - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
+ - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
az connectedvmware vcenter connect --custom-location <name of the custom locatio
## Collecting logs from the Arc resource bridge
-For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`Az arcappliance log`](https://docs.microsoft.com/cli/azure/arcappliance/logs?#az-arcappliance-logs-vmware) command.
+For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command.
The `az arcappliance logs` command must be run from a workstation that can communicate with the Arc resource bridge either via the cluster configuration IP address or the IP address of the Arc resource bridge VM.
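As an illustrative sketch only (the exact parameters are an assumption here; check `az arcappliance logs vmware --help` for the authoritative list), collecting logs by pointing the command at the resource bridge VM might look like:

```azurecli-interactive
# Hypothetical IP address of the Arc resource bridge VM
az arcappliance logs vmware --ip 10.0.0.4
```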
If you're running this command from a different workstation, you must make sure
## Next steps
-[Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
+[Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-administration.md
On the left, **Schedule updates** allows you to choose a maintenance window for
:::image type="content" source="media/cache-administration/redis-schedule-updates-2.png" alt-text="Screenshot showing schedule updates":::
-To specify a maintenance window, check the days you want and specify the maintenance window start hour for each day. Then, select **OK**. The maintenance window time is in UTC.
+To specify a maintenance window, check the days you want and specify the maintenance window start hour for each day. Then, select **OK**. The maintenance window time is in UTC and can only be configured on an hourly basis.
The default, and minimum, maintenance window for updates is five hours. This value isn't configurable from the Azure portal, but you can configure it in PowerShell using the `MaintenanceWindow` parameter of the [New-AzRedisCacheScheduleEntry](/powershell/module/az.rediscache/new-azrediscachescheduleentry) cmdlet. For more information, see [Can I manage scheduled updates using PowerShell, CLI, or other management tools?](#can-i-manage-scheduled-updates-using-powershell-cli-or-other-management-tools)
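Beyond the portal and PowerShell, a sketch of an equivalent Azure CLI call (hypothetical cache and resource group names; the five-hour window is expressed as an ISO 8601 duration):

```azurecli-interactive
# Schedule updates for Mondays starting at 02:00 UTC with a 5-hour window
az redis patch-schedule create --name contosoCache --resource-group contosoGroup \
    --schedule-entries '[{"dayOfWeek":"Monday","startHourUtc":"02","maintenanceWindow":"PT5H"}]'
```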
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Because your cache data is stored in memory, a rare and unplanned failure of mul
### Storage account for persistence
-Consider choosing a geo-redundant storage account to ensure high availability of persisted data. For more information, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json).
+Consider choosing a geo-redundant storage account to ensure high availability of persisted data. For more information, see [Azure Storage redundancy](../storage/common/storage-redundancy.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
## Import/Export
Azure cache for Redis supports the option to import and export Redis Database (R
### Storage account for export
-Consider choosing a geo-redundant storage account to ensure high availability of your exported data. For more information, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json).
+Consider choosing a geo-redundant storage account to ensure high availability of your exported data. For more information, see [Azure Storage redundancy](../storage/common/storage-redundancy.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
## Geo-replication
Applicable tiers: **Premium**

[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in a way that provides reads and writes to your primary cache, and that data is replicated to the secondary cache.
-For more information on how to set it up, see [Configure geo-replication for Premium Azure Cache for Redis instances](/azure/azure-cache-for-redis/cache-how-to-geo-replication).
+For more information on how to set it up, see [Configure geo-replication for Premium Azure Cache for Redis instances](./cache-how-to-geo-replication.md).
If the region hosting the primary cache goes down, you'll need to start the failover: first, unlink the secondary cache, and then update your application to point to the secondary cache for reads and writes.
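As a sketch, unlinking the secondary cache from the CLI might look like the following (hypothetical cache names; run against the primary cache's resource group):

```azurecli-interactive
# Remove the geo-replication link so the secondary can accept writes
az redis server-link delete --name contosoCachePrimary --resource-group contosoGroup --linked-server-name contosoCacheSecondary
```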
Applicable tiers: **Standard**, **Premium**, **Enterprise**, **Enterprise Flash*
If you experience a regional outage, consider recreating your cache in a different region and updating your application to connect to the new cache instead. It's important to understand that data will be lost during a regional outage. Your application code should be resilient to data loss.
-Once the affected region is restored, your unavailable Azure Cache for Redis is automatically restored and available for use again. For more strategies for moving your cache to a different region, see [Move Azure Cache for Redis instances to different regions](/azure/azure-cache-for-redis/cache-moving-resources).
+Once the affected region is restored, your unavailable Azure Cache for Redis is automatically restored and available for use again. For more strategies for moving your cache to a different region, see [Move Azure Cache for Redis instances to different regions](./cache-moving-resources.md).
## Next steps
Learn more about how to configure Azure Cache for Redis high-availability option
- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers) - [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md) - [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md)-- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
+- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
Steps to check your private endpoint configuration:
1. If you're trying to connect to your cache private endpoint from outside your virtual network of your cache, `Public Network Access` needs to be enabled. 1. If you've deleted your private endpoint, ensure that the public network access is enabled. 1. Verify if your private endpoint is configured correctly. For more information, see [Create a private endpoint with a new Azure Cache for Redis instance](cache-private-link.md#create-a-private-endpoint-with-a-new-azure-cache-for-redis-instance).-
+1. Verify if your application is connecting to `<cachename>.redis.cache.windows.net` on port 6380. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in the configuration or the connection string.
+1. Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the hostname resolves to the private IP address of the cache, as in the sketch below.
+
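For example, from a VM inside the linked VNet, a lookup like the following (hypothetical cache name) should return a private IP address rather than a public one:

```azurecli-interactive
nslookup contoso.redis.cache.windows.net
# Expect an address from the private endpoint's subnet, for example 10.1.0.5
```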
### Firewall rules If you have a firewall configured for your Azure Cache For Redis, ensure that your client IP address is added to the firewall rules. You can check **Firewall** on the Resource menu under **Settings** on the Azure portal.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
async def main(msg: func.QueueMessage, starter: str) -> None:
**run.ps1**
```powershell
-param($[string] $input, $TriggerMetadata)
+param([string] $input, $TriggerMetadata)
$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input
```
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
The Azure SQL bindings for Azure Functions are open-source and available on the
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md) - [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/) - [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md)-- [Use SQL bindings in Azure Stream Analytics](/azure/stream-analytics/sql-database-upsert#option-1-update-by-key-with-the-azure-function-sql-binding)
+- [Use SQL bindings in Azure Stream Analytics](../stream-analytics/sql-database-upsert.md#option-1-update-by-key-with-the-azure-function-sql-binding)
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Title: Create a Java function in Azure Functions using IntelliJ
-description: Learn how to use IntelliJ to create a simple HTTP-triggered Java function, which you then publish to run in a serverless environment in Azure.
+description: Learn how to use IntelliJ to create an HTTP-triggered Java function and then run it in a serverless environment in Azure.
+ Previously updated : 07/01/2018 Last updated : 03/28/2022+ ms.devlang: java # Create your first Java function in Azure using IntelliJ
-This article shows you:
+This article shows you how to use Java and IntelliJ to create an Azure function.
+
+Specifically, this article shows you:
+ - How to create an HTTP-triggered Java function in an IntelliJ IDEA project. - Steps for testing and debugging the project in the integrated development environment (IDE) on your own computer.-- Instructions for deploying the function project to Azure Functions
+- Instructions for deploying the function project to Azure Functions.
<!-- TODO ![Access a Hello World function from the command line with cURL](media/functions-create-java-maven/hello-azure.png) --> -
-## Set up your development environment
-
-To create and publish Java functions to Azure using IntelliJ, install the following software:
+## Prerequisites
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java 8
-+ An [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) Ultimate Edition or Community Edition installed
-+ [Maven 3.5.0+](https://maven.apache.org/download.cgi)
-+ Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java, version 8 or 11
+- An [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) Ultimate Edition or Community Edition installed
+- [Maven 3.5.0+](https://maven.apache.org/download.cgi)
+- Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
+## Install plugin and sign in
-## Installation and sign in
+To install the Azure Toolkit for IntelliJ and then sign in, follow these steps:
-1. In IntelliJ IDEA's Settings/Preferences dialog (Ctrl+Alt+S), select **Plugins**. Then, find the **Azure Toolkit for IntelliJ** in the **Marketplace** and click **Install**. After installed, click **Restart** to activate the plugin.
+1. In IntelliJ IDEA's **Settings/Preferences** dialog (Ctrl+Alt+S), select **Plugins**. Then, find the **Azure Toolkit for IntelliJ** in the **Marketplace** and click **Install**. After it's installed, click **Restart** to activate the plugin.
- ![Azure Toolkit for IntelliJ plugin in Marketplace][marketplace]
+ :::image type="content" source="media/functions-create-first-java-intellij/marketplace.png" alt-text="Azure Toolkit for IntelliJ plugin in Marketplace." lightbox="media/functions-create-first-java-intellij/marketplace.png":::
-2. To sign in to your Azure account, open sidebar **Azure Explorer**, and then click the **Azure Sign In** icon in the bar on top (or from IDEA menu **Tools/Azure/Azure Sign in**).
- ![The IntelliJ Azure Sign In command][intellij-azure-login]
+2. To sign in to your Azure account, open the **Azure Explorer** sidebar, and then click the **Azure Sign In** icon in the bar on top (or from the IDEA menu, select **Tools > Azure > Azure Sign in**).
-3. In the **Azure Sign In** window, select **Device Login**, and then click **Sign in** ([other sign in options](/azure/developer/java/toolkit-for-intellij/sign-in-instructions)).
+ :::image type="content" source="media/functions-create-first-java-intellij/intellij-azure-login.png" alt-text="The IntelliJ Azure Sign In command." lightbox="media/functions-create-first-java-intellij/intellij-azure-login.png":::
- ![The Azure Sign In window with device login selected][intellij-azure-popup]
+3. In the **Azure Sign In** window, select **OAuth 2.0**, and then click **Sign in**. For other sign-in options, see [Sign-in instructions for the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/sign-in-instructions).
-4. Click **Copy&Open** in **Azure Device Login** dialog .
+ :::image type="content" source="media/functions-create-first-java-intellij/intellij-azure-login-popup.png" alt-text="The Azure Sign In window with device login selected." lightbox="media/functions-create-first-java-intellij/intellij-azure-login-popup.png":::
- ![The Azure Login Dialog window][intellij-azure-copycode]
+4. In the browser, sign in with your account and then go back to IntelliJ. In the **Select Subscriptions** dialog box, click on the subscriptions that you want to use, then click **Select**.
-5. In the browser, paste your device code (which has been copied when you click **Copy&Open** in last step) and then click **Next**.
+ :::image type="content" source="media/functions-create-first-java-intellij/intellij-azure-login-selectsubs.png" alt-text="The Select Subscriptions dialog box." lightbox="media/functions-create-first-java-intellij/intellij-azure-login-selectsubs.png":::
- ![The device login browser][intellij-azure-link-ms-account]
-
-6. In the **Select Subscriptions** dialog box, select the subscriptions that you want to use, and then click **Select**.
-
- ![The Select Subscriptions dialog box][intellij-azure-login-select-subs]
-
## Create your local project
-In this section, you use Azure Toolkit for IntelliJ to create a local Azure Functions project. Later in this article, you'll publish your function code to Azure.
+To use Azure Toolkit for IntelliJ to create a local Azure Functions project, follow these steps:
-1. Open IntelliJ Welcome dialog, select *Create New Project* to open a new Project wizard, select *Azure Functions*.
+1. Open IntelliJ IDEA's **Welcome** dialog, select **New Project** to open a new project wizard, then select **Azure Functions**.
- ![Create function project](media/functions-create-first-java-intellij/create-functions-project.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/create-functions-project.png" alt-text="Create function project." lightbox="media/functions-create-first-java-intellij/create-functions-project.png":::
-1. Select *Http Trigger*, then click *Next* and follow the wizard to go through all the configurations in the following pages; confirm your project location then click *Finish*; Intellj IDEA will then open your new project.
+1. Select **Http Trigger**, then click **Next** and follow the wizard to go through all the configurations in the following pages. Confirm your project location, then click **Finish**. IntelliJ IDEA will then open your new project.
- ![Create function project finish](media/functions-create-first-java-intellij/create-functions-project-finish.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/create-functions-project-finish.png" alt-text="Create function project finish." lightbox="media/functions-create-first-java-intellij/create-functions-project-finish.png":::
## Run the project locally
-1. Navigate to `src/main/java/org/example/functions/HttpTriggerFunction.java` to see the code generated. Beside the line *17*, you will notice that there is a green *Run* button, click it and select *Run 'azure-function-exam...'*, you will see that your function app is running locally with a few logs.
+To run the project locally, follow these steps:
- ![Local run project](media/functions-create-first-java-intellij/local-run-functions-project.png)
+1. Navigate to *src/main/java/org/example/functions/HttpTriggerFunction.java* to see the code generated. Beside the line *24*, you'll notice that there's a green **Run** button. Click it and select **Run 'Functions-azur...'**. You'll see that your function app is running locally with a few logs.
- ![Local run project output](media/functions-create-first-java-intellij/local-run-functions-output.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-project.png" alt-text="Local run project." lightbox="media/functions-create-first-java-intellij/local-run-functions-project.png":::
-1. You can try the function by accessing the printed endpoint from browser, like `http://localhost:7071/api/HttpTrigger-Java?name=Azure`.
+ :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-output.png" alt-text="Local run project output." lightbox="media/functions-create-first-java-intellij/local-run-functions-output.png":::
- ![Local run function test result](media/functions-create-first-java-intellij/local-run-functions-test.png)
+1. You can try the function by accessing the displayed endpoint from your browser, such as `http://localhost:7071/api/HttpExample?name=Azure`.
-1. The log is also printed out in your IDEA, now, stop the function app by clicking the *stop* button.
+ :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-test.png" alt-text="Local run function test result." lightbox="media/functions-create-first-java-intellij/local-run-functions-test.png":::
- ![Local run function test log](media/functions-create-first-java-intellij/local-run-functions-log.png)
+1. The log is also displayed in your IDEA. Stop the function app by clicking the **Stop** button.
+
+ :::image type="content" source="media/functions-create-first-java-intellij/local-run-functions-log.png" alt-text="Local run function test log." lightbox="media/functions-create-first-java-intellij/local-run-functions-log.png":::
## Debug the project locally
-1. To debug the function code in your project locally, select the *Debug* button in the toolbar. If you don't see the toolbar, enable it by choosing **View** > **Appearance** > **Toolbar**.
+To debug the project locally, follow these steps:
+
+1. Select the **Debug** button in the toolbar. If you don't see the toolbar, enable it by choosing **View** > **Appearance** > **Toolbar**.
- ![Local debug function app button](media/functions-create-first-java-intellij/local-debug-functions-button.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/local-debug-functions-button.png" alt-text="Local debug function app button." lightbox="media/functions-create-first-java-intellij/local-debug-functions-button.png":::
-1. Click on line *20* of the file `src/main/java/org/example/functions/HttpTriggerFunction.java` to add a breakpoint, access the endpoint `http://localhost:7071/api/HttpTrigger-Java?name=Azure` again , you will find the breakpoint is hit, you can try more debug features like *step*, *watch*, *evaluation*. Stop the debug session by click the stop button.
+1. Click on line *31* of the file *src/main/java/org/example/functions/HttpTriggerFunction.java* to add a breakpoint. Access the endpoint `http://localhost:7071/api/HttpTrigger-Java?name=Azure` again and you'll find the breakpoint is hit. You can then try more debug features like **Step**, **Watch**, and **Evaluation**. Stop the debug session by clicking the **Stop** button.
- ![Local debug function app break](media/functions-create-first-java-intellij/local-debug-functions-break.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/local-debug-functions-break.png" alt-text="Local debug function app break." lightbox="media/functions-create-first-java-intellij/local-debug-functions-break.png":::
## Deploy your project to Azure
-1. Right click your project in IntelliJ Project explorer, select *Azure -> Deploy to Azure Functions*
+To deploy your project to Azure, follow these steps:
- ![Deploy project to Azure](media/functions-create-first-java-intellij/deploy-functions-to-azure.png)
+1. Right click your project in IntelliJ Project explorer, then select **Azure -> Deploy to Azure Functions**.
-1. If you don't have any Function App yet, click *+* in the *Function* line. Type in the function app name and choose proper platform, here we can simply accept default. Click *OK* and the new function app you just created will be automatically selected. Click *Run* to deploy your functions.
+ :::image type="content" source="media/functions-create-first-java-intellij/deploy-functions-to-azure.png" alt-text="Deploy project to Azure." lightbox="media/functions-create-first-java-intellij/deploy-functions-to-azure.png":::
- ![Create function app in Azure](media/functions-create-first-java-intellij/deploy-functions-create-app.png)
+1. If you don't have any Function App yet, click **+** in the *Function* line. Type in the function app name and choose the proper platform; here you can simply accept the defaults. Click **OK**, and the new function app you created will be automatically selected. Click **Run** to deploy your functions.
- ![Deploy function app to Azure log](media/functions-create-first-java-intellij/deploy-functions-log.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/deploy-functions-create-app.png" alt-text="Create function app in Azure." lightbox="media/functions-create-first-java-intellij/deploy-functions-create-app.png":::
+
+ :::image type="content" source="media/functions-create-first-java-intellij/deploy-functions-log.png" alt-text="Deploy function app to Azure log." lightbox="media/functions-create-first-java-intellij/deploy-functions-log.png":::
## Manage function apps from IDEA
-1. You can manage your function apps with *Azure Explorer* in your IDEA, click on *Function App*, you will see all your function apps here.
+To manage your function apps with **Azure Explorer** in your IDEA, follow these steps:
+
+1. Click on **Function App** and you'll see all your function apps listed.
- ![View function apps in explorer](media/functions-create-first-java-intellij/explorer-view-functions.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/explorer-view-functions.png" alt-text="View function apps in explorer." lightbox="media/functions-create-first-java-intellij/explorer-view-functions.png":::
-1. Click to select on one of your function apps, and right click, select *Show Properties* to open the detail page.
+1. Select one of your function apps, then right-click and select **Show Properties** to open the detail page.
- ![Show function app properties](media/functions-create-first-java-intellij/explorer-functions-show-properties.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/explorer-functions-show-properties.png" alt-text="Show function app properties." lightbox="media/functions-create-first-java-intellij/explorer-functions-show-properties.png":::
-1. Right click on your *HttpTrigger-Java* function app, and select *Trigger Function*, you will see that the browser is opened with the trigger URL.
+1. Right-click your **HttpTrigger-Java** function app, then select **Trigger Function in Browser**. You'll see that the browser is opened with the trigger URL.
- ![Screenshot shows a browser with the U R L.](media/functions-create-first-java-intellij/explorer-trigger-functions.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/explorer-trigger-functions.png" alt-text="Screenshot shows a browser with the U R L." lightbox="media/functions-create-first-java-intellij/explorer-trigger-functions.png":::
## Add more functions to the project
-1. Right click on the package *org.example.functions* and select *New -> Azure Function Class*.
+To add more functions to your project, follow these steps:
+
+1. Right-click the package **org.example.functions** and select **New -> Azure Function Class**.
+
+ :::image type="content" source="media/functions-create-first-java-intellij/add-functions-entry.png" alt-text="Add functions to the project entry." lightbox="media/functions-create-first-java-intellij/add-functions-entry.png":::
- ![Add functions to the project entry](media/functions-create-first-java-intellij/add-functions-entry.png)
+1. Fill in the class name **HttpTest** and select **HttpTrigger** in the create function class wizard, then click **OK** to create it. You can repeat these steps to add new functions as needed.
-1. Fill in the class name *HttpTest* and select *HttpTrigger* in the create function class wizard, click *OK* to create, in this way, you can create new functions as you want.
+ :::image type="content" source="media/functions-create-first-java-intellij/add-functions-trigger.png" alt-text="Screenshot shows the Create Function Class dialog box." lightbox="media/functions-create-first-java-intellij/add-functions-trigger.png":::
- ![Screenshot shows the Create Function Class dialog box.](media/functions-create-first-java-intellij/add-functions-trigger.png)
-
- ![Add functions to the project output](media/functions-create-first-java-intellij/add-functions-output.png)
+ :::image type="content" source="media/functions-create-first-java-intellij/add-functions-output.png" alt-text="Add functions to the project output." lightbox="media/functions-create-first-java-intellij/add-functions-output.png":::
## Cleaning up functions
-1. Deleting functions in Azure Explorer
-
- ![Screenshot shows Delete selected from a context menu.](media/functions-create-first-java-intellij/delete-function.png)
-
+Select one of your function apps using **Azure Explorer** in your IDEA, then right-click and select **Delete**. This command might take several minutes to run. When it's done, the status will refresh in **Azure Explorer**.
+ ## Next steps
-You've created a Java project with an HTTP triggered function, run it on your local machine, and deployed it to Azure. Now, extend your function by...
+You've created a Java project with an HTTP triggered function, run it on your local machine, and deployed it to Azure. Now, extend your function by continuing to the following article:
> [!div class="nextstepaction"] > [Adding an Azure Storage queue output binding](./functions-add-output-binding-storage-queue-java.md)--
-[marketplace]:./media/functions-create-first-java-intellij/marketplace.png
-[intellij-azure-login]: media/functions-create-first-java-intellij/intellij-azure-login.png
-[intellij-azure-popup]: media/functions-create-first-java-intellij/intellij-azure-login-popup.png
-[intellij-azure-copycode]: media/functions-create-first-java-intellij/intellij-azure-login-copyopen.png
-[intellij-azure-link-ms-account]: media/functions-create-first-java-intellij/intellij-azure-login-linkms-account.png
-[intellij-azure-login-select-subs]: media/functions-create-first-java-intellij/intellij-azure-login-selectsubs.png
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your
## Platform features
-Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. The left pane is where you access the many features of the App Service platform that you can use in your function apps.
+Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. When working in the [Azure portal](https://portal.azure.com), the left pane is where you access the many features of the App Service platform that you can use in your function apps.
-> [!NOTE]
-> Not all App Service features are available when a function app runs on the Consumption hosting plan.
+The following matrix indicates portal feature support by hosting plan and operating system:
-The rest of this article focuses on the following App Service features in the Azure portal that are useful for Functions:
+| Feature | Consumption plan | Premium plan | Dedicated plan |
+| --- | --- | --- | --- |
+| [Advanced tools (Kudu)](#kudu) | Windows: ✔ <br/>Linux: **X** | ✔ | ✔ |
+| [App Service editor](#editor) | Windows: ✔ <br/>Linux: **X** | Windows: ✔ <br/>Linux: **X** | Windows: ✔ <br/>Linux: **X** |
+| [Backups](../app-service/manage-backup.md) | **X** | **X** | ✔ |
+| [Console](#console) | Windows: command-line <br/>Linux: **X** | Windows: command-line <br/>Linux: SSH | Windows: command-line <br/>Linux: SSH |
+
+The rest of this article focuses on the following features in the portal that are useful for your function apps:
+ [App Service editor](#editor)
+ [Console](#console)
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|--| -- | -- | |Australia Central| 100 | Not Available | |Australia Central 2| 100 | Not Available |
-|Australia East| 100 | 20 |
+|Australia East| 100 | 40 |
|Australia Southeast | 100 | 20 | |Brazil South| 100 | 20 | |Canada Central| 100 | 20 |
See the complete regional availability of Functions on the [Azure web site](http
|China North 2| 100 | 20 | |East Asia| 100 | 20 | |East US | 100 | 60 |
-|East US 2| 100 | 20 |
+|East US 2| 100 | 40 |
|France Central| 100 | 20 | |Germany West Central| 100 | 20 | |Japan East| 100 | 20 |
See the complete regional availability of Functions on the [Azure web site](http
|North Europe| 100 | 40 | |Norway East| 100 | 20 | |South Africa North| 100 | 20 |
-|South Central US| 100 | 20 |
+|South Central US| 100 | 40 |
|South India | 100 | Not Available | |Southeast Asia| 100 | 20 | |Switzerland North| 100 | 20 |
azure-functions Functions Proxies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md
This section shows you how to create a proxy in the Functions portal.
> Not all languages and operating system combinations support in-portal editing. If you're unable to create a proxy in the portal, you can instead manually create a _proxies.json_ file in the root of your function app project folder. To learn more about portal editing support, see [Language support details](functions-create-function-app-portal.md#language-support-details).

1. Open the [Azure portal], and then go to your function app.
-2. In the left pane, select **New proxy**.
+2. In the left pane, select **Proxies** and then select **+Add**.
3. Provide a name for your proxy.
4. Configure the endpoint that's exposed on this function app by specifying the **route template** and **HTTP methods**. These parameters behave according to the rules for [HTTP triggers].
5. Set the **backend URL** to another endpoint. This endpoint could be a function in another function app, or it could be any other API. The value does not need to be static, and it can reference [application settings] and [parameters from the original client request].
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Title: Azure Functions scale and hosting
description: Learn how to choose between Azure Functions Consumption plan and Premium plan. ms.assetid: 5b63649c-ec7f-4564-b168-e0a74cb7e0f3 Previously updated : 08/17/2020 Last updated : 03/24/2022
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Then, proceed with the instructions below to create and associate them to a Moni
#### 1. Assign 'Monitored Object Contributor' role to the operator
This step grants the ability to create and link a monitored object to a user.
-**Permissions required:** Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](/azure/role-based-access-control/elevate-access-global-admin). It will give the Azure AD admin 'owner' permissions at the root scope.
+**Permissions required:** Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Azure AD Tenant Admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It will give the Azure AD admin 'owner' permissions at the root scope.
**Request URI** ```HTTP
Make sure to start the installer on administrator command prompt. Silent install
## Questions and feedback
-Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
+Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the preview on the [Azure Monitor Agent User Community](https://teams.microsoft.com/l/team/19%3af3f168b782f64561b52abe75e59e83bc%40thread.tacv2/conversations?groupId=770d6aa5-c2f7-4794-98a0-84fd6ae7f193&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
This article describes how to configure the collection of file-based text logs,
To complete this procedure, you need the following: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#manage-access-using-azure-permissions) .-- [Permissions to create Data Collection Rule objects](/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
- An agent with supported log file as described in the next section. ## Log files supported
The final step is to create a data collection association that associates the da
- Learn more about the [Azure Monitor agent](azure-monitor-agent-overview.md). - Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).-- Learn more about [data collection endpoints](../essentials/data-collection-endpoint-overview.md).
+- Learn more about [data collection endpoints](../essentials/data-collection-endpoint-overview.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
You can, for example:
## Troubleshooting ### Delayed telemetry, overloading network, or inefficient transmission
-System.Diagnostics.Tracing has an [Autoflush feature](https://docs.microsoft.com/dotnet/api/system.diagnostics.trace.autoflush). This causes SDK to flush with every telemetry item, which is undesirable, and can cause logging adapter issues like delayed telemetry, overloading network, inefficient transmission, etc.
+System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnostics.trace.autoflush). This causes the SDK to flush with every telemetry item, which is undesirable and can cause logging adapter issues like delayed telemetry, an overloaded network, and inefficient transmission.
If your application sends voluminous amounts of data and you're using the Applic
[exceptions]: asp-net-exceptions.md [portal]: https://portal.azure.com/ [qna]: ../faq.yml
-[start]: ./app-insights-overview.md
+[start]: ./app-insights-overview.md
azure-monitor Auto Instrumentation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-instrumentation-troubleshoot.md
- Title: Troubleshoot Azure Application Insights auto-instrumentation
-description: Troubleshoot auto-instrumentation in Azure Application Insights
- Previously updated : 02/28/2022--
-# Troubleshooting Azure Application Insights auto-instrumentation
-
-This article will help you troubleshoot problems with auto-instrumentation in Azure Application Insights.
-
-> [!NOTE]
-> Auto-instrumentation used to be known as "codeless attach" before October 2021.
-
-## Telemetry data isn't reported after enabling auto-instrumentation
-
-Review these common scenarios if you've enabled Azure Application Insights auto-instrumentation for your app service but don't see telemetry data reported.
-
-### The Application Insights SDK was previously installed
-
-Auto-instrumentation will fail when .NET and .NET Core apps were already instrumented with the SDK.
-
-Remove the Application Insights SDK if you would like to auto-instrument your app.
-
-### An app was published using an unsupported version of .NET or .NET Core
-
-Verify a supported version of .NET or .NET Core was used to build and publish applications.
-
-Refer to the .NET or .NET core documentation to determine if your version is supported.
--- [Application Monitoring for Azure App Service and ASP.NET Core](azure-web-apps-net-core.md#application-monitoring-for-azure-app-service-and-aspnet-core)-
-### A diagnostics library was detected
-
-Auto-instrumentation will fail if it detects the following libraries.
--- System.Diagnostics.DiagnosticSource-- Microsoft.AspNet.TelemetryCorrelation-- Microsoft.ApplicationInsights-
-These libraries will need to be removed for auto-instrumentation to succeed.
-
-## More help
-
-If you have questions about Azure Application Insights auto-instrumentation, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
This article will cover how to create an Azure Function with TrackAvailability()
> [!NOTE] > This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function. Not how to write the underlying HTTP Test code/business logic that would be required to turn this into a fully functional availability test. By default if you walk through this example you will be creating a basic availability HTTP GET test.
-> To follow these instructions, you must use the [dedicated plan](https://docs.microsoft.com/azure/azure-functions/dedicated-plan) to allow editing code in App Service Editor.
+> To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor.
## Create a timer trigger function
You can use Logs(analytics) to view your availability results, dependencies, and
## Next steps - [Application Map](./app-map.md)-- [Transaction diagnostics](./transaction-diagnostics.md)
+- [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
The Application Insights extension in Azure Web Apps uses the new provider. You
### I can't see some of the logs from my application in the workspace.
-This may happen because of adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). See the [Sampling in Application Insights](/azure/azure-monitor/app/sampling) for more details.
+This may happen because of adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). See the [Sampling in Application Insights](./sampling.md) for more details.
## Next steps * [Logging in .NET](/dotnet/core/extensions/logging) * [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging)
-* [.NET trace logs in Application Insights](./asp-net-trace-logs.md)
+* [.NET trace logs in Application Insights](./asp-net-trace-logs.md)
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
In this example, the connection string specifies the South Central US region.
- The regional service URIs are based on the explicit override values: - Ingestion: `https://southcentralus.in.applicationinsights.azure.com/`
-Run the following command in the [Azure Command-Line Interface (CLI)](https://docs.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations) to list available regions.
+Run the following command in the [Azure Command-Line Interface (CLI)](/cli/azure/account?view=azure-cli-latest#az-account-list-locations) to list available regions.
`az account list-locations -o table`
Get started at development time with:
* [ASP.NET Core](./asp-net-core.md) * [Java](./java-in-process-agent.md) * [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
+* [Python](./opencensus-python.md)
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
A point release to address user-reported bugs.
### Bug fixes
- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
- Fix [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
-<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](https://docs.microsoft.com/azure/azure-monitor/app/snapshot-debugger-troubleshoot#not-supported-scenarios)
+<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](./snapshot-debugger-troubleshoot.md#not-supported-scenarios)
## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2) A point release to address a user-reported bug.
Augmented usage telemetry
## [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0) ### Changes - Added host memory protection. This feature reduces the impact on the host machine's memory.-- Improve the Azure portal snapshot viewing experience.
+- Improve the Azure portal snapshot viewing experience.
azure-monitor Troubleshoot Portal Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/troubleshoot-portal-connectivity.md
-- Title: Application Insights portal connectivity troubleshooting
-description: Troubleshooting guide for Application Insights portal connectivity issues
-- Previously updated : 03/09/2022----
-# "Error retrieving data" message on Application Insights portal
-
-This is a troubleshooting guide for the Application Insights portal when encountering connectivity errors similar to `Error retrieving data` or `Missing localization resource`.
-
-![image Portal connectivity error](./media/troubleshoot-portal-connectivity/troubleshoot-portal-connectivity.png)
-
-The source of the issue is likely third-party browser plugins that interfere with the portal's connectivity.
-
-To confirm that this is the source of the issue and to identify which plugin is interfering:
--- Open the portal in an InPrivate or Incognito window and verify the site functions correctly.--- Attempt disabling plugins to identify the one that is causing the connectivity issue.
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
This section lists all the platform metrics collected automatically for Azure Mo
|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--|
-| [Autoscale behaviors for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [microsoft.insights/autoscalesettings](/azure/azure-monitor/platform/metrics-supported#microsoftinsightsautoscalesettings) |
+| [Autoscale behaviors for VMs and AppService](./autoscale/autoscale-overview.md) | [microsoft.insights/autoscalesettings](/azure/azure-monitor/platform/metrics-supported#microsoftinsightsautoscalesettings) |
While technically not about Azure Monitor operations, the following metrics are collected into Azure Monitor namespaces. |Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | |-|--|
-| Log Analytics agent gathered data for the [Metric alerts on logs](/azure/azure-monitor/alerts/alerts-metric-logs#metrics-and-dimensions-supported-for-logs) feature | [Microsoft.OperationalInsights/workspaces](/azure/azure-monitor/platform/metrics-supported##microsoftoperationalinsightsworkspaces)
-| [Application Insights availability tests](/azure/azure-monitor/app/availability-overview) | [Microsoft.Insights/Components](/azure/azure-monitor/essentials/metrics-supported#microsoftinsightscomponents)
+| Log Analytics agent gathered data for the [Metric alerts on logs](./alerts/alerts-metric-logs.md#metrics-and-dimensions-supported-for-logs) feature | [Microsoft.OperationalInsights/workspaces](/azure/azure-monitor/platform/metrics-supported#microsoftoperationalinsightsworkspaces)
+| [Application Insights availability tests](./app/availability-overview.md) | [Microsoft.Insights/Components](./essentials/metrics-supported.md#microsoftinsightscomponents)
See a complete list of [platform metrics for other resources types](/azure/azure-monitor/platform/metrics-supported).
This section lists all the Azure Monitor resource log category types collected.
|Resource Log Type | Resource Provider / Type Namespace<br/> and link | |-|--|
-| [Autoscale for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [Microsoft.insights/autoscalesettings](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightsautoscalesettings)|
-| [Application Insights availability tests](/azure/azure-monitor/app/availability-overview) | [Microsoft.insights/Components](/azure/azure-monitor/essentials/resource-logs-categories#microsoftinsightscomponents) |
+| [Autoscale for VMs and AppService](./autoscale/autoscale-overview.md) | [Microsoft.insights/autoscalesettings](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings)|
+| [Application Insights availability tests](./app/availability-overview.md) | [Microsoft.insights/Components](./essentials/resource-logs-categories.md#microsoftinsightscomponents) |
For additional reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
|Resource Type | Notes | |--|-|
-| [Autoscale for VMs and AppService](/azure/azure-monitor/autoscale/autoscale-overview) | [Autoscale Tables](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-monitor-autoscale-settings) |
+| [Autoscale for VMs and AppService](./autoscale/autoscale-overview.md) | [Autoscale Tables](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-monitor-autoscale-settings) |
## Activity log
-For a partial list of entires that the Azure Monitor services writes to the activity log, see [Azure resource provider operations](/azure/role-based-access-control/resource-provider-operations#monitor). There may be other entires not listed here.
+For a partial list of entries that the Azure Monitor service writes to the activity log, see [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md#monitor). There may be other entries not listed here.
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](./essentials/activity-log-schema.md).
## Schemas
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
After completing your analysis to determine which source or sources are generati
The following are examples of what changes you can apply to your cluster by modifying the ConfigMap file to help control cost.
-1. Disable stdout logs across all namespaces in the cluster by modifying the following in the ConfigMap file:
+1. Disable stdout logs across all namespaces in the cluster by modifying the following in the ConfigMap file for the Azure Container Insights service pulling the metrics:
```
[log_collection_settings]
   [log_collection_settings.stdout]
      enabled = false
```
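After editing, the updated ConfigMap is applied to the cluster. Assuming the standard file name used by the Container insights documentation, that step typically looks like:

```azurecli-interactive
kubectl apply -f container-azm-ms-agentconfig.yaml
```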
azure-monitor Activity Logs Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-logs-insights.md
Activity logs insights let you view information about changes to resources and resource groups in your Azure subscription. It uses information from the [Activity log](activity-log.md) to also present data about which users or services performed particular activities in the subscription. This includes which administrators deleted, updated, or created resources, and whether the activities failed or succeeded. This article explains how to enable and use Activity log insights.

## Enable Activity log insights
-The only requirement to enable Activity log insights is to [configure the Activity log to export to a Log Analytics workspace](activity-log.md#send-to-log-analytics-workspace). Pre-built [workbooks](/azure/azure-monitor/visualize/workbooks-overview) curate this data, which is stored in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table in the workspace.
+The only requirement to enable Activity log insights is to [configure the Activity log to export to a Log Analytics workspace](activity-log.md#send-to-log-analytics-workspace). Pre-built [workbooks](../visualize/workbooks-overview.md) curate this data, which is stored in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table in the workspace.
:::image type="content" source="media/activity-log/activity-logs-insights-main.png" lightbox="media/activity-log/activity-logs-insights-main.png" alt-text="A screenshot showing Azure Activity logs insights dashboards":::
To view Activity logs insights on a resource level:
1. At the top of the **Activity Logs Insights** page, select: 1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Logs Entries** shows the count of Activity log records in each [activity log category](/azure/azure-monitor/essentials/activity-log-schema#categories).
+ * **Azure Activity Logs Entries** shows the count of Activity log records in each [activity log category](./activity-log-schema.md#categories).
:::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Azure Activity Logs by Category Value":::
To view Activity logs insights on a resource level:
Learn more about: * [Platform logs](./platform-logs-overview.md) * [Activity log event schema](activity-log-schema.md)
-* [Creating a diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
+* [Creating a diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
-|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
-|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, OSType, Version, SourceComputerId|
-|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric).|Computer, Product, Classification, UpdateState, Optional, Approved|
+|Average_% Available Memory|Yes|% Available Memory|Count|Average|Average_% Available Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Available Swap Space|Yes|% Available Swap Space|Count|Average|Average_% Available Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Committed Bytes In Use|Yes|% Committed Bytes In Use|Count|Average|Average_% Committed Bytes In Use. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% DPC Time|Yes|% DPC Time|Count|Average|Average_% DPC Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Inodes|Yes|% Free Inodes|Count|Average|Average_% Free Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Free Space|Yes|% Free Space|Count|Average|Average_% Free Space. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Idle Time|Yes|% Idle Time|Count|Average|Average_% Idle Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Interrupt Time|Yes|% Interrupt Time|Count|Average|Average_% Interrupt Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% IO Wait Time|Yes|% IO Wait Time|Count|Average|Average_% IO Wait Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Nice Time|Yes|% Nice Time|Count|Average|Average_% Nice Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Privileged Time|Yes|% Privileged Time|Count|Average|Average_% Privileged Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Processor Time|Yes|% Processor Time|Count|Average|Average_% Processor Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Inodes|Yes|% Used Inodes|Count|Average|Average_% Used Inodes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Memory|Yes|% Used Memory|Count|Average|Average_% Used Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Space|Yes|% Used Space|Count|Average|Average_% Used Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% Used Swap Space|Yes|% Used Swap Space|Count|Average|Average_% Used Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_% User Time|Yes|% User Time|Count|Average|Average_% User Time. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes|Yes|Available MBytes|Count|Average|Average_Available MBytes. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Memory|Yes|Available MBytes Memory|Count|Average|Average_Available MBytes Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Available MBytes Swap|Yes|Available MBytes Swap|Count|Average|Average_Available MBytes Swap. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Read|Yes|Avg. Disk sec/Read|Count|Average|Average_Avg. Disk sec/Read. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Transfer|Yes|Avg. Disk sec/Transfer|Count|Average|Average_Avg. Disk sec/Transfer. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Avg. Disk sec/Write|Yes|Avg. Disk sec/Write|Count|Average|Average_Avg. Disk sec/Write. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Received/sec|Yes|Bytes Received/sec|Count|Average|Average_Bytes Received/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Sent/sec|Yes|Bytes Sent/sec|Count|Average|Average_Bytes Sent/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Bytes Total/sec|Yes|Bytes Total/sec|Count|Average|Average_Bytes Total/sec. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Current Disk Queue Length|Yes|Current Disk Queue Length|Count|Average|Average_Current Disk Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Read Bytes/sec|Yes|Disk Read Bytes/sec|Count|Average|Average_Disk Read Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Reads/sec|Yes|Disk Reads/sec|Count|Average|Average_Disk Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Transfers/sec|Yes|Disk Transfers/sec|Count|Average|Average_Disk Transfers/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Write Bytes/sec|Yes|Disk Write Bytes/sec|Count|Average|Average_Disk Write Bytes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Disk Writes/sec|Yes|Disk Writes/sec|Count|Average|Average_Disk Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Megabytes|Yes|Free Megabytes|Count|Average|Average_Free Megabytes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Physical Memory|Yes|Free Physical Memory|Count|Average|Average_Free Physical Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Space in Paging Files|Yes|Free Space in Paging Files|Count|Average|Average_Free Space in Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Free Virtual Memory|Yes|Free Virtual Memory|Count|Average|Average_Free Virtual Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Logical Disk Bytes/sec|Yes|Logical Disk Bytes/sec|Count|Average|Average_Logical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Reads/sec|Yes|Page Reads/sec|Count|Average|Average_Page Reads/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Page Writes/sec|Yes|Page Writes/sec|Count|Average|Average_Page Writes/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pages/sec|Yes|Pages/sec|Count|Average|Average_Pages/sec. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct Privileged Time|Yes|Pct Privileged Time|Count|Average|Average_Pct Privileged Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Pct User Time|Yes|Pct User Time|Count|Average|Average_Pct User Time. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Physical Disk Bytes/sec|Yes|Physical Disk Bytes/sec|Count|Average|Average_Physical Disk Bytes/sec. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processes|Yes|Processes|Count|Average|Average_Processes. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Processor Queue Length|Yes|Processor Queue Length|Count|Average|Average_Processor Queue Length. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Size Stored In Paging Files|Yes|Size Stored In Paging Files|Count|Average|Average_Size Stored In Paging Files. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes|Yes|Total Bytes|Count|Average|Average_Total Bytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Received|Yes|Total Bytes Received|Count|Average|Average_Total Bytes Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Bytes Transmitted|Yes|Total Bytes Transmitted|Count|Average|Average_Total Bytes Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Collisions|Yes|Total Collisions|Count|Average|Average_Total Collisions. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Received|Yes|Total Packets Received|Count|Average|Average_Total Packets Received. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Packets Transmitted|Yes|Total Packets Transmitted|Count|Average|Average_Total Packets Transmitted. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Rx Errors|Yes|Total Rx Errors|Count|Average|Average_Total Rx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Total Tx Errors|Yes|Total Tx Errors|Count|Average|Average_Total Tx Errors. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Uptime|Yes|Uptime|Count|Average|Average_Uptime. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used MBytes Swap Space|Yes|Used MBytes Swap Space|Count|Average|Average_Used MBytes Swap Space. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory kBytes|Yes|Used Memory kBytes|Count|Average|Average_Used Memory kBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Used Memory MBytes|Yes|Used Memory MBytes|Count|Average|Average_Used Memory MBytes. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Users|Yes|Users|Count|Average|Average_Users. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Average_Virtual Shared Memory|Yes|Virtual Shared Memory|Count|Average|Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, ObjectName, InstanceName, CounterPath, SourceSystem|
+|Event|Yes|Event|Count|Average|Event. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID|
+|Heartbeat|Yes|Heartbeat|Count|Total|Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, OSType, Version, SourceComputerId|
+|Update|Yes|Update|Count|Average|Update. Supported for: Windows. Part of [metric alerts for logs feature](../alerts/alerts-metric-logs.md).|Computer, Product, Classification, UpdateState, Optional, Approved|
## Microsoft.Peering/peerings
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/network-insights-overview.md
Here are some links to troubleshooting articles for frequently used services. Fo
* [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-troubleshoot.md) * [Azure ExpressRoute](../../expressroute/expressroute-troubleshooting-expressroute-overview.md) * [Azure Load Balancer](../../load-balancer/load-balancer-troubleshoot.md)
-* [Azure NAT Gateway](/azure/virtual-network/nat-gateway/troubleshoot-nat)
+* [Azure NAT Gateway](../../virtual-network/nat-gateway/troubleshoot-nat.md)
### Why don't I see the resources for all the subscriptions I've selected?
You can edit the workbook you see in any side-panel or detailed metric view by u
## Next steps - Learn more about network monitoring: [What is Azure Network Watcher?](../../network-watcher/network-watcher-monitoring-overview.md)-- Learn the scenarios workbooks are designed to support, how to create reports and customize existing reports, and more: [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md)
+- Learn the scenarios workbooks are designed to support, how to create reports and customize existing reports, and more: [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md)
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-troubleshoot.md
description: Learn how to troubleshoot SQL insights in Azure Monitor.
Previously updated : 1/3/2022 Last updated : 4/19/2022 # Troubleshoot SQL insights (preview)
For common cases, we provide troubleshooting tips in our logs view:
During preview of SQL Insights, you may encounter the following known issues. * **'Login failed' error connecting to server or database**. Using certain special characters in SQL authentication passwords saved in the monitoring VM configuration or in Key Vault may prevent the monitoring VM from connecting to a SQL server or database. This set of characters includes parentheses, square and curly brackets, the dollar sign, forward and back slashes, and dot (`[ { ( ) } ] $ \ / .`).
+* Spaces in the database connection string attributes may be replaced with special characters, leading to database connection failures. For example, if the space in the `User Id` attribute is replaced with a special character, connections fail with the **Login failed for user ''** error. To resolve the issue, edit the monitoring profile configuration and delete every special character that appears in place of a space. Some special characters look indistinguishable from a space, so you may want to delete every space character, type it again, and save the configuration. A small script to check passwords for the problematic characters is sketched after this list.
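As a quick way to screen a saved SQL authentication password for the characters listed in the first issue, here's a minimal sketch (illustrative only, not part of the product):

```python
# Characters documented above as breaking monitoring VM connections:
# [ { ( ) } ] $ \ / .
PROBLEM_CHARS = set("[{()}]$\\/.")

def has_problem_chars(password: str) -> bool:
    """Return True if the password contains any problematic character."""
    return any(ch in PROBLEM_CHARS for ch in password)

print(has_problem_chars("p@ssw0rd"))  # False
print(has_problem_chars("pa$$word"))  # True: contains '$'
```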
## Best practices
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Some solutions have more specific policies about free data ingestion. For exampl
See the documentation for different services and solutions for any unique billing calculations. ## Commitment Tiers
-In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB/day, at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
- During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period. - At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a different commitment tier at any time.
-Billing for the commitment tiers is done on a daily basis. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices.
+Billing for the commitment tiers is done per workspace on a daily basis. If the workspace is part of a [dedicated cluster](#dedicated-clusters), the billing is done for the cluster (see below). See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for a detailed listing of the commitment tiers and their prices.
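To make the overage rule concrete, here's a small worked example with hypothetical prices; take the actual per-tier and Pay-As-You-Go rates from the pricing page linked above.

```python
# Hypothetical prices for illustration only; real rates vary by region
# and currency and are listed on the Azure Monitor pricing page.
PAYG_PER_GB = 2.76           # assumed Pay-As-You-Go price per GB
TIER_DAILY_PRICE = 196.0     # assumed daily price for the 100 GB/day tier
TIER_GB_PER_DAY = 100

def daily_cost_with_commitment(ingested_gb: float) -> float:
    """Flat tier price plus overage billed at the tier's effective per-GB
    rate (the same price per GB as the tier, not the Pay-As-You-Go rate)."""
    effective_per_gb = TIER_DAILY_PRICE / TIER_GB_PER_DAY   # 1.96/GB here
    overage_gb = max(0.0, ingested_gb - TIER_GB_PER_DAY)
    return TIER_DAILY_PRICE + overage_gb * effective_per_gb

# 120 GB ingested: 100 GB covered by the tier, 20 GB overage at the tier rate.
print(daily_cost_with_commitment(120))  # 196 + 20 * 1.96 = 235.2
print(120 * PAYG_PER_GB)                # Pay-As-You-Go comparison: 331.2
```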
> [!TIP] > The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of your monthly charges at each commitment level. You should periodically review this information to determine if you can reduce your charges by moving to another tier. See [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) for information on this view. -
-> [!NOTE]
-> Starting June 2, 2021, **Capacity Reservations** were renamed to **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
- ## Dedicated clusters An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features such as [customer-managed keys](customer-managed-keys.md) and use the same commitment tier pricing model as workspaces although they must have a commitment level of at least 500 GB/day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There is no Pay-As-You-Go option for clusters.
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
In this tutorial, you learn to:
To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .-- [Permissions to create Data Collection Rule objects](/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
## Collect workspace details Start by gathering information that you'll need from your workspace.
The cache that drives IntelliSense may take up to 24 hours to update.
- [Complete a similar tutorial using the Azure portal.](tutorial-custom-logs.md) - [Read more about custom logs.](custom-logs-overview.md)-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
In this tutorial, you learn to:
To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .-- [Permissions to create Data Collection Rule objects](/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
## Overview of tutorial
Following is sample data that you can use for the tutorial. Alternatively, you c
- [Complete a similar tutorial using the Azure portal.](tutorial-custom-logs-api.md) - [Read more about custom logs.](custom-logs-overview.md)-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
Last updated 04/07/2022
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Monitor. Azure Monitor uses [itself](/azure/azure-monitor/overview) to monitor certain parts of its own functionality. You can monitor:
+This article describes the monitoring data generated by Azure Monitor. Azure Monitor uses [itself](./overview.md) to monitor certain parts of its own functionality. You can monitor:
- Autoscale operations - Monitoring operations in the audit log
- If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+ If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md).
For an overview showing where autoscale and the audit log fit into Azure Monitor, see [Introduction to Azure Monitor](overview.md).
The **Overview** page in the Azure portal for Azure Monitor shows links and tuto
## Monitoring data
-Azure Monitor collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure Monitor collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](./essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
See [Monitoring *Azure Monitor* data reference](azure-monitor-monitoring-reference.md) for detailed information on the metrics and logs metrics created by Azure Monitor.
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](./essentials/metrics-getting-started.md) for details on using this tool.
For a list of the platform metrics collected for Azure Monitor into itself, see [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#metrics).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](./essentials/metrics-supported.md).
<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
For reference, you can see a list of [all resource metrics supported in Azure Mo
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema) The schemas for autoscale resource logs are found in the [Azure Monitor Data Reference](azure-monitor-monitoring-reference.md#resource-logs)
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](./essentials/resource-logs-schema.md). The schemas for autoscale resource logs are found in the [Azure Monitor Data Reference](azure-monitor-monitoring-reference.md#resource-logs).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](./essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of the types of resource logs collected for Azure Monitor, see [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md#resource-logs).
These are now listed in the [Log Analytics user interface](./logs/queries.md).
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-unified-log.md), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot).
+For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](./autoscale/autoscale-troubleshoot.md).
## Next steps - See [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md) for a reference of the metrics, logs, and other important values created by Azure Monitor to monitor itself.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
| [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenants) | | | | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | | | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | |
- | [Azure Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
+ | [Microsoft Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | |
| [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftrecoveryservicesvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrecoveryservicesvaults) | | | | [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | | | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md#microsoftresourcessubscriptions) | No | | |
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
### Application Insights
-**New articles**
--- [Error retrieving data message on Application Insights portal](app/troubleshoot-portal-connectivity.md)-- [Troubleshooting Azure Application Insights auto-instrumentation](app/auto-instrumentation-troubleshoot.md)- **Updated articles** - [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [notification-hub-limits](../../../includes/notification-hub-limits.md)]
-## Azure Purview limits
+## Microsoft Purview limits
-The latest values for Azure Purview quotas can be found in the [Azure Purview quota page](../../purview/how-to-manage-quotas.md).
+The latest values for Microsoft Purview quotas can be found in the [Microsoft Purview quota page](../../purview/how-to-manage-quotas.md).
## Service Bus limits
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
Besides access key, SignalR service also supports other types of authentication
### Azure Active Directory Application
-You can use [Azure AD application](/azure/active-directory/develop/app-objects-and-service-principals) to connect to SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
+You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to SignalR service. As long as the application has the right permissions to access SignalR service, no access key is needed.
To use Azure AD authentication, you need to remove `AccessKey` from the connection string and add `AuthType=aad`. You also need to specify the credentials of your Azure AD application, including the client ID, client secret, and tenant ID. The connection string will look as follows:
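A sketch of the expected format (all values are placeholders; see the linked article for the authoritative syntax):

```
Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
```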
For more information about how to authenticate using Azure AD application, see t
### Managed identity
-You can also use [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) to authenticate with SignalR service.
+You can also use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
There are two types of managed identities. To use a system-assigned identity, you just need to add `AuthType=aad` to the connection string:
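For example (a sketch with a placeholder resource name; a user-assigned identity would additionally carry its client ID):

```
Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;Version=1.0;
```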
For more information about how to configure managed identity, see this [article]
Connection string contains the HTTP endpoint for app server to connect to SignalR service. This is also the endpoint server will return to clients in negotiate response, so client can also connect to the service.
-But in some applications there may be an additional component in front of SignalR service and all client connections need to go through that component first (to gain additional benefits like network security, [Azure Application Gateway](/azure/application-gateway/overview) is a common service that provides such functionality).
+But in some applications, an additional component may sit in front of SignalR service, and all client connections need to go through that component first. [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality, adding benefits like network security.
In such cases, the client needs to connect to an endpoint different from the SignalR service. Instead of manually replacing the endpoint at the client side, you can add `ClientEndpoint` to the connection string:
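A sketch of what that looks like, assuming an intermediate component reachable at `<client_endpoint>` (placeholder values):

```
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<client_endpoint>;Version=1.0;
```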
In a local development environment, the config is usually stored in file (appset
* Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`) * Set connection string to an environment variable named `Azure__SignalR__ConnectionString` (the colon needs to be replaced with a double underscore in the [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
-In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](/azure/key-vault/general/overview) and [App Configuration](/azure/azure-app-configuration/overview). See their documentation to learn how to set up config provider for those services.
+In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up config provider for those services.
> [!NOTE]
> Even if you're setting the connection string directly in code, it's not recommended to hardcode the connection string in source code; you should still first read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`.
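For example, a minimal ASP.NET Core sketch inside a typical `Startup` class that reads the connection string from configuration (which can be backed by Key Vault or App Configuration) and passes it to the SDK; the configuration key shown is the conventional one, but any key works:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Read the connection string from configuration rather than hardcoding it.
    // The underlying provider can be user secrets, environment variables,
    // Key Vault, or App Configuration.
    var connectionString = Configuration["Azure:SignalR:ConnectionString"];
    services.AddSignalR().AddAzureSignalR(connectionString);
}
```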
azure-signalr Signalr Concept Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-internals.md
Once the application server is started,
- For ASP.NET Core SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service.
- For ASP.NET SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
-5 WebSocket connections is the default value that can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#connectioncount).
+5 WebSocket connections is the default value, which can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#connectioncount). Note that this setting configures the initial server connection count the SDK starts with. While the app server is connected to the SignalR service, the Azure SignalR service might send load-balancing messages to the server, and the SDK will start new server connections to the service for better performance.
Messages to and from clients will be multiplexed into these connections.
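For instance, a minimal sketch of raising the initial connection count in an ASP.NET Core app; the value 10 is illustrative, and `ConnectionCount` is the option the configuration doc linked above describes:

```csharp
// In Startup.ConfigureServices: raise the initial server connection count
// per hub from the default of 5. Tune the value for your workload.
services.AddSignalR().AddAzureSignalR(options =>
{
    options.ConnectionCount = 10;
});
```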
azure-signalr Signalr Howto Scale Multi Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-multi-instances.md
private class CustomRouter : EndpointRouterDecorator
## Dynamic Scale ServiceEndpoints
-From SDK version 1.5.0, we're enabling dynamic scale ServiceEndpoints for ASP.NET Core version first. So you don't have to restart app server when you need to add/remove a ServiceEndpoint. As ASP.NET Core is supporting default configuration like `appsettings.json` with `reloadOnChange: true`, you don't need to change a code and it's supported by nature. And if you'd like to add some customized configuration and work with hot-reload, please refer to [this](https://docs.microsoft.com/aspnet/core/fundamentals/configuration/?view=aspnetcore-3.1).
+Starting with SDK version 1.5.0, we're enabling dynamic scale for ServiceEndpoints, for ASP.NET Core first, so you don't have to restart the app server when you need to add or remove a ServiceEndpoint. Because ASP.NET Core supports default configuration such as `appsettings.json` with `reloadOnChange: true`, this works without any code changes. If you'd like to add customized configuration and work with hot reload, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-3.1).
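As a sketch, multiple ServiceEndpoints are typically declared in `appsettings.json` under keys like the following (the endpoint names and connection strings are placeholders); edits to this file are then picked up on reload:

```json
{
  "Azure": {
    "SignalR": {
      "Endpoints": {
        "east-region-a": "<ConnectionString1>",
        "east-region-b:secondary": "<ConnectionString2>"
      }
    }
  }
}
```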
> [!NOTE] >
In this guide, you learned about how to configure multiple instances in the same
Multiple-endpoint support can also be used in high availability and disaster recovery scenarios.
> [!div class="nextstepaction"]
-> [Setup SignalR Service for disaster recovery and high availability](./signalr-concept-disaster-recovery.md)
+> [Setup SignalR Service for disaster recovery and high availability](./signalr-concept-disaster-recovery.md)
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
Last updated 04/06/2022
This article provides a C# program that connects to Azure SQL Database. The program uses interactive mode authentication, which supports [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
-For more information about Multi-Factor Authentication support for SQL tools, see [Using multi-factor Azure Active Directory authentication](/azure/azure-sql/database/authentication-mfa-ssms-overview).
+For more information about Multi-Factor Authentication support for SQL tools, see [Using multi-factor Azure Active Directory authentication](./authentication-mfa-ssms-overview.md).
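A minimal sketch of the interactive connection using `Microsoft.Data.SqlClient`; the server, database, and user values are placeholders:

```csharp
using System;
using Microsoft.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "<your-server>.database.windows.net", // placeholder
    InitialCatalog = "<your-database>",                // placeholder
    Authentication = SqlAuthenticationMethod.ActiveDirectoryInteractive,
    UserID = "<user>@<domain>.com"                     // pre-fills the sign-in prompt; placeholder
};

using var connection = new SqlConnection(builder.ConnectionString);
connection.Open(); // opens an interactive sign-in, including MFA when required
Console.WriteLine("Connected.");
```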
## Multi-Factor Authentication for Azure SQL Database
For more information about Azure AD admins and users for Azure SQL Database, see
The C# example relies on the [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) namespace. For more information, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
> [!NOTE]
-> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
+> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-migration.md). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
## Verify with SQL Server Management Studio
azure-sql Analyze Prevent Deadlocks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/analyze-prevent-deadlocks.md
GO
## Use Azure Storage Explorer
-[Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer) is a standalone application that simplifies working with event file targets stored in blobs in Azure Storage. You can use Storage Explorer to:
+[Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) is a standalone application that simplifies working with event file targets stored in blobs in Azure Storage. You can use Storage Explorer to:
-- [Create a blob container](/azure/vs-azure-tools-storage-explorer-blobs#create-a-blob-container) to hold XEvent session data.-- [Get the shared access signature (SAS)](/azure/vs-azure-tools-storage-explorer-blobs#get-the-sas-for-a-blob-container) for a blob container.
+- [Create a blob container](../../vs-azure-tools-storage-explorer-blobs.md#create-a-blob-container) to hold XEvent session data.
+- [Get the shared access signature (SAS)](../../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container) for a blob container.
- As mentioned in [Collect deadlock graphs in Azure SQL Database with Extended Events](#collect-deadlock-graphs-in-azure-sql-database-with-extended-events), the read, write, and list permissions are required.
- Remove any leading `?` character from the `Query string` to use the value as the secret when [creating a database scoped credential](?tabs=event-file#create-a-database-scoped-credential).
-- [View and download](/azure/vs-azure-tools-storage-explorer-blobs#view-a-blob-containers-contents) extended event files from a blob container.
+- [View and download](../../vs-azure-tools-storage-explorer-blobs.md#view-a-blob-containers-contents) extended event files from a blob container.
[Download Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
Learn more about performance in Azure SQL Database:
- [SET TRANSACTION ISOLATION LEVEL](/sql/t-sql/statements/set-transaction-isolation-level-transact-sql)
- [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
- [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/)
-- [Retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
+- [Retry logic for transient errors](troubleshoot-common-connectivity-issues.md#retry-logic-for-transient-errors).
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-configure.md
For more information about CLI commands, see [az sql server](/cli/azure/sql/serv
## Configure your client computers
> [!NOTE]
-> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
+> [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're using the [System.Data.SqlClient](/dotnet/api/system.data.sqlclient) namespace for Azure Active Directory authentication, migrate applications to [Microsoft.Data.SqlClient](/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace) and the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-migration.md). For more information about using Azure AD authentication with SqlClient, see [Using Azure Active Directory authentication with SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication).
>
> SSMS and SSDT still use the Azure Active Directory Authentication Library (ADAL). If you want to continue using *ADAL.DLL* in your applications, you can use the links in this section to install the latest SSMS, ODBC, and OLE DB driver that contains the latest *ADAL.DLL* library.
On all client machines, from which your applications or users connect to SQL Database or Azure Synapse using Azure AD identities, you must install the following software:
- .NET Framework 4.6 or later from [https://msdn.microsoft.com/library/5a4x27ek.aspx](/dotnet/framework/install/guide-for-developers).
-- [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration) or Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library.
+- [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-migration.md) or Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library.
- [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms)
- [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15&preserve-view=true)
- [OLE DB Driver 18 for SQL Server](/sql/connect/oledb/download-oledb-driver-for-sql-server?view=sql-server-ver15&preserve-view=true)
Guidance on troubleshooting issues with Azure AD authentication can be found in
[11]: ./media/authentication-aad-configure/active-directory-integrated.png
[12]: ./media/authentication-aad-configure/12connect-using-pw-auth2.png
-[13]: ./media/authentication-aad-configure/13connect-to-db2.png
+[13]: ./media/authentication-aad-configure/13connect-to-db2.png
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/data-discovery-and-classification-overview.md
You can use the following SQL drivers to retrieve classification metadata:
## FAQ - Advanced classification capabilities
-**Question**: Will [Azure Purview](../../purview/overview.md) replace SQL Data Discovery & Classification or will SQL Data Discovery & Classification be retired soon?
-**Answer**: We continue to support SQL Data Discovery & Classification and encourage you to adopt [Azure Purview](../../purview/overview.md) which has richer capabilities to drive advanced classification capabilities and data governance. If we decide to retire any service, feature, API or SKU, you will receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies here.
+**Question**: Will [Microsoft Purview](../../purview/overview.md) replace SQL Data Discovery & Classification or will SQL Data Discovery & Classification be retired soon?
+**Answer**: We continue to support SQL Data Discovery & Classification and encourage you to adopt [Microsoft Purview](../../purview/overview.md) which has richer capabilities to drive advanced classification capabilities and data governance. If we decide to retire any service, feature, API or SKU, you will receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies here.
## Next steps
- Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data.
- For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).
-- To classify your Azure SQL Databases and Azure Synapse Analytics with Azure Purview labels using T-SQL commands, see [Classify your Azure SQL data using Azure Purview labels](../../sql-database/scripts/sql-database-import-purview-labels.md).
+- To classify your Azure SQL Databases and Azure Synapse Analytics with Microsoft Purview labels using T-SQL commands, see [Classify your Azure SQL data using Microsoft Purview labels](../../sql-database/scripts/sql-database-import-purview-labels.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Previously updated : 04/04/2022
Last updated : 04/19/2022
# Maintenance window
Choosing a maintenance window other than the default is currently available in t
| Switzerland North | Yes | Yes | |
| Switzerland West | Yes | | |
| UAE Central | Yes | | |
-| UAE North | Yes | | |
+| UAE North | Yes | Yes | |
| UK South | Yes | Yes | Yes |
| UK West | Yes | Yes | |
| US Gov Arizona | Yes | | |
azure-sql Troubleshoot Memory Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/troubleshoot-memory-errors-issues.md
ORDER BY max_query_max_used_memory DESC, avg_query_max_used_memory DESC;
### Extended events
In addition to the previous information, it may be helpful to capture a trace of the activities on the server to thoroughly investigate an out-of-memory issue in Azure SQL Database.
-There are two ways to capture traces in SQL Server; Extended Events (XEvents) and Profiler Traces. However, [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) is deprecated trace technology not supported for Azure SQL Database. [Extended Events](/sql/relational-databases/extended-events/extended-events) is the newer tracing technology that allows more versatility and less impact to the observed system, and its interface is integrated into SQL Server Management Studio (SSMS). For more information on querying extended events in Azure SQL Database, see [Extended events in Azure SQL Database](/azure/azure-sql/database/xevent-db-diff-from-svr).
+There are two ways to capture traces in SQL Server: Extended Events (XEvents) and Profiler traces. However, [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) is a deprecated trace technology that isn't supported for Azure SQL Database. [Extended Events](/sql/relational-databases/extended-events/extended-events) is the newer tracing technology that offers more versatility and less impact on the observed system, and its interface is integrated into SQL Server Management Studio (SSMS). For more information on querying extended events in Azure SQL Database, see [Extended events in Azure SQL Database](./xevent-db-diff-from-svr.md).
Refer to the document that explains how to use the [Extended Events New Session Wizard](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server) in SSMS. For Azure SQL databases however, SSMS provides an Extended Events subfolder under each database in Object Explorer. Use an Extended Events session to capture these useful events, and identify the queries generating them:
If out of memory errors persist in Azure SQL Database, file an Azure support req
- [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md)
- [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
- [Demonstrating Intelligent Query Processing](https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing)
-- [Resource management in Azure SQL Database](resource-limits-logical-server.md#memory).
+- [Resource management in Azure SQL Database](resource-limits-logical-server.md#memory).
azure-sql Winauth Azuread Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-overview.md
Last updated 03/01/2022
## Key capabilities and scenarios
-As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview) options:
+As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](../database/authentication-aad-overview.md) options:
- 'Azure Active Directory - Password' offers authentication with Azure AD credentials
- 'Azure Active Directory - Universal with MFA' adds multi-factor authentication
azure-sql Winauth Azuread Run Trace Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-run-trace-managed-instance.md
To use Windows Authentication to connect to and run a trace against a managed in
- To create or modify extended events sessions, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER ANY EVENT SESSION on the managed instance.
- To create or modify traces in SQL Server Profiler, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER TRACE on the managed instance.
-If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview) option, including:
+If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](../database/authentication-aad-overview.md) option, including:
- 'Azure Active Directory - Password'
- 'Azure Active Directory - Universal with MFA'
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
To implement the incoming trust-based authentication flow, first ensure that the
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
- Enter the password for your Azure AD global administrator account.
- If your organization uses other modern authentication methods such as MFA (Azure Multi-Factor Authentication) or Smart Card, follow the instructions as requested for sign-in.
- If this is the first time you're configuring Azure AD Kerberos settings, the [Get-AzureAdKerberosServer cmdlet](/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#view-and-verify-the-azure-ad-kerberos-server) will display empty information, as in the following sample output:
+ If this is the first time you're configuring Azure AD Kerberos settings, the [Get-AzureAdKerberosServer cmdlet](../../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#view-and-verify-the-azure-ad-kerberos-server) will display empty information, as in the following sample output:
```
ID :
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
1. Add the Trusted Domain Object.
- Run the [Set-AzureAdKerberosServer PowerShell cmdlet](/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#create-a-kerberos-server-object) to add the Trusted Domain Object. Be sure to include `-SetupCloudTrust` parameter. If there is no Azure AD service account, this command will create a new Azure AD service account. If there is an Azure AD service account already, this command will only create the requested Trusted Domain object.
+ Run the [Set-AzureAdKerberosServer PowerShell cmdlet](../../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#create-a-kerberos-server-object) to add the Trusted Domain Object. Be sure to include `-SetupCloudTrust` parameter. If there is no Azure AD service account, this command will create a new Azure AD service account. If there is an Azure AD service account already, this command will only create the requested Trusted Domain object.
```powershell
Set-AzureAdKerberosServer -Domain $domain `
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
## Configure the Group Policy Object (GPO)
-1. Identify your [Azure AD tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant).
+1. Identify your [Azure AD tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
1. Deploy the following Group Policy setting to client machines using the incoming trust-based flow:
Learn more about implementing Windows Authentication for Azure AD principals on
- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
-- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
There is no AD to Azure AD set up required for enabling software running on Azur
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
Learn more about implementing Windows Authentication for Azure AD principals on
- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
-- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
+- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
Following this, a system administrator configures authentication flows. Two auth
### Synchronize AD with Azure AD
-Customers should first implement [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect) to integrate on-premises directories with Azure AD.
+Customers should first implement [Azure AD Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) to integrate on-premises directories with Azure AD.
### Select which authentication flow(s) you will implement
The following prerequisites are required to implement the modern interactive aut
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
The following prerequisites are required to implement the incoming trust-based a
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Before you begin checking off the prerequisites, verify the following actions ha
The following items are needed to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution (Preview).
- A jump box virtual machine (VM) with network access to the Azure VMware Solution vCenter.
- - From the jump-box VM, verify you have access to [vCenter and NSX-T portals](/azure/azure-vmware/tutorial-configure-networking).
+ - From the jump-box VM, verify you have access to [vCenter and NSX-T portals](./tutorial-configure-networking.md).
- Verify that your Azure subscription has been enabled, or that you have connectivity to the Azure endpoints mentioned in the [Appendices](#appendices).
- A resource group in the subscription where you have the owner or contributor role.
- A minimum of three free, non-overlapping IP addresses.
The following items are needed to ensure you're set up to begin the onboarding p
At this point, you should have already deployed an Azure VMware Solution private cluster. You need to have a connection from your on-prem environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
-For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](/azure/azure-vmware/tutorial-network-checklist)
+For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](./tutorial-network-checklist.md)
### Registration to Arc for Azure VMware Solution feature set
The guest management must be enabled on the VMware virtual machine (VM) before y
>[!NOTE]
> The following conditions are necessary to enable guest management on a VM.
-- The machine must be running a [Supported operating system](/azure/azure-arc/servers/agent-overview).
-- The machine needs to connect through the firewall to communicate over the Internet. Make sure the [URLs](/azure/azure-arc/servers/agent-overview) listed aren't blocked.
+- The machine must be running a [Supported operating system](../azure-arc/servers/agent-overview.md).
+- The machine needs to connect through the firewall to communicate over the Internet. Make sure the [URLs](../azure-arc/servers/agent-overview.md) listed aren't blocked.
- The machine can't be behind a proxy; that scenario isn't supported yet.
- If you're using a Linux VM, the account must not prompt for sign-in on sudo commands.
Use the following tips as a self-help guide.
**Where can I find more information related to Azure Arc resource bridge?**
-- For more information, go to [Azure Arc resource bridge (preview) overview](/azure/azure-arc/resource-bridge/overview)
+- For more information, go to [Azure Arc resource bridge (preview) overview](../azure-arc/resource-bridge/overview.md)
## Appendices
Appendix 1 shows proxy URLs required by the Azure Arc-enabled private cloud. The
**Additional URL resources**
- [Google Container Registry](http://gcr.io/)
-- [Red Hat Quay.io](http://quay.io/)---
+- [Red Hat Quay.io](http://quay.io/)
azure-vmware Ecosystem App Monitoring Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-app-monitoring-solutions.md
Title: Application performance monitoring and troubleshooting solutions for Azure VMware Solution
description: Learn about leading application monitoring and troubleshooting solutions for your Azure VMware Solution private cloud.
Previously updated : 12/10/2021
Last updated : 04/11/2022
# Application performance monitoring and troubleshooting solutions for Azure VMware Solution
A key objective of Azure VMware Solution is to maintain the performance and security of applications and services across VMware on Azure and on-premises. Getting there requires visibility into complex infrastructures and quickly pinpointing the root cause of service disruptions across the hybrid cloud.
-Our application performance monitoring and troubleshooting partners have industry-leading solutions in VMware-based environments that assure the availability, reliability, and responsiveness of applications and services. Our customers have adopted many of these solutions integrated with VMware NSX-T for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of these Independent Software Vendors (ISV) have validated their solutions with Azure VMware Solution.
+Our application performance monitoring and troubleshooting partners have industry-leading solutions in VMware-based environments that assure the availability, reliability, and responsiveness of applications and services. Our customers have adopted many of these solutions integrated with VMware NSX-T Data Center for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of these Independent Software Vendors (ISV) have validated their solutions with Azure VMware Solution.
You can find more information about these solutions here:
- [NETSCOUT](https://www.netscout.com/technology-partners/microsoft-azure)
-- [Turbonomic](https://blog.turbonomic.com/turbonomic-announces-partnership-and-support-for-azure-vmware-service)
+- [Turbonomic](https://blog.turbonomic.com/turbonomic-announces-partnership-and-support-for-azure-vmware-service)
azure-vmware Ecosystem Os Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-os-vms.md
Title: Operating system support for Azure VMware Solution virtual machines
description: Learn about operating system support for your Azure VMware Solution virtual machines.
Previously updated : 03/13/2022
Last updated : 04/11/2022
# Operating system support for Azure VMware Solution virtual machines
Azure VMware Solution supports a wide range of operating systems to be used in t
Check the list of operating systems and configurations supported in the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software), create a query for ESXi 6.7 Update 3 and select all operating systems and vendors.
-Additionally to the supported operating systems by VMware on vSphere we have worked with Red Hat, SUSE and Canonical to extend the support model currently in place for Azure Virtual Machines to the workloads running on Azure VMware Solution, given that it is a first-party Azure service. You can check the following sites of vendors for more information about the benefits of running their operating system on Azure.
+In addition to the operating systems supported by VMware for vSphere, we have worked with Red Hat, SUSE, and Canonical to extend the support model currently in place for Azure Virtual Machines to the workloads running on Azure VMware Solution, given that it is a first-party Azure service. You can check the following vendor sites for more information about the benefits of running their operating system on Azure.
- [Red Hat Enterprise Linux](https://access.redhat.com/ecosystem/microsoft-azure)
- [Ubuntu Server](https://ubuntu.com/azure)
-- [SUSE Enterprise Linux Server](https://www.suse.com/partners/alliance/microsoft/)
+- [SUSE Enterprise Linux Server](https://www.suse.com/partners/alliance/microsoft/)
azure-vmware Ecosystem Security Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-security-solutions.md
Title: Security solutions for Azure VMware Solution
description: Learn about leading security solutions for your Azure VMware Solution private cloud.
Previously updated : 09/15/2021
Last updated : 04/11/2022
# Security solutions for Azure VMware Solution
A fundamental part of Azure VMware Solution is security. It allows customers to run their VMware-based workloads in a safe and trustable environment.
-Our security partners have industry-leading solutions in VMware-based environments that cover many aspects of the security ecosystem like threat protection and security scanning. Our customers have adopted many of these solutions integrated with VMware NSX-T for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of these Independent Software Vendors (ISV) have validated their solutions with Azure VMware Solution.
+Our security partners have industry-leading solutions in VMware-based environments that cover many aspects of the security ecosystem like threat protection and security scanning. Our customers have adopted many of these solutions integrated with VMware NSX-T Data Center for their on-premises deployments. As one of our key principles, we want to enable them to continue to use their investments and VMware solutions running on Azure. Many of these Independent Software Vendors (ISV) have validated their solutions with Azure VMware Solution.
You can find more information about these solutions here:
- [Bitdefender](https://businessinsights.bitdefender.com/expanding-security-support-for-azure-vmware-solution)
- [Trend Micro Deep Security](https://www.trendmicro.com/en_us/business/products/hybrid-cloud/deep-security.html)
-- [Check Point](https://www.checkpoint.com/cloudguard/cloud-network-security/iaas-public-cloud-security/)
+- [Check Point](https://www.checkpoint.com/cloudguard/cloud-network-security/iaas-public-cloud-security/)
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
description: Learn how copy and paste to and from a Windows VM using Bastion.
Previously updated : 04/18/2022
Last updated : 04/19/2022
# Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
Before you proceed, make sure you have the following items.
## <a name="configure"></a> Configure the bastion host
-By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything additional. This applies to both the Basic and the Standard SKU tier. If you want to disable the copy and paste feature, the Standard SKU is required.
+By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything else. This applies to both the Basic and the Standard SKU tier. If you want to disable this feature, you can disable it for web-based clients on the configuration page of your Bastion resource.
1. To view or change your configuration, in the portal, go to your Bastion resource.
1. Go to the **Configuration** page.
bastion Vm About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-about.md
description: Learn about VM connections and features when connecting using Azure
Previously updated : 04/18/2022
Last updated : 04/19/2022
You can use a variety of different methods to connect to a target VM. Some conne
## <a name="copy-paste"></a>Copy and paste
-You can copy and paste text between your local device and the remote session. Only text copy/paste is supported. By default, this feature is enabled. If you want to disable this feature, you can change the setting on the configuration page for your bastion host. To disable, your bastion host must be configured with the Standard SKU tier.
+You can copy and paste text between your local device and the remote session. Only text copy/paste is supported. By default, this feature is enabled. If you want to disable this feature for web-based clients, you can change the setting on the configuration page for your bastion host. To disable, your bastion host must be configured with the Standard SKU tier.
For steps and more information, see [Copy and paste - Windows VMs](bastion-vm-copy-paste.md).
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
If your CNAME record is in the correct format, DigiCert automatically verifies y
Automatic validation typically takes a few hours. If you don't see your domain validated in 24 hours, open a support ticket.
>[!NOTE]
->If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include DigiCert as a valid CA. A CAA record allows domain owners to specify with their DNS providers which CAs are authorized to issue certificates for their domain. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
+>If you have a Certificate Authority Authorization (CAA) record with your DNS provider, it must include the appropriate CA(s) for authorization. DigiCert is the CA for Microsoft and Verizon profiles. The Akamai profile obtains certificates from three CAs: GeoTrust, Let's Encrypt, and DigiCert. If a CA receives an order for a certificate for a domain that has a CAA record and that CA is not listed as an authorized issuer, it is prohibited from issuing the certificate to that domain or subdomain. For information about managing CAA records, see [Manage CAA records](https://support.dnsimple.com/articles/manage-caa-record/). For a CAA record tool, see [CAA Record Helper](https://sslmate.com/caa/).
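As an illustration, CAA records in zone-file syntax look like the following sketch; the domain is a placeholder, and the CA domains you authorize should match the CAs used by your CDN profile:

```
; illustrative zone-file entries authorizing CAs to issue for contoso.com
contoso.com.  IN  CAA  0 issue "digicert.com"
contoso.com.  IN  CAA  0 issue "letsencrypt.org"
```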
### Custom domain isn't mapped to your CDN endpoint
cdn Cdn Verizon Premium Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-verizon-premium-rules-engine.md
Previously updated : 05/31/2019
Last updated : 04/13/2022
To access the rules engine, you must first select **Manage** from the top of the
Select the **HTTP Large** tab, then select **Rules Engine**.
- ![Rules engine for HTTP](./media/cdn-rules-engine/cdn-http-rules-engine.png)
+ :::image type="content" source="./media/cdn-rules-engine/cdn-http-rules-engine.png" alt-text="Screenshot of rules engine for HTTP.":::
- Endpoints optimized for DSA:
To access the rules engine, you must first select **Manage** from the top of the
ADN is a term used by Verizon to specify DSA content. Any rules you create here are ignored by any endpoints in your profile that are not optimized for DSA.
- ![Rules engine for DSA](./media/cdn-rules-engine/cdn-dsa-rules-engine.png)
+ :::image type="content" source="./media/cdn-rules-engine/cdn-dsa-rules-engine.png" alt-text="Screenshot of rules engine for DSA.":::
## Tutorial
-1. From the **CDN profile** page, select **Manage**.
-
- ![CDN profile Manage button](./media/cdn-rules-engine/cdn-manage-btn.png)
-
- The CDN management portal opens.
+1. From the **CDN profile** page, select **Manage** to open the CDN management portal.
-2. Select the **HTTP Large** tab, then select **Rules Engine**.
-
- The options for a new rule are displayed.
-
- ![CDN new rule options](./media/cdn-rules-engine/cdn-new-rule.png)
+ :::image type="content" source="./media/cdn-rules-engine/cdn-manage-btn.png" alt-text="Screenshot of the manage button from the CDN profile.":::
+
+1. Select the **HTTP Large** tab, then select **Rules Engine**.
+
+1. Select **+ New** to create a new draft policy.
+
+ :::image type="content" source="./media/cdn-rules-engine/new-draft.png" alt-text="Screenshot of the create a new policy button.":::
+1. Give the policy a name. Select **Continue**, then select **+ Rule**.
+
+ :::image type="content" source="./media/cdn-rules-engine/new-draft-2.png" alt-text="Screenshot of the policy creation page.":::
+ > [!IMPORTANT]
+ > The order in which multiple rules are listed affects how they are handled. A subsequent rule may override the actions specified by a previous rule. For example, if you have a rule that allows access to a resource based on a request property and a rule that denies access to all requests, the second rule overrides the first one. Rules will override earlier rules only if they interact with the same properties.
+ >
-3. Enter a name in the **Name / Description** textbox.
+1. Enter a name in the **Name / Description** textbox.
-4. Identify the type of requests the rule applies to. Use the default match condition, **Always**.
+1. Select the **+** button and then select **Match** or **Select First Match** for the match logic. The difference between the two is described in [Request Identification](https://docs.edgecast.com/cdn/index.html#HRE/MatchesConcept.htm).
+
+1. Identify the type of requests the rule applies to. Use the default match condition, **Always**.
- ![CDN rule match condition](./media/cdn-rules-engine/cdn-request-type.png)
+ :::image type="content" source="./media/cdn-rules-engine/cdn-request-type.png" alt-text="Screenshot of the CDN rule match condition.":::
> [!NOTE]
> Multiple match conditions are available in the dropdown list. For information about the currently selected match condition, select the blue informational icon to its left.
>
- > For a detailed list of conditional expressions, see [Rules engine conditional expressions](cdn-verizon-premium-rules-engine-reference-match-conditions.md).
+ > For a detailed list of conditional expressions, see [Rules engine conditional expressions](cdn-verizon-premium-rules-engine-reference-match-conditions.md).
>
> For a detailed list of match conditions, see [Rules engine match conditions](cdn-verizon-premium-rules-engine-reference-match-conditions.md).
>
>
-5. To add a new feature, select the **+** button next to **Features**. In the dropdown on the left, select **Force Internal Max-Age**. In the textbox that appears, enter **300**. Do not change the remaining default values.
+1. To add a new feature, select the **+** button in the conditional statement.
- ![CDN rule feature](./media/cdn-rules-engine/cdn-new-feature.png)
+ :::image type="content" source="./media/cdn-rules-engine/cdn-new-feature.png" alt-text="Screenshot of the CDN rules feature in a rule.":::
+1. From the *category* drop-down, select **Caching**. Then from the *feature* drop-down, select **Force Internal Max-Age**. In the text box enter the value **300**. Leave the rest of the settings as default and select **Save** to complete the configuration of the rule.
+ > [!NOTE]
+ > Multiple features are available in the dropdown list. For information about the currently selected feature, select the blue informational icon to its left.
+ >
To access the rules engine, you must first select **Manage** from the top of the
> >
-6. Select **Add** to save the new rule. The new rule is now awaiting approval. After it has been approved, the status changes from **Pending XML** to **Active XML**.
-
- > [!IMPORTANT]
- > Rules changes can take up to 10 minutes to propagate through Azure CDN.
- >
- >
+1. Select **Lock Draft as Policy**. Once you lock the draft into a policy, you won't be able to add or update any rules within that policy.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-builder.png" alt-text="Screenshot of the CDN policy builder.":::
+
+1. Select **Deploy Request**.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-builder-2.png" alt-text="Screenshot of the deploy request button in policy builder.":::
+
+1. If this CDN profile is new with no previous rules or production traffic, you can select the environment as **Production** in the drop-down menu. Enter a description of the environment and then select **Create Deploy Request**.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-builder-environment.png" alt-text="Screenshot of the CDN policy builder environment.":::
+
+ > [!NOTE]
+ > Once the policy has been deployed, it will take about 30 minutes for it to propagate. If you want to add or update more rules, you'll need to duplicate the current rule and deploy the new policy.
+
+## Add rules to an existing policy deployed in production
+
+1. Select the policy that is deployed in production.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-production-overview.png" alt-text="Screenshot of the policy production overview page.":::
+
+1. Select **Duplicate** to clone the existing policy in production.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-production-duplicate.png" alt-text="Screenshot of the duplicate button on the policy overview page.":::
+
+1. Select the pencil icon to edit an existing rule or select **+ Rule** to add a new rule to the policy.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-production-edit.png" alt-text="Screenshot of the edit button and new rule for duplicate policy." lightbox="./media/cdn-rules-engine/policy-production-edit-expanded.png":::
+
+1. Once you're happy with the updates, follow steps 10-12 in the last section to deploy the policy.
+
+## Rules Engine staging environment
+
+* The staging environment provides a sandbox where you can test the new CDN configuration end to end without impacting the production environment. This configuration allows you to replicate traffic flow through your staging network to an origin server.
+* The staging environment is designed for functional testing and is at a smaller scale than the production CDN environment. Therefore, you shouldn't use this environment for scale, high volume or throughput testing.
+* Traffic should be kept under 50 Mbps or 500 requests per second.
+* Changes made to the staging environment will not affect your live site environment.
+* Testing HTTPS traffic using the staging environment will result in a TLS certificate mismatch.
+* Testing mechanism:
+ * After locking a draft into a policy, select **Deploy Request**. Select the environment as **Staging** and then select **Create Deploy Request**.
+
+ :::image type="content" source="./media/cdn-rules-engine/policy-staging.png" alt-text="Screenshot of a staging policy." lightbox="./media/cdn-rules-engine/policy-staging-expanded.png":::
+
  * Edit your local hosts file to create an A record for your endpoint or custom domain (see the illustrative entry after this list).
+ * Check the test asset for the custom domain in the browser and proceed without using HTTPS.
+
+ > [!NOTE]
  > Once a policy is deployed in the staging environment, it will take 15 minutes to propagate.
+ >
## See also
To access the rules engine, you must first select **Manage** from the top of the
- [Rules engine match conditions](cdn-verizon-premium-rules-engine-reference-match-conditions.md)
- [Rules engine conditional expressions](cdn-verizon-premium-rules-engine-reference-conditional-expressions.md)
- [Rules engine features](cdn-verizon-premium-rules-engine-reference-features.md)
-- [Azure Fridays: Azure CDN's powerful new premium features](https://azure.microsoft.com/documentation/videos/azure-cdns-powerful-new-premium-features/) (video)
+- [Azure Fridays: Azure CDN's powerful new premium features](https://azure.microsoft.com/documentation/videos/azure-cdns-powerful-new-premium-features/) (video)
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
The **Read** call takes images and documents as its input. They have the followi
* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
* For PDF and TIFF files, up to 2000 pages (only first two pages for the free tier) are processed.
-* The file size must be less than 50 MB (6 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
+* The file size must be less than 50 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels.
* The minimum height of the text to be extracted is 12 pixels for a 1024x768 image. This corresponds to about 8-point font text at 150 DPI.
## Supported languages
cognitive-services Luis Reference Prebuilt Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-prebuilt-domains.md
Previously updated : 09/27/2019
Last updated : 04/18/2022
#source: https://raw.githubusercontent.com/Microsoft/luis-prebuilt-domains/master/README.md
#acrolinx bug for exception: https://mseng.visualstudio.com/TechnicalContent/_workitems/edit/1518317
# Prebuilt domain reference for your LUIS app
+
This reference provides information about the [prebuilt domains](./howto-add-prebuilt-models.md), which are prebuilt collections of intents and entities that LUIS offers. [Custom domains](luis-how-to-start-new-app.md), by contrast, start with no intents and models. You can add any prebuilt domain intents and entities to a custom model.
This reference provides information about the [prebuilt domains](./howto-add-pre
The table below summarizes the currently supported domains. Support for English is usually more complete than others.
-| Entity Type | EN-US | ZH-CN | DE | FR | ES | IT | PT-BR | JP | KO | NL | TR |
-|::|:--:|:--:|:--:|:--:|:--:|:--:|:|:|:|:|:|
-| Calendar | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Communication | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Email | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| HomeAutomation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Notes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Places | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| RestaurantReservation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| ToDo | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Utilities | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Weather | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Web | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Entity Type | EN-US | ZH-CN | DE | FR | ES | IT | PT-BR | KO | NL | TR |
+|::|:--:|:--:|:--:|:--:|:--:|:--:|:|:|:|:|
+| Calendar | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Communication | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Email | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| HomeAutomation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Notes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Places | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| RestaurantReservation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| ToDo | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Utilities | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Weather | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Web | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Prebuilt domains are **not supported** in:
* French Canadian
* Hindi
* Spanish Mexican
+* Japanese
## Next steps
cognitive-services Reference Pattern Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/reference-pattern-syntax.md
description: Create entities to extract key data from user utterances in Languag
Previously updated : 04/14/2020- Last updated : 04/18/2022 # Pattern syntax
Pattern syntax is a template for an utterance. The template should contain words
> [!CAUTION] > Patterns only include machine-learning entity parents, not subentities.- Entities in patterns are surrounded by curly brackets, `{}`. Patterns can include entities, and entities with roles. [Pattern.any](concepts/entities.md#patternany-entity) is an entity only used in patterns. Pattern syntax supports the following syntax:
The **optional** syntax, with square brackets, can be nested two levels. For exa
|is a new form|matches outer optional word and non-optional words in pattern| |a new form|matches required words only|
-The **grouping** syntax, with parentheses, can be nested two levels. For example: `(({Entity1.RoleName1} | {Entity1.RoleName2} ) | {Entity2} )`. This feature allows any of the three entities to be matched.
+The **grouping** syntax, with parentheses, can be nested two levels. For example: `(({Entity1:RoleName1} | {Entity1:RoleName2} ) | {Entity2} )`. This feature allows any of the three entities to be matched.
If Entity1 is a Location with roles such as origin (Seattle) and destination (Cairo) and Entity 2 is a known building name from a list entity (RedWest-C), the following utterances would map to this pattern:
A combination of **grouping** with **or-ing** syntax has a limit of 2 vertical b
|No|( test1 &#x7c; test2 &#x7c; test3 &#x7c; ( test4 &#x7c; test5 ) ) | ## Syntax to add an entity to a pattern template+ To add an entity into the pattern template, surround the entity name with curly braces, such as `Who does {Employee} manage?`. |Pattern with entity|
To add an entity into the pattern template, surround the entity name with curly
|`Who does {Employee} manage?`| ## Syntax to add an entity and role to a pattern template+ An entity role is denoted as `{entity:role}` with the entity name followed by a colon, then the role name. To add an entity with a role into the pattern template, surround the entity name and role name with curly braces, such as `Book a ticket from {Location:Origin} to {Location:Destination}`. |Pattern with entity roles|
An entity role is denoted as `{entity:role}` with the entity name followed by a
|`Book a ticket from {Location:Origin} to {Location:Destination}`| ## Syntax to add a pattern.any to pattern template+ The Pattern.any entity allows you to add an entity of varying length to the pattern. As long as the pattern template is followed, the pattern.any can be any length. To add a **Pattern.any** entity into the pattern template, surround the Pattern.any entity with the curly braces, such as `How much does {Booktitle} cost and what format is it available in?`.
In the preceding table, the subject should be `the man from La Mancha` (a book t
To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8). ## Syntax to mark optional text in a template utterance+ Mark optional text in the utterance using the regular expression square bracket syntax, `[]`. The optional text can nest square brackets up to two brackets only. |Pattern with optional text|Meaning|
Learn more about patterns:
* [How to add pattern.any entity](how-to/entities.md#create-a-patternany-entity) * [Patterns Concepts](luis-concept-patterns.md)
-Understand how [sentiment](luis-reference-prebuilt-sentiment.md) is returned in the .json response.
+Understand how [sentiment](luis-reference-prebuilt-sentiment.md) is returned in the JSON response.
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
The following are aspects to consider when using captioning:
* Consider output formats such as SRT (SubRip Subtitle) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions to your video; a brief format comparison follows the tip below. > [!TIP]
-> Try the [Azure Video Analyzer for Media](/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
+> Try the [Azure Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
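As an illustration of the two formats, here is a minimal SRT cue (invented for this example, not taken from the article): cues are numbered, and the millisecond separator is a comma.

```text
1
00:00:00,000 --> 00:00:03,500
Welcome to the show.
```

The equivalent WebVTT file begins with a `WEBVTT` header line and uses a period instead of a comma in the timestamps (`00:00:00.000 --> 00:00:03.500`).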
Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
There are some situations where [training a custom model](custom-speech-overview
## Next steps * [Get started with speech to text](get-started-speech-to-text.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md
Use this sample code if:
If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
+For supported locales, see the [language support table](./language-support.md?tabs=IntentRecognitionPatternMatcher).
+ ## Prerequisites Be sure you have the following items before you begin this guide:
cognitive-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md
Use this sample code if:
If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
+For supported locales, see the [language support table](./language-support.md?tabs=IntentRecognitionPatternMatcher).
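As a minimal sketch of what the pattern matcher looks like in code (Java here; the key, region, intent pattern, and intent ID are illustrative placeholders, not values from this article):

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.intent.IntentRecognitionResult;
import com.microsoft.cognitiveservices.speech.intent.IntentRecognizer;

public class PatternMatcherSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder key and region; replace with your own Speech resource values.
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.setSpeechRecognitionLanguage("en-US"); // the only supported pattern-matcher locale today

        try (IntentRecognizer recognizer = new IntentRecognizer(config)) {
            // A simple pattern: {floorName} is an entity captured from the utterance.
            recognizer.addIntent("Take me to floor {floorName}.", "ChangeFloors");

            IntentRecognitionResult result = recognizer.recognizeOnceAsync().get();
            System.out.println("Intent: " + result.getIntentId() + ", text: " + result.getText());
        }
    }
}
```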
## Prerequisites
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table outlines supported languages for custom keyword and keyword
| Japanese (Japan) | ja-JP | No | Yes | | Portuguese (Brazil) | pt-BR | No | Yes |
+## Intent Recognition Pattern Matcher
+
+The Intent Recognition Pattern Matcher supports the following locales:
+
+| Locale | Locale (BCP-47) |
+|--|--|
+| English (United States) | `en-US` |
+ ## Next steps * [Region support](regions.md)
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
The [Speech SDK](speech-sdk.md) provides most of the functionalities that you ne
Use the following list to find the appropriate Speech SDK reference docs: -- <a href="https://aka.ms/csspeech/csharpref">C# SDK </a>-- <a href="https://aka.ms/csspeech/cppref">C++ SDK </a>-- <a href="https://aka.ms/csspeech/javaref">Java SDK </a>-- <a href="https://aka.ms/csspeech/pythonref">Python SDK</a>-- <a href="https://aka.ms/csspeech/javascriptref">JavaScript SDK</a>-- <a href="https://aka.ms/csspeech/objectivecref">Objective-C SDK </a>
+- <a href="/dotnet/api/overview/azure/cognitiveservices/client/speechservice">C# SDK </a>
+- <a href="/cpp/cognitive-services/speech/">C++ SDK </a>
+- <a href="/java/api/com.microsoft.cognitiveservices.speech">Java SDK </a>
+- <a href="/python/api/azure-cognitiveservices-speech/">Python SDK</a>
+- <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/">JavaScript SDK</a>
+- <a href="/objectivec/cognitive-services/speech/">Objective-C SDK </a>
> [!TIP] > The Speech service SDK is actively maintained and updated. To track changes, updates, and feature additions, see the [Speech SDK release notes](releasenotes.md).
For speech-to-text REST APIs, see the following resources:
## Next steps - [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)-- [Get the Speech SDK](speech-sdk.md)
+- [Get the Speech SDK](speech-sdk.md)
cognitive-services Smart Url Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/smart-url-refresh.md
If these two QnA pairs have individual prompts attached to them (for example, Q1
## Next steps
-* [Question answering quickstart](/azure/cognitive-services/language-service/question-answering/quickstart/sdk?pivots=studio)
-* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)
+* [Question answering quickstart](../quickstart/sdk.md?pivots=studio)
+* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
Additional information on required dataflows for joining Teams meetings is avail
## Chat storage
-During a Teams meeting, all chat messages sent by Teams users or Communication Services users are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Communication Services user in the meetings, there is also a copy of the most recently sent message that is stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. For more information, review the article [Region availability and data residency](/azure/communication-services/concepts/privacy).
+During a Teams meeting, all chat messages sent by Teams users or Communication Services users are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Communication Services user in the meetings, there is also a copy of the most recently sent message that is stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. For more information, review the article [Region availability and data residency](./privacy.md).
If the hosting Microsoft 365 organization has defined a retention policy that deletes chat messages for any of the Teams users in the meeting, then all copies of the most recently sent message that have been stored for Communication Services users will also be deleted in accordance with the policy. If there is not a retention policy defined, then the copies of the most recently sent message for all Communication Services users will be deleted after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams).
Microsoft will indicate to you via the Azure Communication Services API that rec
- [How-to: Join a Teams meeting](../how-tos/calling-sdk/teams-interoperability.md) - [Quickstart: Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
+- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Title: Quickstart - Add RAW media access to your app (Android) description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app using Azure Communication Services.-+ - Previously updated : 11/18/2021+ Last updated : 04/19/2022
In this quickstart, you'll learn how implement raw media access using the Azure Communication Services Calling SDK for Android.
-## Outbound virtual video device
- The Azure Communication Services Calling SDK offers APIs allowing apps to generate their own video frames to send to remote participants. This quick start builds upon [QuickStart: Add 1:1 video calling to your app](./get-started-with-video-calling.md?pivots=platform-android) for Android.
-## Overview
-
-Once an outbound virtual video device is created, use DeviceManager to make a new virtual video device that behaves just like any other webcam connected to your computer or mobile phone.
+## Virtual Video Stream Overview
Since the app will be generating the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats the app is capable of generating. This is required to allow the Azure Communication Services Calling SDK to pick the best video format configuration given the network conditions at any given time. The app must register a delegate to be notified when it should start or stop producing video frames. The delegate event will inform the app which video format is more appropriate for the current network conditions.
-The following is an overview of the steps required to create an outbound virtual video device.
-
-1. Create a `VirtualDeviceIdentification` with basic identification information for the new outbound virtual video device.
-
- ```java
- VirtualDeviceIdentification deviceId = new VirtualDeviceIdentification();
- deviceId.setId("QuickStartVirtualVideoDevice");
- deviceId.setName("My First Virtual Video Device");
- ```
+The following is an overview of the steps required to create a virtual video stream.
-2. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `MediaFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
+1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
```java ArrayList<VideoFormat> videoFormats = new ArrayList<VideoFormat>();
The following is an overview of the steps required to create an outbound virtual
format.setWidth(1280); format.setHeight(720); format.setPixelFormat(PixelFormat.RGBA);
- format.setMediaFrameKind(MediaFrameKind.VIDEO_SOFTWARE);
+ format.setMediaFrameKind(VideoFrameKind.VIDEO_SOFTWARE);
format.setFramesPerSecond(30); format.setStride1(1280 * 4); // It is times 4 because RGBA is a 32-bit format. videoFormats.add(format); ```
-3. Create `OutboundVirtualVideoDeviceOptions` and set `DeviceIdentification` and `VideoFormats` with the previously created objects.
+2. Create `OutgoingVirtualVideoStreamOptions` and set `VideoFormats` with the previously created object.
```java
- OutboundVirtualVideoDeviceOptions m_options = new OutboundVirtualVideoDeviceOptions();
-
- // ...
-
- m_options.setDeviceIdentification(deviceId);
- m_options.setVideoFormats(videoFormats);
+ OutgoingVirtualVideoStreamOptions options = new OutgoingVirtualVideoStreamOptions();
+ options.setVideoFormats(videoFormats);
```
-4. Make sure the `OutboundVirtualVideoDeviceOptions::OnFlowChanged` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `m_mediaFrameSender` is used as trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
+3. Subscribe to the `OutgoingVirtualVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate reports the state of the current stream; it's important that you do not send frames if the state is not equal to `OutgoingVideoStreamState.STARTED`.
```java
- private MediaFrameSender m_mediaFrameSender;
+ private OutgoingVideoStreamState outgoingVideoStreamState;
- // ...
+ options.addOnOutgoingVideoStreamStateChangedListener(event -> {
- m_options.addOnFlowChangedListener(virtualDeviceFlowControlArgs -> {
- if (virtualDeviceFlowControlArgs.getMediaFrameSender().getRunningState() == VirtualDeviceRunningState.STARTED) {
- // Tell the app's frame generator to start producing frames.
- m_mediaFrameSender = virtualDeviceFlowControlArgs.getMediaFrameSender();
- } else {
- // Tell the app's frame generator to stop producing frames.
- m_mediaFrameSender = null;
- }
+ outgoingVideoStreamState = event.getOutgoingVideoStreamState();
}); ```
-5. Use `Device
+4. Make sure the `OutgoingVirtualVideoStreamOptions::addOnVideoFrameSenderChangedListener` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `mediaFrameSender` is used as a trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
```java
- private OutboundVirtualVideoDevice m_outboundVirtualVideoDevice;
+ private VideoFrameSender mediaFrameSender;
- // ...
+ options.addOnVideoFrameSenderChangedListener(event -> {
- m_outboundVirtualVideoDevice = m_deviceManager.createOutboundVirtualVideoDevice(m_options).get();
+ mediaFrameSender = event.getMediaFrameSender();
+ });
```
-6. Tell device manager to use the recently created virtual camera on calls.
+5. Create an instance of `VirtualVideoStream` using the `OutgoingVirtualVideoStreamOptions` we created previously.
```java
- private LocalVideoStream m_localVideoStream;
-
- // ...
+ private VirtualVideoStream virtualVideoStream;
- for (VideoDeviceInfo videoDeviceInfo : m_deviceManager.getCameras())
- {
- String deviceId = videoDeviceInfo.getId();
- if (deviceId.equalsIgnoreCase("QuickStartVirtualVideoDevice")) // Same id used in step 1.
- {
- m_localVideoStream = LocalVideoStream(videoDeviceInfo, getApplicationContext());
- }
- }
+ virtualVideoStream = new VirtualVideoStream(options);
```
-7. In a non-UI thread or loop in the app, cast the `MediaFrameSender` to the appropriate type defined by the `MediaFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrame` and then call the `send` method according to the number of planes defined by the MediaFormat.
+6. Once `outgoingVideoStreamState` is equal to `OutgoingVideoStreamState.STARTED`, create an instance of the `FrameGenerator` class. This starts a non-UI thread that sends frames. Call `FrameGenerator.SetVideoFrameSender` each time you get an updated `VideoFrameSender` on the previous delegate. Cast the `VideoFrameSender` to the appropriate type defined by the `VideoFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrameSender` and then call the `send` method according to the number of planes defined by the MediaFormat.
After that, create the ByteBuffer backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API. ```java
- java.nio.ByteBuffer plane1 = null;
- Random rand = new Random();
- byte greyValue = 0;
-
- // ...
- java.nio.ByteBuffer plane1 = null;
- Random rand = new Random();
-
- while (m_outboundVirtualVideoDevice != null) {
- while (m_mediaFrameSender != null) {
- if (m_mediaFrameSender.getMediaFrameKind() == MediaFrameKind.VIDEO_SOFTWARE) {
- SoftwareBasedVideoFrame sender = (SoftwareBasedVideoFrame) m_mediaFrameSender;
+ public class FrameGenerator {
+
+ private VideoFrameSender videoFrameSender;
+ private Thread frameIteratorThread;
+ private final Random random;
+ private volatile boolean stopFrameIterator = false;
+
+ public FrameGenerator() {
+
+ random = new Random();
+ }
+
+ public void FrameIterator() {
+
+ ByteBuffer plane = null;
+ while (!stopFrameIterator && videoFrameSender != null) {
+
+ plane = GenerateFrame(plane);
+ }
+ }
+
+ private ByteBuffer GenerateFrame(ByteBuffer plane)
+ {
+ try {
+
+ SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
VideoFormat videoFormat = sender.getVideoFormat();
+ long timeStamp = sender.getTimestamp();
- // Gets the timestamp for when the video frame has been created.
- // This allows better synchronization with audio.
- int timeStamp = sender.getTimestamp();
+ if (plane == null || videoFormat.getStride1() * videoFormat.getHeight() != plane.capacity()) {
- // Adjusts frame dimensions to the video format that network conditions can manage.
- if (plane1 == null || videoFormat.getStride1() * videoFormat.getHeight() != plane1.capacity()) {
- plane1 = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
- plane1.order(ByteOrder.nativeOrder());
+ plane = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
+ plane.order(ByteOrder.nativeOrder());
}
- // Generates random gray scaled bands as video frame.
- int bandsCount = rand.nextInt(15) + 1;
+ int bandsCount = random.nextInt(15) + 1;
int bandBegin = 0; int bandThickness = videoFormat.getHeight() * videoFormat.getStride1() / bandsCount; for (int i = 0; i < bandsCount; ++i) {
- byte greyValue = (byte)rand.nextInt(254);
- java.util.Arrays.fill(plane1.array(), bandBegin, bandBegin + bandThickness, greyValue);
+
+ byte greyValue = (byte) random.nextInt(254);
+ java.util.Arrays.fill(plane.array(), bandBegin, bandBegin + bandThickness, greyValue);
bandBegin += bandThickness; }
- // Sends video frame to the other participants in the call.
- FrameConfirmation fr = sender.sendFrame(plane1, timeStamp).get();
+ FrameConfirmation fr = sender.sendFrame(plane, timeStamp).get();
- // Waits before generating the next video frame.
- // Video format defines how many frames per second app must generate.
Thread.sleep((long) (1000.0f / videoFormat.getFramesPerSecond())); }
+ catch (InterruptedException ex) {
+
+ ex.printStackTrace();
+ }
+ catch (ExecutionException ex2)
+ {
+ ex2.printStackTrace();
+ }
+
+ return plane;
}
- // Virtual camera hasn't been created yet.
- // Let's wait a little bit before checking again.
- // This is for demo only purposes.
- // Feel free to use a better synchronization mechanism.
- Thread.sleep(100);
+ private void StartFrameIterator()
+ {
+ frameIteratorThread = new Thread(this::FrameIterator);
+ frameIteratorThread.start();
+ }
+
+ public void StopFrameIterator()
+ {
+ try
+ {
+ if (frameIteratorThread != null)
+ {
+ stopFrameIterator = true;
+ frameIteratorThread.join();
+ frameIteratorThread = null;
+ stopFrameIterator = false;
+ }
+ }
+ catch (InterruptedException ex)
+ {
+ ex.printStackTrace();
+ }
+ }
+
+ public void SetVideoFrameSender(VideoFrameSender videoFrameSender) {
+
+ StopFrameIterator();
+ this.videoFrameSender = videoFrameSender;
+ StartFrameIterator();
+ }
} ```+
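As a usage sketch, the pieces above can be wired together as follows. This is illustrative only: it reuses `videoFormats` from step 1 and the listeners from steps 3 and 4, and attaching the resulting stream to a call is left to your calling code.

```java
// Illustrative wiring of the preceding steps.
OutgoingVirtualVideoStreamOptions options = new OutgoingVirtualVideoStreamOptions();
options.setVideoFormats(videoFormats); // videoFormats built in step 1

// Track the stream state so frames are only produced while STARTED.
options.addOnOutgoingVideoStreamStateChangedListener(event -> {
    outgoingVideoStreamState = event.getOutgoingVideoStreamState();
});

// Restart frame production whenever the SDK hands the app a new sender.
FrameGenerator frameGenerator = new FrameGenerator();
options.addOnVideoFrameSenderChangedListener(event -> {
    frameGenerator.SetVideoFrameSender(event.getMediaFrameSender());
});

VirtualVideoStream virtualVideoStream = new VirtualVideoStream(options);
```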
+## Screen Share Video Stream Overview
+
+Repeat steps 1 through 4 from the previous VirtualVideoStream section.
+
+Since the Android system generates the frames, you have to implement your own foreground service to capture the frames and send them through the Azure Communication Services Calling SDK API.
+
+The following is an overview of the steps required to create a screen share video stream.
+
+1. Add this permission to your `AndroidManifest.xml` file inside your Android project:
+
+ ```xml
+ <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
+ ```
+
+2. Create an instance of `ScreenShareVideoStream` using the `OutgoingVirtualVideoStreamOptions` we created previously.
+
+ ```java
+ private ScreenShareVideoStream screenShareVideoStream;
+
+ screenShareVideoStream = new ScreenShareVideoStream(options);
+ ```
+
+3. Request the permissions needed for screen capture on Android. Once this method is called, Android automatically calls `onActivityResult` with the request code we sent and the result of the operation. Expect `Activity.RESULT_OK` if the user granted the permission; if so, attach the `screenShareVideoStream` to the call and start your own foreground service to capture the frames.
+
+ ```java
+ public void GetScreenSharePermissions() {
+
+ try {
+
+ MediaProjectionManager mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
+ startActivityForResult(mediaProjectionManager.createScreenCaptureIntent(), Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE);
+ } catch (Exception e) {
+
+ String error = "Could not start screen share due to failure to startActivityForResult for mediaProjectionManager screenCaptureIntent";
+ }
+ }
+
+ @Override
+ protected void onActivityResult(int requestCode, int resultCode, Intent data) {
+
+ super.onActivityResult(requestCode, resultCode, data);
+
+ if (requestCode == Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE) {
+
+ if (resultCode == Activity.RESULT_OK && data != null) {
+
+ // Attach the screenShareVideoStream to the call
+ // Start your foreground service
+ } else {
+
+ String error = "user cancelled, did not give permission to capture screen";
+ }
+ }
+ }
+ ```
+
+4. Once you receive a frame in your foreground service, send it using the `VideoFrameSender` provided:
+
+ ```java
+ public void onImageAvailable(ImageReader reader) {
+
+ Image image = reader.acquireLatestImage();
+ if (image != null) {
+
+ final Image.Plane[] planes = image.getPlanes();
+ if (planes.length > 0) {
+
+ Image.Plane plane = planes[0];
+ final ByteBuffer buffer = plane.getBuffer();
+ try {
+
+ SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
+ sender.sendFrame(buffer, sender.getTimestamp()).get();
+ } catch (Exception ex) {
+
+ Log.d("MainActivity", "MainActivity.onImageAvailable trace, failed to send Frame");
+ }
+ }
+
+ image.close();
+ }
+ }
+ ```
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
Marblerun supports confidential containers created with Graphene, Occlum, and EG
## Confidential Containers reference architectures - [Confidential data messaging for healthcare reference architecture and sample with Intel SGX confidential containers](https://github.com/Azure-Samples/confidential-container-samples/blob/main/confidential-healthcare-scone-confinf-onnx/README.md). -- [Confidential big-data processing with Apache Spark on AKS with Intel SGX confidential containers](https://docs.microsoft.com/azure/architecture/example-scenario/confidential/data-analytics-containers-spark-kubernetes-azure-sql).
+- [Confidential big-data processing with Apache Spark on AKS with Intel SGX confidential containers](/azure/architecture/example-scenario/confidential/data-analytics-containers-spark-kubernetes-azure-sql).
## Get in touch
Do you have questions about your implementation? Do you want to become an enable
- [Deploy AKS cluster with Intel SGX Confidential VM Nodes](./confidential-enclave-nodes-aks-get-started.md) - [Microsoft Azure Attestation](../attestation/overview.md) - [Intel SGX Confidential Virtual Machines](virtual-machine-solutions-sgx.md)-- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
+- [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Now that you have a container app environment in Azure you can create a containe
9) Choose **External** to configure the HTTP traffic that the endpoint will accept.
-10) Leave the default value of 80 for the port, and then select **Enter** to complete the workflow.
+10) Enter a value of 3000 for the port, and then select **Enter** to complete the workflow. This value should be set to the port number that your container uses, which in the case of the sample app is 3000.
During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also be deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Click this link, and to view your app in the browser.
cosmos-db Audit Restore Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md
+
+ Title: Auditing the point in time restore action for continuous backup mode in Azure Cosmos DB
+description: This article provides details available to audit Azure Cosmos DB's point in time restore feature in continuous backup mode.
+++ Last updated : 04/18/2022++++
+# Audit the point in time restore action for continuous backup mode in Azure Cosmos DB
+
+Azure Cosmos DB provides a list of all the point-in-time restores for continuous mode that were performed on a Cosmos DB account using [Activity Logs](../azure-monitor/essentials/activity-log.md). Activity logs can be viewed for any Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point-in-time restore is triggered, it shows up as a `Restore Database Account` operation on both the source account and the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get updates about the progress of the restore.
+
+## Audit the restores that were triggered on a live database account
+
+When a restore is triggered on a source account, a log is emitted with the status *Started*. When the restore succeeds or fails, a new log is emitted with the status *Succeeded* or *Failed*, respectively.
+
+To get the list of just the restore operations that were triggered on a specific account, you can open the Activity Log of the source account, and search for **Restore database account** in the search bar with the required **Timespan** filter. The `UserPrincipalName` of the user that triggered the restore can be found from the `Event initiated by` column.
++
+The parameters of the restore request can be found by clicking on the event and selecting the JSON tab:
++
+## Audit the restores that were triggered on a deleted database account
+
+For accounts that have already been deleted, there is no database account page. Instead, the Activity Log on the subscription page can be used to find the restores that were triggered on a deleted account. Once the Activity Log page is opened, a new filter can be added to narrow down the results to the specific resource group the account existed in, or by using the database account name in the Resource filter. The Resource for the activity log is the database account on which the restore was triggered.
++
+The activity logs can also be accessed using Azure CLI or Azure PowerShell. For more information on activity logs, review [Azure Activity log - Azure Monitor](../azure-monitor/essentials/activity-log.md).
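For example, the following Azure CLI call lists recent activity-log events in a resource group and filters them to the restore operation. This is a sketch only: the resource group name and the 30-day window are placeholders, and the JMESPath filter assumes the operation's display name is `Restore Database Account` as shown above.

```azurecli-interactive
az monitor activity-log list \
    --resource-group MyResourceGroup \
    --offset 30d \
    --query "[?contains(operationName.localizedValue, 'Restore Database Account')]"
```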
+
+## Track the progress of the restore operation
+
+Azure Cosmos DB allows you to track the progress of the restore using the activity logs of the restored database account. Once the restore is triggered, you will see a notification with the title **Restore Account**.
++
+While the restore is in progress, the account status is *Creating*, but the account already has an Activity Log page. A new log event will appear after the restore of each collection. Note that there can be a delay of 5-10 minutes before the log event appears after the actual restore of the collection is complete.
+
+ ## Next steps
+
+ * Learn more about [continuous backup](continuous-backup-restore-introduction.md) mode.
+ * Provision an account with continuous backup by using the [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), the [Azure CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
+ * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
+ * Learn about the [resource model of continuous backup mode](continuous-backup-restore-resource-model.md).
+ * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml).
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
# Manage permissions to restore an Azure Cosmos DB account Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope as shown in the following image:
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Title: Restore an Azure Cosmos DB account that uses continuous backup mode.
-description: Learn how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows how to use the event feed to identify the restore time and restore the account using Azure portal, PowerShell, CLI, or a Resource Manager template.
+description: Learn how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows how to use the event feed to identify the restore time and restore the account using Azure portal, PowerShell, CLI, or an Azure Resource Manager template.
Previously updated : 12/09/2021 Last updated : 04/18/2022 -+ # Restore an Azure Cosmos DB account that uses continuous backup mode Azure Cosmos DB's point-in-time restore feature helps you to recover from an accidental change within a container, to restore a deleted account, database, or container, or to restore into any region (where backups existed). The continuous backup mode allows you to restore to any point in time within the last 30 days.
-This article describes how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows restore the account using [Azure portal](#restore-account-portal), [PowerShell](#restore-account-powershell), [CLI](#restore-account-cli), or a [Resource Manager template](#restore-arm-template).
+This article describes how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows how to restore the account using [Azure portal](#restore-account-portal), [PowerShell](#restore-account-powershell), [CLI](#restore-account-cli), or an [Azure Resource Manager template](#restore-arm-template).
+
+> [!NOTE]
+> Currently in preview, the restore action for Table API and Gremlin API is supported via PowerShell and the Azure CLI.
## <a id="restore-account-portal"></a>Restore an account using Azure portal
Deleting source account while a restore is in-progress could result in failure o
### Restorable timestamp for live accounts
-To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to it's latest version.
+To restore Azure Cosmos DB live accounts that are not deleted, it is a best practice to always identify the [latest restorable timestamp](get-latest-restore-timestamp.md) for the container. You can then use this timestamp to restore the account to its latest version.
### <a id="event-feed"></a>Use event feed to identify the restore time
Use the following steps to get the restore details from Azure portal:
1. Navigate to the **Export template** pane. It opens a JSON template, corresponding to the restored account.
-1. The **resources** > **properties** > **restoreParameters** object contains the restore details. The **restoreTimestampInUtc** gives you the time at which the account was restored and the **databasesToRestore** shows the specific database and container from which the account was restored.
- ## <a id="restore-account-powershell"></a>Restore an account using Azure PowerShell Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands:
Before restoring the account, install the [latest version of Azure PowerShell](/
```azurepowershell Select-AzSubscription -Subscription <SubscriptionName>
-### <a id="trigger-restore-ps"></a>Trigger a restore operation
+### <a id="trigger-restore-ps"></a>Trigger a restore operation for SQL API account
The following cmdlet is an example to trigger a restore operation with the restore command by using the target account, source account, location, resource group, and timestamp:
Restore-AzCosmosDBAccount `
-Location "West US" ```
+**Example 3:** Restoring a Gremlin API account. This example restores the graphs *graph1* and *graph2* from *MyDB1* and the entire database *MyDB2*, which includes all the graphs under it.
+
+```azurepowershell
+$databaseToRestore1 = New-AzCosmosDBGremlinDatabaseToRestore -DatabaseName "MyDB1" -GraphName "graph1", "graph2"
+$databaseToRestore2 = New-AzCosmosDBGremlinDatabaseToRestore -DatabaseName "MyDB2"
+
+Restore-AzCosmosDBAccount `
+ -TargetResourceGroupName "MyRG" `
+ -TargetDatabaseAccountName "Pitracct" `
+ -SourceDatabaseAccountName "SourceGremlin" `
+ -RestoreTimestampInUtc "2022-04-05T22:06:00" `
+ -DatabasesToRestore $databaseToRestore1, $databaseToRestore2 `
+ -Location "West US"
+
+```
+
+**Example 4:** Restoring a Table API account. This example restores the tables *table1* and *table2*.
+
+```azurepowershell
+$tablesToRestore = New-AzCosmosDBTableToRestore -TableName "table1", "table2"
+
+Restore-AzCosmosDBAccount `
+ -TargetResourceGroupName "MyRG" `
+ -TargetDatabaseAccountName "Pitracct" `
+ -SourceDatabaseAccountName "SourceTable" `
+ -RestoreTimestampInUtc "2022-04-06T22:06:00" `
+ -TablesToRestore $tablesToRestore `
+ -Location "West US"
+```
### <a id="get-the-restore-details-powershell"></a>Get the restore details from the restored account
Get-AzCosmosdbMongoDBRestorableDatabase `
```
-#### List all the versions of mongodb collections of a database in a live database account
+#### List all the versions of MongoDB collections of a database in a live database account
```azurepowershell
Get-AzCosmosdbMongoDBRestorableCollection `
-Location "West US" ```
-#### List all the resources of a mongodb database account that are available to restore at a given timestamp and region
+#### List all the resources of a MongoDB database account that are available to restore at a given timestamp and region
```azurepowershell
Get-AzCosmosdbMongoDBRestorableResource `
-RestoreLocation "West US" ` -RestoreTimestamp "2020-07-20T16:09:53+0000" ```
+### <a id="enumerate-gremlin-api-ps"></a>Enumerate restorable resources for Gremlin API
+
+The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and graph resources.
+
+#### List all the versions of Gremlin databases in a live database account
+
+Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of database is unknown.
+Run the following PowerShell command to list all the versions of databases. This command only works with live accounts. The `DatabaseAccountInstanceId` and the `Location` parameters are obtained from the `name` and `location` properties in the response of the `Get-AzCosmosDBRestorableDatabaseAccount` cmdlet. The `DatabaseAccountInstanceId` attribute refers to the `instanceId` property of the source database account being restored:
+
+```azurepowershell
+Get-AzCosmosdbGremlinRestorableDatabase `
+ -Location "East US" `
+ -DatabaseAccountInstanceId <DatabaseAccountInstanceId>
+```
+
+#### List all the versions of Gremlin graphs of a database in a live database account
+
+Use the following command to list all the versions of Gremlin API graphs. This command only works with live accounts. The `DatabaseRId` parameter is the `ResourceId` of the database you want to restore. It is the value of the `ownerResourceId` attribute found in the response of the `Get-AzCosmosdbGremlinRestorableDatabase` cmdlet. The response also includes a list of operations performed on all the graphs inside this database.
+
+```azurepowershell
+Get-AzCosmosdbGremlinRestorableGraph `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -DatabaseRId "AoQ13r==" `
+ -Location "West US"
+```
+
+#### Find databases or graphs that can be restored at any given timestamp
+
+Use the following command to get the list of databases or graphs that can be restored at any given timestamp. This command only works with live accounts.
+
+```azurepowershell
+Get-AzCosmosdbGremlinRestorableResource `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -Location "West US" `
+ -RestoreLocation "East US" `
+ -RestoreTimestamp "2020-07-20T16:09:53+0000"
+```
+
+### <a id="enumerate-table-api-ps"></a>Enumerate restorable resources for Table API
+
+The enumeration cmdlets help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account and table resources.
+
+#### List all the versions of tables of a database in a live database account
+
+Use the following command to list all the versions of tables. This command only works with live accounts.
+
+```azurepowershell
+Get-AzCosmosdbTableRestorableTable `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -Location "West US"
+```
+
+#### Find tables that can be restored at any given timestamp
+
+Use the following command to get the list of tables that can be restored at any given timestamp. This command only works with live accounts.
+
+```azurepowershell
+Get-AzCosmosdbTableRestorableResource `
+ -DatabaseAccountInstanceId "d056a4f8-044a-436f-80c8-cd3edbc94c68" `
+ -Location "West US" `
+ -RestoreLocation "East US" `
+ -RestoreTimestamp "2020-07-20T16:09:53+0000"
+```
+ ## <a id="restore-account-cli"></a>Restore an account using Azure CLI
Before restoring the account, install Azure CLI with the following steps:
1. Install the latest version of Azure CLI
- * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli) or version higher than 2.26.0
+ * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli) or version higher than 2.26.0.
* If you have already installed CLI, run `az upgrade` command to update to the latest version. This command will only work with CLI version higher than 2.11. If you have an earlier version, use the above link to install the latest version. 1. Sign in and select your subscription
- * Sign into your Azure account with `az login` command.
+ * Sign in to your Azure account with `az login` command.
* Select the required subscription using `az account set -s <subscriptionguid>` command.
-### <a id="trigger-restore-cli"></a>Trigger a restore operation with CLI
+### <a id="trigger-restore-cli"></a>Trigger a restore operation with Azure CLI
The simplest way to trigger a restore is by issuing the restore command with name of the target account, source account, location, resource group, timestamp (in UTC), and optionally the database and container names. The following are some examples to trigger the restore operation:
-1. Create a new Azure Cosmos DB account by restoring from an existing account.
+#### Create a new Azure Cosmos DB account by restoring from an existing account
```azurecli-interactive
The simplest way to trigger a restore is by issuing the restore command with nam
```
-2. Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account.
+#### Create a new Azure Cosmos DB account by restoring only selected databases and containers from an existing database account
```azurecli-interactive
The simplest way to trigger a restore is by issuing the restore command with nam
--databases-to-restore name=MyDB2 collections=Collection3 Collection4 ```
+#### Create a new Azure Cosmos DB Gremlin API account by restoring only selected databases and graphs from an existing Gremlin API account
+
+ ```azurecli-interactive
+
+ az cosmosdb restore \
+ --resource-group MyResourceGroup \
+ --target-database-account-name MyRestoredCosmosDBDatabaseAccount \
+ --account-name MySourceAccount \
+ --restore-timestamp 2022-04-13T16:03:41+0000 \
+ --location "West US" \
+ --gremlin-databases-to-restore name=MyDB1 graphs=graph1 graph2 \
+ --gremlin-databases-to-restore name=MyDB2 graphs=graph3 graph4
+ ```
+
+ #### Create a new Azure Cosmos DB Table API account by restoring only selected tables from an existing Table API account
+
+ ```azurecli-interactive
+
+ az cosmosdb restore \
+ --resource-group MyResourceGroup \
+ --target-database-account-name MyRestoredCosmosDBDatabaseAccount \
+ --account-name MySourceAccount \
+ --restore-timestamp 2022-04-14T06:03:41+0000 \
+ --location "West US" \
+ --tables-to-restore table1 table2
+ ```
### <a id="get-the-restore-details-cli"></a>Get the restore details from the restored account
-Run the following command to get the restore details. The `az cosmosdb show` command output shows the value of `createMode` property. If the value is set to **Restore**. it indicates that the account was restored from another account. The `restoreParameters` property has further details such as `restoreSource`, which has the source account ID. The last GUID in the `restoreSource` parameter is the instanceId of the source account. And the restoreTimestamp will be under the restoreParameters object:
+Run the following command to get the restore details. The `az cosmosdb show` command output shows the value of `createMode` property. If the value is set to **Restore**, it indicates that the account was restored from another account. The `restoreParameters` property has further details such as `restoreSource`, which has the source account ID. The last GUID in the `restoreSource` parameter is the `instanceId` of the source account. And the `restoreTimestamp` will be under the `restoreParameters` object:
```azurecli-interactive az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup ```
-### <a id="enumerate-sql-api"></a>Enumerate restorable resources for SQL API
+### <a id="enumerate-sql-api-cli"></a>Enumerate restorable resources for SQL API
The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. #### List all the accounts that can be restored in the current subscription
-Run the following CLI command to list all the accounts that can be restored in the current subscription
+Run the following Azure CLI command to list all the accounts that can be restored in the current subscription:
```azurecli-interactive az cosmosdb restorable-database-account list --account-name "Pitracct" ```
-The response includes all the database accounts (both live and deleted) that can be restored and the regions that they can be restored from:
+The response includes all the database accounts (both live and deleted) that can be restored, and the regions that they can be restored from:
```json {
The response includes all the database accounts (both live and deleted) that can
"apiType": "Sql", "creationTime": "2021-01-08T23:34:11.095870+00:00", "deletionTime": null,
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/abcd1234-d1c0-4645-a699-abcd1234",
"identity": null, "location": "West US",
- "name": "7133a59a-d1c0-4645-a699-6e296d6ac865",
+ "name": "abcd1234-d1c0-4645-a699-abcd1234",
"restorableLocations": [ { "creationTime": "2021-01-08T23:34:11.095870+00:00",
Just like the `CreationTime` or `DeletionTime` for the account, there is a `Crea
Listing all the versions of databases allows you to choose the right database in a scenario where the actual time of existence of database is unknown.
-Run the following CLI command to list all the versions of databases. This command only works with live accounts. The `instance-id` and the `location` parameters are obtained from the `name` and `location` properties in the response of `az cosmosdb restorable-database-account list` command. The instanceId attribute is also a property of source database account that is being restored:
+Run the following Azure CLI command to list all the versions of databases. This command only works with live accounts. The `instance-id` and the `location` parameters are obtained from the `name` and `location` properties in the response of `az cosmosdb restorable-database-account list` command. The `instanceId` attribute is also a property of source database account that is being restored:
```azurecli-interactive az cosmosdb sql restorable-database list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--location "West US" ```
This command output now shows when a database was created and deleted.
```json [ {
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865/restorableSqlDatabases/40e93dbd-2abe-4356-a31a-35567b777220",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/abcd1234-d1c0-4645-a699-abcd1234/restorableSqlDatabases/40e93dbd-2abe-4356-a31a-35567b777220",
.. "name": "40e93dbd-2abe-4356-a31a-35567b777220", "resource": {
This command output now shows when a database was created and deleted.
.. }, {
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/7133a59a-d1c0-4645-a699-6e296d6ac865/restorableSqlDatabases/243c38cb-5c41-4931-8cfb-5948881a40ea",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/abcd1234-d1c0-4645-a699-abcd1234/restorableSqlDatabases/243c38cb-5c41-4931-8cfb-5948881a40ea",
.. "name": "243c38cb-5c41-4931-8cfb-5948881a40ea", "resource": {
Use the following command to list all the versions of SQL containers. This comma
```azurecli-interactive az cosmosdb sql restorable-container list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--database-rid "OIQ1AA==" \ --location "West US" ```
Use the following command to get the list of databases or containers that can be
```azurecli-interactive az cosmosdb sql restorable-resource list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--location "West US" \ --restore-location "West US" \ --restore-timestamp "2021-01-10T01:00:00+0000"
az cosmosdb sql restorable-resource list \
] ```
-### <a id="enumerate-mongodb-api"></a>Enumerate restorable resources for MongoDB API account
+### <a id="enumerate-mongodb-api-cli"></a>Enumerate restorable resources for MongoDB API account
The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and container resources. These commands only work for live accounts.
-#### List all the versions of mongodb databases in a live database account
+#### List all the versions of MongoDB databases in a live database account
```azurecli-interactive az cosmosdb mongodb restorable-database list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--location "West US" ```
-#### List all the versions of mongodb collections of a database in a live database account
+#### List all the versions of MongoDB collections of a database in a live database account
```azurecli-interactive az cosmosdb mongodb restorable-collection list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--database-rid "AoQ13r==" \ --location "West US" ```
az cosmosdb mongodb restorable-collection list \
```azurecli-interactive az cosmosdb mongodb restorable-resource list \
- --instance-id "7133a59a-d1c0-4645-a699-6e296d6ac865" \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
--location "West US" \ --restore-location "West US" \ --restore-timestamp "2020-07-20T16:09:53+0000" ```
-## <a id="restore-arm-template"></a>Restore using the Resource Manager template
-You can also restore an account using Resource Manager template. When defining the template include the following parameters:
-* Set the `createMode` parameter to *Restore*
-* Define the `restoreParameters`, notice that the `restoreSource` value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account. The Instance ID attribute for your account name is used to do the restore.
-* Set the `restoreMode` parameter to *PointInTime* and configure the `restoreTimestampInUtc` value.
+### <a id="enumerate-gremlin-api-cli"></a>Enumerate restorable resources for Gremlin API account
+
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. Additionally, they also provide a feed of key events on the restorable account, database, and graph resources. These commands only work for live accounts.
+
+#### List all the versions of Gremlin databases in a live database account
+
+```azurecli-interactive
+az cosmosdb gremlin restorable-database list \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
+ --location "West US"
+```
+
+This command output now shows when a database was created and deleted.
+```json
+[ {
+ "id": "/subscriptions/abcd1234-b6ac-4328-a753-abcd1234/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/abcd1234-4316-483b-8308-abcd1234/restorableGremlinDatabases/abcd1234-0e32-4036-ac9d-abcd1234",
+ "name": "abcd1234-0e32-4036-ac9d-abcd1234",
+ "resource": {
+ "eventTimestamp": "2022-02-09T17:10:18Z",
+ "operationType": "Create",
+ "ownerId": "db1",
+ "ownerResourceId": "1XUdAA==",
+ "rid": "ymn7kwAAAA=="
+ },
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableGremlinDatabases"
+
+ }
+]
+```
+
+#### List all the versions of Gremlin graphs of a database in a live database account
+
+```azurecli-interactive
+az cosmosdb gremlin restorable-graph list \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
+ --database-rid "OIQ1AA==" \
+ --location "West US"
+```
+
+This command output includes a list of operations performed on all the graphs inside this database:
+```json
+[ {
+
+ "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/a00d591d-4316-483b-8308-44193c5f3073/restorableGraphs/1792cead-4307-4032-860d-3fc30bd46a20",
+ "name": "1792cead-4307-4032-860d-3fc30bd46a20",
+ "resource": {
+ "eventTimestamp": "2022-02-09T17:10:31Z",
+ "operationType": "Create",
+ "ownerId": "graph1",
+ "ownerResourceId": "1XUdAPv9duQ=",
+ "rid": "IcWqcQAAAA=="
+ },
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableGraphs"
+ }
+]
+```
+
+#### Find databases or graphs that can be restored at any given timestamp
+
+```azurecli-interactive
+
+az cosmosdb gremlin restorable-resource list \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
+ --location "West US" \
+ --restore-location "West US" \
+ --restore-timestamp "2021-01-10T01:00:00+0000"
+```
+```
+[ {
+ "databaseName": "db1",
+ "graphNames": [
+ "graph1",
+ "graph3",
+ "graph2"
+ ]
+ }
+]
+```
+
+### <a id="enumerate-table-api-cli"></a>Enumerate restorable resources for Table API account
+
+The enumeration commands described below help you discover the resources that are available for restore at various timestamps. They also provide a feed of key events on the restorable account and Table API resources. These commands only work for live accounts.
+
+#### List all the versions of tables in a live database account
+
+```azurecli-interactive
+az cosmosdb table restorable-table list \
+    --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
+ --location "West US"
+```
+```
+[ {
+ "id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/WestUS/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/59781d91-682b-4cc2-93a3-c25d03fab159",
+ "name": "59781d91-682b-4cc2-93a3-c25d03fab159",
+ "resource": {
+ "eventTimestamp": "2022-02-09T17:09:54Z",
+ "operationType": "Create",
+ "ownerId": "table1",
+ "ownerResourceId": "tOdDAKYiBhQ=",
+ "rid": "9pvDGwAAAA=="
+ },
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
+ },
+ {"id": "/subscriptions/23587e98-b6ac-4328-a753-03bcd3c8e744/providers/Microsoft.DocumentDB/locations/eastus2euap/restorableDatabaseAccounts/7e4d666a-c6ba-4e1f-a4b9-e92017c5e8df/restorableTables/2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
+ "name": "2c9f35eb-a14c-4ab5-a7e0-6326c4f6b785",
+ "resource": {
+ "eventTimestamp": "2022-02-09T20:47:53Z",
+ "operationType": "Create",
+ "ownerId": "table3",
+ "ownerResourceId": "tOdDALBwexw=",
+ "rid": "01DtkgAAAA=="
+ },
+ "type": "Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restorableTables"
+    }
+]
+```
+
+#### List all the resources of a Table API account that are available to restore at a given timestamp and region
+
+```azurecli-interactive
+az cosmosdb table restorable-resource list \
+ --instance-id "abcd1234-d1c0-4645-a699-abcd1234" \
+ --location "West US" \
+ --restore-location "West US" \
+ --restore-timestamp "2020-07-20T16:09:53+0000"
+```
+```
+{
+ "tableNames": [
+ "table1",
+ "table3",
+ "table2"
+ ]
+}
+```
+
+## <a id="restore-arm-template"></a>Restore using the Azure Resource Manager template
+
+You can also restore an account by using an Azure Resource Manager (ARM) template. When defining the template, include the following parameters:
+
+### Restore SQL API or MongoDB API account using ARM template
+
+1. Set the `createMode` parameter to *Restore*.
+1. Define the `restoreParameters` object. Note that the `restoreSource` value is extracted from the output of the `az cosmosdb restorable-database-account list` command for your source account; the instance ID attribute of your account is used to do the restore.
+1. Set the `restoreMode` parameter to *PointInTime* and configure the `restoreTimestampInUtc` value.
+
+Use the following ARM template to restore an account for the Azure Cosmos DB SQL API or MongoDB API. Examples for other APIs are provided next.
```json
{
    ...
}
```
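+The template body is elided in this digest. Based on the complete Gremlin and Table API examples below, a SQL API restore template likely takes the following shape; the subscription ID, instance ID, and the `databasesToRestore` property here are illustrative assumptions rather than values from the source:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "resources": [
+    {
+      "name": "ademo-pitr1",
+      "type": "Microsoft.DocumentDB/databaseAccounts",
+      "apiVersion": "2016-03-31",
+      "location": "West US",
+      "properties": {
+        "locations": [
+          {
+            "locationName": "West US"
+          }
+        ],
+        "backupPolicy": {
+          "type": "Continuous"
+        },
+        "databaseAccountOfferType": "Standard",
+        "createMode": "Restore",
+        "restoreParameters": {
+          "restoreSource": "/subscriptions/<subscription-id>/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/<instance-id>",
+          "restoreMode": "PointInTime",
+          "restoreTimestampInUtc": "2021-10-27T23:20:46Z",
+          "databasesToRestore": [
+            {
+              "databaseName": "db1",
+              "collectionNames": [ "collection1", "collection2" ]
+            }
+          ]
+        }
+      }
+    }
+  ]
+}
+```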
-Next deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
+### Restore Gremlin API account using ARM template
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "ademo-pitr1",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2016-03-31",
+ "location": "West US",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "West US"
+ }
+ ],
+ "backupPolicy": {
+ "type": "Continuous"
+ },
+ "databaseAccountOfferType": "Standard",
+ "createMode": "Restore",
+ "restoreParameters": {
+ "restoreSource": "/subscriptions/2296c272-5d55-40d9-bc05-4d56dc2d7588/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/5cb9d82e-ec71-430b-b977-cd6641db85bc",
+ "restoreMode": "PointInTime",
+ "restoreTimestampInUtc": "2021-10-27T23:20:46Z",
+ "gremlinDatabasesToRestore": {
+ "databaseName": "db1",
+ "graphNames": [
+ "graph1", "graph2"
+ ]
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+### Restore Table API account using ARM template
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "name": "ademo-pitr1",
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2016-03-31",
+ "location": "West US",
+ "properties": {
+ "locations": [
+ {
+ "locationName": "West US"
+ }
+ ],
+ "backupPolicy": {
+ "type": "Continuous"
+ },
+ "databaseAccountOfferType": "Standard",
+ "createMode": "Restore",
+ "restoreParameters": {
+ "restoreSource": "/subscriptions/1296c352-5d33-40d9-bc05-4d56dc2a7521/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/4bcb9d82e-ec71-430b-b977-cd6641db85ad",
+ "restoreMode": "PointInTime",
+ "restoreTimestampInUtc": "2022-04-13T10:20:46Z",
+ "tablesToRestore": [
+ "table1", "table2"
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+Next, deploy the template by using Azure PowerShell or Azure CLI. The following example shows how to deploy the template with an Azure CLI command:
```azurecli-interactive
az group deployment create -g <ResourceGroup> --template-file <RestoreTemplateFilePath>
```
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-java.md
This article walks through the best practices for using the Azure Cosmos DB Java
| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths `IndexingPolicy#getIncludedPaths()` and `IndexingPolicy#getExcludedPaths()`. Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit here](performance-tips-java-sdk-v4-sql.md#indexing-policy) | | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. | | <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, follow instructions on how to capture SQL Query Metrics using [Java SDK](troubleshoot-java-sdk-v4-sql.md#query-operations) |
-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [CosmosDiagnostics](/java/api/com.azure.cosmos.cosmosdiagnostics?view=azure-java-stable&preserve-view=true) in Java SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `CosmosDiagnostics#getDuration()` is greater than a designated threshold value (i.e. if you have an SLA of 10 seconds, then capture diagnostics when `getDuration()` > 10 seconds). It's advised to only use these diagnostics during performance testing. For more information, follow [capture diagnostics on Java SDK](/azure/cosmos-db/sql/troubleshoot-java-sdk-v4-sql#capture-the-diagnostics) |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [CosmosDiagnostics](/jav#capture-the-diagnostics) |
## Best practices when using Gateway mode

Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to tweak [maxConnectionPoolSize](/java/api/com.azure.cosmos.gatewayconnectionconfig.setmaxconnectionpoolsize?view=azure-java-stable#com-azure-cosmos-gatewayconnectionconfig-setmaxconnectionpoolsize(int)&preserve-view=true) to a different value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In Java v4 SDK, the default value for `GatewayConnectionConfig#maxConnectionPoolSize` is 1000. To change the value, you can set `GatewayConnectionConfig#maxConnectionPoolSize` to a different value.
To learn more about designing your application for scale and high performance, s
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
In this quickstart, you'll build a sample Go application that uses the Azure SDK
Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](/azure/cosmos-db/introduction).
+To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md).
## Prerequisites
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
-> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
+> [Import data into Azure Cosmos DB for the SQL API](../import-data.md)
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
If you're testing at high throughput levels, or at rates that are greater than 5
## <a id="metadata-operations"></a> Metadata operations
-Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests) that do not scale like data operations.
+Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that do not scale like data operations.
## <a id="logging-and-tracing"></a> Logging and tracing
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips.md
If you're testing at high throughput levels (more than 50,000 RU/s), the client
## <a id="metadata-operations"></a> Metadata operations
-Do not verify a Database and/or Collection exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests) that do not scale like data operations.
+Do not verify a Database and/or Collection exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests) that do not scale like data operations.
## <a id="logging-and-tracing"></a> Logging and tracing
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-changefeed-functions.md
description: Common issues, workarounds, and diagnostic steps, when using the Az
Previously updated : 03/28/2022 Last updated : 04/14/2022
The previous versions of the Azure Cosmos DB Extension did not support using a l
This error means that you are currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you are currently running on Azure Functions V1, you will need to upgrade to Azure Functions V2.
+### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] cannot be authorized by AAD token in data plane"
+
+This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You cannot use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+
### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."

This error means that your current leases container is partitioned, but the partition key path is not `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Consider the following when developing your application:
## Metadata operations
-If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. The validation should only be done on application startup when it's necessary, if you expect them to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests). They don't scale like data operations.
+If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. The validation should only be done on application startup when it's necessary, if you expect them to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](./troubleshoot-request-rate-too-large.md#rate-limiting-on-metadata-requests). They don't scale like data operations.
## Slow requests on bulk mode
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-forbidden.md
description: Learn how to diagnose and fix forbidden exceptions.
Previously updated : 10/06/2021 Last updated : 04/14/2022
The HTTP status code 403 indicates that the request is forbidden to complete.
## Firewall blocking requests
-Data plane requests can come to Cosmos DB via the following 3 paths.
+Data plane requests can come to Cosmos DB via the following three paths.
- Public internet (IPv4)
- Service endpoint
- Private endpoint
-When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above 3 paths the request came to Cosmos DB.
+When a data plane request is blocked with 403 Forbidden, the error message will specify via which of the above three paths the request came to Cosmos DB.
- `Request originated from client IP {...} through public internet.`
- `Request originated from client VNET through service endpoint.`
Partition key reached maximum size of {...} GB
This error means that your current [partitioning design](../partitioning-overview.md#logical-partitions) and workload is trying to store more than the allowed amount of data for a given partition key value. There is no limit to the number of logical partitions in your container, but the size of data each logical partition can store is limited. You can reach out to support for clarification.

## Non-data operations are not allowed
-This scenario happens when non-data [operations are disallowed in the account](../how-to-setup-rbac.md#permission-model). On this scenario, it's common to see errors like the ones below:
+This scenario happens when [attempting to perform non-data operations](../how-to-setup-rbac.md#permission-model) using Azure Active Directory (Azure AD) identities. In this scenario, it's common to see errors like the following:
```
Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
Forbidden (403); Substatus: 5300; The given request [PUT ...] cannot be authoriz
```

### Solution
-Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell. Or reallow execution of non-data operations.
+Perform the operation through Azure Resource Manager, Azure portal, Azure CLI, or Azure PowerShell.
+If you are using the [Azure Functions Cosmos DB Trigger](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md), make sure the `CreateLeaseContainerIfNotExists` property of the trigger isn't set to `true`. Using Azure AD identities blocks any non-data operation, such as creating the lease container.
## Next steps

* Configure [IP Firewall](../how-to-configure-firewall.md).
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
public class UpdateWeatherObject
}
```
-In the sample app, this object is passed to the `UpdateEntity` method in the `TableService` class. This method first loads the existing entity from the Table API using the [GetEntity](/dotnet/api/azure.data.tables.tableclient.getentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient). It then updates that entity object and uses the `UpdateEntity` method save the updates to the database. Note how the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method takes the current Etag of the object to insure the object has not changed since it was initially loaded. If you want to update the entity regardless, you may pass a value of `Etag.Any` to the `UpdateEntity` method.
+In the sample app, this object is passed to the `UpdateEntity` method in the `TableService` class. This method first loads the existing entity from the Table API using the [GetEntity](/dotnet/api/azure.data.tables.tableclient.getentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient). It then updates that entity object and uses the `UpdateEntity` method to save the updates to the database. Note how the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method takes the current ETag of the object to ensure the object has not changed since it was initially loaded. If you want to update the entity regardless, you may pass a value of `ETag.All` to the `UpdateEntity` method.
```csharp
public void UpdateEntity(UpdateWeatherObject weatherObject)
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
Title: Use the Azure Tables client library for Python
-description: Store structured data in the cloud using the Azure Tables client library for Python.
+ Title: 'Quickstart: Table API with Python - Azure Cosmos DB'
+description: This quickstart shows how to access the Azure Cosmos DB Table API from a Python application using the Azure Data Tables SDK
+ ms.devlang: python
+ Last updated 03/23/2021
-# Get started with Azure Tables client library using Python
+
+# Quickstart: Build a Table API app with Python SDK and Azure Cosmos DB
+ [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
+This quickstart shows how to access the Azure Cosmos DB [Table API](https://docs.microsoft.com/azure/cosmos-db/table/introduction) from a Python application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table. Python applications can access the Cosmos DB Table API using the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) package.
+
+## Prerequisites
-The Azure Table storage and the Azure Cosmos DB are services that store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Table storage and Azure Cosmos DB are schemaless, it's easy to adapt your data as the needs of your application evolve. Access to the table storage and table API data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
+The sample application is written in [Python 3.6](https://www.python.org/downloads/), though the principles apply to all Python 3.6+ applications. You can use [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
-You can use the Table storage or the Azure Cosmos DB to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
+If you don't have an [Azure subscription](https://docs.microsoft.com/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
-### About this sample
+## Sample application
-This sample shows you how to use the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) in common Azure Table storage scenarios. The name of the SDK indicates it is for use with Azure Tables storage, but it works with both Azure Cosmos DB and Azure Tables storage, each service just has a unique endpoint. These scenarios are explored using Python examples that illustrate how to:
+The sample application for this tutorial may be cloned or downloaded from the repository https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask. Both a starter and completed app are included in the sample repository.
-* Create and delete tables
-* Insert and query entities
-* Modify entities
+```bash
+git clone https://github.com/Azure-Samples/msdocs-azure-tables-sdk-python-flask.git
+```
-While working through the scenarios in this sample, you may want to refer to the [Azure Data Tables SDK for Python API reference](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables).
+The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
-## Prerequisites
-You need the following to complete this sample successfully:
+## 1 - Create an Azure Cosmos DB account
-* [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
-* [Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). This SDK connects with both Azure Table storage and the Azure Cosmos DB Table API.
-* [Azure Storage account](../../storage/common/storage-account-create.md) or [Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/).
+You first need to create a Cosmos DB Table API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-## Create an Azure service account
+### [Azure portal](#tab/azure-portal)
+Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create a Cosmos DB account.
-**Create an Azure storage account**
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-python/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db account step 2](./includes/create-table-python/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db account step 3](./includes/create-table-python/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
+| [!INCLUDE [Create cosmos db account step 4](./includes/create-table-python/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
+### [Azure CLI](#tab/azure-cli)
-**Create an Azure Cosmos DB Table API account**
+Cosmos DB accounts are created using the [az cosmosdb create](https://docs.microsoft.com/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
+Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-## Install the Azure Data Tables SDK for Python
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com/) or on a workstation with the [Azure CLI installed](https://docs.microsoft.com/cli/azure/install-azure-cli).
-After you've created a Storage account, your next step is to install the [Microsoft Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). For details on installing the SDK, refer to the [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/tables/azure-data-tables/README.md) file in the Data Tables SDK for Python repository on GitHub.
+It typically takes several minutes for the Cosmos DB account creation process to complete.
-## Import the TableServiceClient and TableEntity classes
+```azurecli
+LOCATION='eastus'
+RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
+COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
+COSMOS_TABLE_NAME='WeatherData'
-To work with entities in the Azure Data Tables service in Python, you use the `TableServiceClient` and `TableEntity` classes. Add this code near the top your Python file to import both:
+az group create \
+ --location $LOCATION \
+ --name $RESOURCE_GROUP_NAME
-```python
-from azure.data.tables import TableServiceClient
-from azure.data.tables import TableEntity
+az cosmosdb create \
+ --name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --capabilities EnableTable
```
-## Connect to Azure Table service
-You can either connect to the Azure Storage account or the Azure Cosmos DB Table API account. Get the shared key or connection string based on the type of account you are using.
+### [Azure PowerShell](#tab/azure-powershell)
-### Creating the Table service client from a shared key
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-Create a `TableServiceClient` object, and pass in your Cosmos DB or Storage account name, account key and table endpoint. Replace `myaccount`, `mykey` and `mytableendpoint` with your Cosmos DB or Storage account name, key and table endpoint.
+Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-```python
-from azure.core.credentials import AzureNamedKeyCredential
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](https://docs.microsoft.com/powershell/azure/install-az-ps).
+
+It typically takes several minutes for the Cosmos DB account creation process to complete.
+
+```azurepowershell
+$location = 'eastus'
+$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
+$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-credential = AzureNamedKeyCredential("myaccount", "mykey")
-table_service = TableServiceClient(endpoint="mytableendpoint", credential=credential)
+# Create a resource group
+New-AzResourceGroup `
+ -Location $location `
+ -Name $resourceGroupName
+
+# Create an Azure Cosmos DB
+New-AzCosmosDBAccount `
+ -Name $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName `
+ -Location $location `
+ -ApiKind "Table"
```
-### Creating the Table service client from a connection string
++
+## 2 - Create a table
-Copy your Cosmos DB or Storage account connection string from the Azure portal, and create a `TableServiceClient` object using your copied connection string:
+Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
-```python
-table_service = TableServiceClient.from_connection_string(conn_str='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;TableEndpoint=mytableendpoint;')
+### [Azure portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-python/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-1.png"::: |
+| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-python/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-2.png"::: |
+| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-python/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing the New Table dialog box for a Cosmos DB table." lightbox="./media/create-table-python/azure-portal-create-cosmos-db-table-api-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+Tables in Cosmos DB are created using the [az cosmosdb table create](https://docs.microsoft.com/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+
+```azurecli
+COSMOS_TABLE_NAME='WeatherData'
+
+az cosmosdb table create \
+ --account-name $COSMOS_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_TABLE_NAME \
+ --throughput 400
```
-## Create a table
+### [Azure PowerShell](#tab/azure-powershell)
-Call `create_table` to create the table.
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-```python
-table_service.create_table('tasktable')
+```azurepowershell
+$cosmosTableName = 'WeatherData'
+
+# Create the table for the application to use
+New-AzCosmosDBTable `
+ -Name $cosmosTableName `
+ -AccountName $cosmosAccountName `
+ -ResourceGroupName $resourceGroupName
```
-## Add an entity to a table
+
-Create a table in your account and get a `TableClient` to perform operations on the newly created table. To add an entity, you first create an object that represents your entity, then pass the object to the `TableClient.create_entity` method. The entity object can be a dictionary or an object of type `TableEntity`, and defines your entity's property names and values. Every entity must include the required [PartitionKey and RowKey](#partitionkey-and-rowkey) properties, in addition to any other properties you define for the entity.
+## 3 - Get Cosmos DB connection string
-This example creates a dictionary object representing an entity, then passes it to the `create_entity` method to add it to the table:
+To access your table(s) in Cosmos DB, your app will need the table connection string for the Cosmos DB account. The connection string can be retrieved using the Azure portal, Azure CLI, or Azure PowerShell.
-```python
-table_client = table_service.get_table_client(table_name="tasktable")
-task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
- u'description': u'Take out the trash', u'priority': 200}
-table_client.create_entity(entity=task)
+### [Azure portal](#tab/azure-portal)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-python/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-1.png"::: |
+| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-python/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing which connection string to select and use in your application." lightbox="./media/create-table-python/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To get the primary connection string using Azure CLI, use the [az cosmosdb keys list](https://docs.microsoft.com/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+
+```azurecli
+# This gets the primary connection string
+az cosmosdb keys list \
+ --type connection-strings \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $COSMOS_ACCOUNT_NAME \
+ --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
+ --output tsv
```
-This example creates an `TableEntity` object, then passes it to the `create_entity` method to add it to the table:
+### [Azure PowerShell](#tab/azure-powershell)
-```python
-task = TableEntity()
-task[u'PartitionKey'] = u'tasksSeattle'
-task[u'RowKey'] = u'002'
-task[u'description'] = u'Wash the car'
-task[u'priority'] = 100
-table_client.create_entity(task)
+To get the primary connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](https://docs.microsoft.com/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+
+```azurepowershell
+# This gets the primary connection string
+$(Get-AzCosmosDBAccountKey `
+ -ResourceGroupName $resourceGroupName `
+ -Name $cosmosAccountName `
+ -Type "ConnectionStrings")."Primary Table Connection String"
+```
+
+The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password.
+++
+## 4 - Install the Azure Data Tables SDK for Python
+
+After you've created a Cosmos DB account, your next step is to install the Microsoft [Azure Data Tables SDK for Python](https://pypi.python.org/pypi/azure-data-tables/). For details on installing the SDK, refer to the [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/tables/azure-data-tables/README.md) file in the Data Tables SDK for Python repository on GitHub.
+
+Install the Azure Tables client library for Python with pip:
+
+```bash
+pip install azure-data-tables
```
-### PartitionKey and RowKey
+
-You must specify both a **PartitionKey** and a **RowKey** property for every entity. These are the unique identifiers of your entities, as together they form the primary key of an entity. You can query using these values much faster than you can query any other entity properties because only these properties are indexed.
+## 5 - Configure the Table client in .env file
-The Table service uses **PartitionKey** to intelligently distribute table entities across storage nodes. Entities that have the same **PartitionKey** are stored on the same node. **RowKey** is the unique ID of the entity within the partition it belongs to.
+Copy your Azure Cosmos DB account connection string from the Azure portal, and create a `TableServiceClient` object using your copied connection string. Switch to the folder `1-starter-app` or `2-completed-app`. Then, add the values of the corresponding environment variables in the `.env` file.
-## Update an entity
+```python
+# Configuration Parameters
+conn_str = "A connection string to an Azure Cosmos account."
+table_name = "WeatherData"
+project_root_path = "Project abs path"
+```
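+
+How the sample loads these values isn't shown here; a minimal sketch, assuming the `python-dotenv` package is used to read the `.env` file (an assumption; the repository may read the variables differently):
+
+```python
+import os
+
+from dotenv import load_dotenv  # assumption: python-dotenv is installed
+
+# Load variables from the .env file into the process environment.
+load_dotenv()
+
+conn_str = os.getenv("conn_str")
+table_name = os.getenv("table_name")
+```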
-To update all of an entity's property values, call the `update_entity` method. This example shows how to replace an existing entity with an updated version:
+The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The `TableServiceClient` object is the object used to communicate with the Cosmos DB Table API. An application will typically have a single `TableServiceClient` overall, and it will have a `TableClient` per table.
```python
-task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
- u'description': u'Take out the garbage', u'priority': 250}
-table_client.update_entity(task)
+self.conn_str = os.getenv("AZURE_CONNECTION_STRING")
+self.table_service = TableServiceClient.from_connection_string(self.conn_str)
```
-If the entity that is being updated doesn't already exist, then the update operation will fail. If you want to store an entity whether it exists or not, use `upsert_entity`. In the following example, the first call will replace the existing entity. The second call will insert a new entity, since no entity with the specified PartitionKey and RowKey exists in the table.
++
+## 6 - Implement Cosmos DB table operations
+
+All Cosmos DB table operations for the sample app are implemented in the `TableServiceHelper` class located in the *helper* file under the *webapp* directory. You will need to import the `TableServiceClient` class at the top of this file to work with objects in the `azure.data.tables` SDK package.
```python
-# Replace the entity created earlier
-task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'001',
- u'description': u'Take out the garbage again', u'priority': 250}
-table_client.upsert_entity(task)
+from azure.data.tables import TableServiceClient
+```
+
+At the start of the `TableServiceHelper` class, create a constructor and add a member variable for the `TableClient` object to allow the `TableClient` object to be injected into the class.
-# Insert a new entity
-task = {u'PartitionKey': u'tasksSeattle', u'RowKey': u'003',
- u'description': u'Buy detergent', u'priority': 300}
-table_client.upsert_entity(task)
+```python
+def __init__(self, table_name=None, conn_str=None):
+    # Fall back to environment variables when values aren't supplied.
+    self.table_name = table_name if table_name else os.getenv("table_name")
+    self.conn_str = conn_str if conn_str else os.getenv("conn_str")
+    # One TableServiceClient per account; one TableClient per table.
+    self.table_service = TableServiceClient.from_connection_string(self.conn_str)
+    self.table_client = self.table_service.get_table_client(self.table_name)
```
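+
+A hypothetical instantiation of this helper (the names here are illustrative; both arguments fall back to environment variables when omitted):
+
+```python
+import os
+
+# Explicit values override the environment-variable fallbacks.
+helper = TableServiceHelper(table_name="WeatherData", conn_str=os.getenv("conn_str"))
+```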
-> [!TIP]
-> The **mode=UpdateMode.REPLACE** parameter in `update_entity` method replaces all properties and values of an existing entity, which you can also use to remove properties from an existing entity. The **mode=UpdateMode.MERGE** parameter is used by default to update an existing entity with new or modified property values without completely replacing the entity.
+### Filter rows returned from a table
+
+To filter the rows returned from a table, you can pass an OData style filter string to the `query_entities` method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive) you would pass in the following filter string.
-## Modify multiple entities
+```odata
+PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
+```
-To ensure the atomic processing of a request by the Table service, you can submit multiple operations together in a batch. First, add multiple operations to a list. Next, call `Table_client.submit_transaction` to submit the operations in an atomic operation. All entities to be modified in batch must be in the same partition.
+You can view related OData filter operators on the azure-data-tables website in the section [Writing Filters](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables/samples#writing-filters).
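+
+As an alternative to concatenating values by hand, `query_entities` also accepts a parameterized filter, which avoids quoting mistakes; a minimal sketch (the placeholder names and values are illustrative):
+
+```python
+# The SDK substitutes the @-prefixed placeholders with the supplied values.
+entities = table_client.query_entities(
+    query_filter="PartitionKey eq @station and RowKey ge @start and RowKey le @end",
+    parameters={
+        "station": "Chicago",
+        "start": "2021-07-01 12:00 AM",
+        "end": "2021-07-02 12:00 AM",
+    },
+)
+for entity in entities:
+    print(entity["RowKey"])
+```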
-This example adds two entities together in a batch:
+When the request.args parameter is passed to the `query_entity` method in the `TableServiceHelper` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the `query_entities` method on the `TableClient` object, and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
```python
-task004 = {u'PartitionKey': u'tasksSeattle', u'RowKey': '004',
- 'description': u'Go grocery shopping', u'priority': 400}
-task005 = {u'PartitionKey': u'tasksSeattle', u'RowKey': '005',
- u'description': u'Clean the bathroom', u'priority': 100}
-operations = [("create", task004), ("create", task005)]
-table_client.submit_transaction(operations)
+def query_entity(self, params):
+ filters = []
+ if params.get("partitionKey"):
+ filters.append("PartitionKey eq '{}'".format(params.get("partitionKey")))
+ if params.get("rowKeyDateStart") and params.get("rowKeyTimeStart"):
+ filters.append("RowKey ge '{} {}'".format(params.get("rowKeyDateStart"), params.get("rowKeyTimeStart")))
+ if params.get("rowKeyDateEnd") and params.get("rowKeyTimeEnd"):
+ filters.append("RowKey le '{} {}'".format(params.get("rowKeyDateEnd"), params.get("rowKeyTimeEnd")))
+ if params.get("minTemperature"):
+ filters.append("Temperature ge {}".format(params.get("minTemperature")))
+ if params.get("maxTemperature"):
+ filters.append("Temperature le {}".format(params.get("maxTemperature")))
+ if params.get("minPrecipitation"):
+ filters.append("Precipitation ge {}".format(params.get("minPrecipitation")))
+ if params.get("maxPrecipitation"):
+ filters.append("Precipitation le {}".format(params.get("maxPrecipitation")))
+ return list(self.table_client.query_entities(" and ".join(filters)))
```
-## Query for an entity
+### Insert data using a TableEntity object
-To query for an entity in a table, pass its PartitionKey and RowKey to the `Table_client.get_entity` method.
+The simplest way to add data to a table is by using a `TableEntity` object. In this example, data is mapped from an input model object to a `TableEntity` object. The properties on the input object representing the weather station name and observation date/time are mapped to the `PartitionKey` and `RowKey` properties respectively which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the TableEntity object. Finally, the `create_entity` method on the `TableClient` object is used to insert data into the table.
+
+Modify the `insert_entity` function in the example application to contain the following code.
```python
-task = table_client.get_entity('tasksSeattle', '001')
-print(task['description'])
-print(task['priority'])
+def insert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.create_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
```
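+
+For example, with hypothetical form values, `deserialize` folds the station name and observation date/time into the key properties and passes every other field through unchanged:
+
+```python
+# Hypothetical form input:
+#   StationName="Chicago", ObservationDate="2021-07-01",
+#   ObservationTime="12:00 AM", Temperature="87"
+# deserialize() would return this entity dictionary:
+entity = {
+    "PartitionKey": "Chicago",        # from StationName
+    "RowKey": "2021-07-01 12:00 AM",  # ObservationDate + ObservationTime
+    "Temperature": "87",              # remaining fields pass through as-is
+}
+```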
-## Query a set of entities
+### Upsert data using a TableEntity object
-You can query for a set of entities by supplying a filter string with the **query_filter** parameter. This example finds all tasks in Seattle by applying a filter on PartitionKey:
+If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the `upsert_entity` method instead of the `create_entity` method when adding rows to a table. If the given partition key/row key combination already exists in the table, the `upsert_entity` method will update the existing row. Otherwise, the row will be added to the table.
```python
-tasks = table_client.query_entities(query_filter="PartitionKey eq 'tasksSeattle'")
-for task in tasks:
- print(task['description'])
- print(task['priority'])
+def upsert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.upsert_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
```
-## Query a subset of entity properties
+### Insert or upsert data with variable properties
+
+One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties then those properties are automatically added to the table and the values stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
-You can also restrict which properties are returned for each entity in a query. This technique, called *projection*, reduces bandwidth and can improve query performance, especially for large entities or result sets. Use the **select** parameter and pass the names of the properties you want returned to the client.
+This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
-The query in the following code returns only the descriptions of entities in the table.
+To insert or upsert such an object using the Table API, map the properties of the expandable object into a `TableEntity` object and use the `create_entity` or `upsert_entity` methods on the `TableClient` object as appropriate.
-> [!NOTE]
-> The following snippet works only against the Azure Storage. It is not supported by the Storage Emulator.
+In the sample application, the `upsert_entity` function can also insert or upsert data with variable properties:
```python
-tasks = table_client.query_entities(
- query_filter="PartitionKey eq 'tasksSeattle'", select='description')
-for task in tasks:
- print(task['description'])
+def insert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.create_entity(entity)
+
+def upsert_entity(self):
+ entity = self.deserialize()
+ return self.table_client.upsert_entity(entity)
+
+@staticmethod
+def deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = "{} {}".format(params.pop("ObservationDate"), params.pop("ObservationTime"))
+ return params
```
-## Query for an entity without partition and row keys
+### Update an entity
-You can also list entities within a table without using the partition and row keys. Use the `table_client.list_entities` method as show in the following example:
+Entities can be updated by calling the `update_entity` method on the `TableClient` object.
+
+In the sample app, this object is passed to the `update_entity` method in the `TableServiceHelper` class. That method uses the `TableClient` object's `update_entity` method to save the changes to the database.
```python
-print("Get the first item from the table")
-tasks = table_client.list_entities()
-lst = list(tasks)
-print(lst[0])
+def update_entity(self):
+ entity = self.update_deserialize()
+ return self.table_client.update_entity(entity)
+
+@staticmethod
+def update_deserialize():
+ params = {key: request.form.get(key) for key in request.form.keys()}
+ params["PartitionKey"] = params.pop("StationName")
+ params["RowKey"] = params.pop("ObservationDate")
+ return params
```
-## Delete an entity
-
-Delete an entity by passing its **PartitionKey** and **RowKey** to the `delete_entity` method.
+### Remove an entity
+To remove an entity from a table, call the `delete_entity` method on the `TableClient` object with the partition key and row key of the object.
+
```python
-table_client.delete_entity('tasksSeattle', '001')
+def delete_entity(self):
+ partition_key = request.form.get("StationName")
+ row_key = request.form.get("ObservationDate")
+ return self.table_client.delete_entity(partition_key, row_key)
```
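+
+Outside the Flask helper, the same deletion can be made directly against the SDK; a one-line sketch with hypothetical key values:
+
+```python
+# Delete a single row identified by its partition key and row key.
+table_client.delete_entity(partition_key="Chicago", row_key="2021-07-01 12:00 AM")
+```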
-## Delete a table
+## 7 - Run the code
-If you no longer need a table or any of the entities within it, call the `delete_table` method to permanently delete the table from Azure Storage.
+Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
-```python
-table_service.delete_table('tasktable')
+
+Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
++
+Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
++
+Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
++
+Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
++
+## Clean up resources
+
+When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
+
+### [Azure portal](#tab/azure-portal)
+
+A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Delete resource group step 1](./includes/create-table-python/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-python/azure-portal-remove-resource-group-1.png"::: |
+| [!INCLUDE [Delete resource group step 2](./includes/create-table-python/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-python/azure-portal-remove-resource-group-2.png"::: |
+| [!INCLUDE [Delete resource group step 3](./includes/create-table-python/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-python/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-python/azure-portal-remove-resource-group-3.png"::: |
+
+### [Azure CLI](#tab/azure-cli)
+
+To delete a resource group using the Azure CLI, use the [az group delete](https://docs.microsoft.com/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurecli
+az group delete --name $RESOURCE_GROUP_NAME
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](https://docs.microsoft.com/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+
+```azurepowershell
+Remove-AzResourceGroup -Name $resourceGroupName
```
++

## Next steps
-* [FAQ - Develop with the Table API](table-api-faq.yml)
-* [Azure Data Tables SDK for Python API reference](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/tables/azure-data-tables)
-* [Python Developer Center](https://azure.microsoft.com/develop/python/)
-* [Microsoft Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md): A free, cross-platform application for working visually with Azure Storage data on Windows, macOS, and Linux.
-* [Working with Python in Visual Studio (Windows)](/visualstudio/python/overview-of-python-tools-for-visual-studio)
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API.
+> [!div class="nextstepaction"]
+> [Import table data to the Table API](https://docs.microsoft.com/azure/cosmos-db/table/table-import)
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 02/17/2022 Last updated : 04/15/2022
If you pay for Azure with a credit card and you buy reservation, Azure generates
An invoice is only generated for a subscription that belongs to a billing account for an MOSP. [Check your access to an MOSP account](../manage/view-all-accounts.md#check-the-type-of-your-account).
-You must have an account admin role for a subscription to download its invoice. Users with owner, contributor, or reader roles can download its invoice, if the account admin has given them permission. For more information, see [Allow users to download invoices](../manage/manage-billing-access.md#opt-in).
+You must have an *account admin* role for a subscription to download its invoice. Users with owner, contributor, or reader roles can download its invoice, if the account admin has given them permission. For more information, see [Allow users to download invoices](../manage/manage-billing-access.md#opt-in).
+
+Azure Government customers can't request their invoice by email. They can only download it.
1. Select your subscription from the [Subscriptions page](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal.
1. Select **Invoices** from the billing section.
data-catalog Data Catalog Adopting Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-adopting-data-catalog.md
Last updated 02/17/2022
# Approach and process for adopting Azure Data Catalog This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
data-catalog Data Catalog Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-common-scenarios.md
Last updated 02/22/2022
# Azure Data Catalog common scenarios This article presents common scenarios where Azure Data Catalog can help your organization get more value from its existing data sources.
data-catalog Data Catalog Developer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-developer-concepts.md
Last updated 02/16/2022
# Azure Data Catalog developer concepts Microsoft **Azure Data Catalog** is a fully managed cloud service that provides capabilities for data source discovery and for crowdsourcing data source metadata. Developers can use the service via its REST APIs. Understanding the concepts implemented in the service is important for developers to successfully integrate with **Azure Data Catalog**.
data-catalog Data Catalog Dsr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-dsr.md
Last updated 02/24/2022
# Supported data sources in Azure Data Catalog You can publish metadata by using a public API or a click-once registration tool, or by manually entering information directly to the Azure Data Catalog web portal. The following table summarizes all data sources that are supported by the catalog today, and the publishing capabilities for each. Also listed are the external data tools that each data source can launch from our portal "open-in" experience. The second table contains a more technical specification of each data-source connection property.
data-catalog Data Catalog Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-get-started.md
# Quickstart: Create an Azure Data Catalog via the Azure portal Azure Data Catalog is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data assets. For a detailed overview, see [What is Azure Data Catalog](overview.md).
data-catalog Data Catalog How To Annotate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-annotate.md
Last updated 02/18/2022
# How to annotate data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-big-data.md
Last updated 02/14/2022
# How to catalog big data in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-business-glossary.md
Last updated 02/23/2022
# Set up the business glossary for governed tagging ## Introduction
data-catalog Data Catalog How To Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-connect.md
Last updated 02/22/2022
# How to connect to data sources ## Introduction
data-catalog Data Catalog How To Data Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-data-profile.md
Last updated 02/18/2022
# How to data profile data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Discover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-discover.md
Last updated 02/24/2022
# How to discover data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-documentation.md
Last updated 02/17/2022
# How to document data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-manage.md
Last updated 02/15/2022
# Manage data assets in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-register.md
Last updated 02/25/2022
# Register data sources in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Save Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-save-pin.md
Last updated 02/10/2022
# Save searches and pin data assets in Azure Data Catalog ## Introduction
data-catalog Data Catalog How To Secure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-secure-catalog.md
Last updated 02/14/2022
# How to secure access to data catalog and data assets > [!IMPORTANT] > This feature is available only in the standard edition of Azure Data Catalog.
data-catalog Data Catalog How To View Related Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-view-related-data-assets.md
Last updated 02/11/2022
# How to view related data assets in Azure Data Catalog Azure Data Catalog allows you to view data assets that are related to a selected data asset, and see the relationships between them.
data-catalog Data Catalog Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-keyboard-shortcuts.md
Last updated 02/11/2022
# Keyboard shortcuts for Azure Data Catalog ## Keyboard shortcuts for the Data Catalog data source registration tool
data-catalog Data Catalog Migration To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-migration-to-azure-purview.md
Title: Migrate from Azure Data Catalog to Azure Purview
-description: Steps to migrate from Azure Data Catalog to Microsoft's unified data governance service--Azure Purview.
+ Title: Migrate from Azure Data Catalog to Microsoft Purview
+description: Steps to migrate from Azure Data Catalog to Microsoft's unified data governance service--Microsoft Purview.
Last updated 01/24/2022
-#Customer intent: As an Azure Data Catalog user, I want to know why and how to migrate to Azure Purview so that I can use the best tools to manage my data.
+#Customer intent: As an Azure Data Catalog user, I want to know why and how to migrate to Microsoft Purview so that I can use the best tools to manage my data.
-# Migrate from Azure Data Catalog to Azure Purview
+# Migrate from Azure Data Catalog to Microsoft Purview
-Microsoft launched a unified data governance service to help manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Azure Purview creates a map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Azure Purview enables data curators to manage and secure their data estate and empowers data consumers to find valuable, trustworthy data.
+Microsoft launched a unified data governance service to help manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Microsoft Purview creates a map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Microsoft Purview enables data curators to manage and secure their data estate and empowers data consumers to find valuable, trustworthy data.
-The document shows you how to do the migration from Azure Data Catalog to Azure Purview.
+This document shows you how to migrate from Azure Data Catalog to Microsoft Purview.
## Recommended approach
-To migrate from Azure Data Catalog to Azure Purview, we recommend the following approach:
+To migrate from Azure Data Catalog to Microsoft Purview, we recommend the following approach:
:heavy_check_mark: Step 1: [Assess readiness](#assess-readiness)

:heavy_check_mark: Step 2: [Prepare to migrate](#prepare-to-migrate)
-:heavy_check_mark: Step 3: [Migrate to Azure Purview](#migrate-to-azure-purview)
+:heavy_check_mark: Step 3: [Migrate to Microsoft Purview](#migrate-to-microsoft-purview)
-:heavy_check_mark: Step 4: [Cutover from Azure Data Catalog to Azure Purview](#cutover-from-azure-data-catalog-to-azure-purview)
+:heavy_check_mark: Step 4: [Cutover from Azure Data Catalog to Microsoft Purview](#cutover-from-azure-data-catalog-to-microsoft-purview)
> [!NOTE]
-> Azure Data Catalog and Azure Purview are different services, so there is no in-place upgrade experience. Intentional migration effort required.
+> Azure Data Catalog and Microsoft Purview are different services, so there is no in-place upgrade experience; an intentional migration effort is required.
## Assess readiness
-Look at [Azure Purview](https://azure.microsoft.com/services/purview/) and understand key differences of Azure Data Catalog and Azure Purview.
+Look at [Microsoft Purview](https://azure.microsoft.com/services/purview/) and understand the key differences between Azure Data Catalog and Microsoft Purview.
-||Azure Data Catalog |Azure Purview |
+||Azure Data Catalog |Microsoft Purview |
||||
|**Pricing** |[User based model](https://azure.microsoft.com/pricing/details/data-catalog/) |[Pay-As-You-Go model](https://azure.microsoft.com/pricing/details/azure-purview/) |
|**Platform** |[Data catalog](overview.md) |[Unified governance platform for data discoverability, classification, lineage, and governance.](../purview/purview-connector-overview.md) |
|**Extensibility** |N/A |[Extensible on Apache Atlas](../purview/tutorial-purview-tools.md)|
|**SDK/PowerShell support** |N/A |[Supports REST APIs](/rest/api/purview/) |
-Compare [Azure Data Catalog supported sources](data-catalog-dsr.md) and [Azure Purview supported sources](../purview/purview-connector-overview.md), to confirm you can support your data landscape.
+Compare [Azure Data Catalog supported sources](data-catalog-dsr.md) and [Microsoft Purview supported sources](../purview/purview-connector-overview.md) to confirm you can support your data landscape.
## Prepare to migrate

1. Identify data sources that you'll migrate.
- Take this opportunity to identify logical and business connections between your data sources and assets. Azure Purview will allow you to create a map of your data landscape that reflects how your data is used and discovered in your organization.
-1. Review [Azure Purview best practices for deployment and architecture](../purview/deployment-best-practices.md) to develop a deployment strategy for Azure Purview.
+ Take this opportunity to identify logical and business connections between your data sources and assets. Microsoft Purview will allow you to create a map of your data landscape that reflects how your data is used and discovered in your organization.
+1. Review [Microsoft Purview best practices for deployment and architecture](../purview/deployment-best-practices.md) to develop a deployment strategy for Microsoft Purview.
1. Determine the impact that a migration will have on your business. For example: how will Azure Data Catalog be used until the transition is complete?
1. Create a migration plan.
-## Migrate to Azure Purview
+## Migrate to Microsoft Purview
-Manually migrate your data from Azure Data Catalog to Azure Purview.
+Manually migrate your data from Azure Data Catalog to Microsoft Purview.
-[Create an Azure Purview account](../purview/create-catalog-portal.md), [create collections](../purview/create-catalog-portal.md) in your data map, set up [permissions for your users](../purview/catalog-permissions.md), and onboard your data sources.
+[Create a Microsoft Purview account](../purview/create-catalog-portal.md), [create collections](../purview/create-catalog-portal.md) in your data map, set up [permissions for your users](../purview/catalog-permissions.md), and onboard your data sources.
-We suggest you review the Azure Purview best practices documentation before deploying your Azure Purview account, so you can deploy the best environment for your data landscape.
+We suggest you review the Microsoft Purview best practices documentation before deploying your Microsoft Purview account, so you can deploy the best environment for your data landscape.
Here's a selection of articles that may help you get started:
-- [Azure Purview security best practices](../purview/concept-best-practices-security.md)
+- [Microsoft Purview security best practices](../purview/concept-best-practices-security.md)
- [Accounts architecture best practices](../purview/concept-best-practices-accounts.md)
- [Collections architectures best practices](../purview/concept-best-practices-collections.md)
- [Create a collection](../purview/quickstart-create-collection.md)
-- [Import Azure sources to Azure Purview at scale](../purview/tutorial-data-sources-readiness.md)
+- [Import Azure sources to Microsoft Purview at scale](../purview/tutorial-data-sources-readiness.md)
- [Tutorial: Onboard an on-premises SQL Server instance](../purview/tutorial-register-scan-on-premises-sql-server.md)
-## Cutover from Azure Data Catalog to Azure Purview
+## Cutover from Azure Data Catalog to Microsoft Purview
-After the business has begun to use Azure Purview, cutover from Azure Data Catalog by deleting the Azure Data Catalog.
+After the business has begun to use Microsoft Purview, cutover from Azure Data Catalog by deleting the Azure Data Catalog.
## Next steps
-- Learn how [Azure Purview's data insights](../purview/concept-insights.md) can provide you up-to-date information on your data landscape.
-- Learn how [Azure Purview integrations with Azure security products](../purview/how-to-integrate-with-azure-security-products.md) to bring even more security to your data landscape.
-- Discover how [sensitivity labels in Azure Purview](../purview/create-sensitivity-label.md) help detect and protect your sensitive information.
+- Learn how [Microsoft Purview's data insights](../purview/concept-insights.md) can provide you with up-to-date information on your data landscape.
+- Learn how [Microsoft Purview integrates with Azure security products](../purview/how-to-integrate-with-azure-security-products.md) to bring even more security to your data landscape.
+- Discover how [sensitivity labels in Microsoft Purview](../purview/create-sensitivity-label.md) help detect and protect your sensitive information.
data-catalog Data Catalog Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-samples.md
Last updated 02/16/2022
# Azure Data Catalog developer samples Get started developing Azure Data Catalog apps using the Data Catalog REST API. The Data Catalog REST API is a REST-based API that provides programmatic access to Data Catalog resources to register, annotate, and search data assets programmatically.
data-catalog Data Catalog Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-terminology.md
Last updated 02/15/2022
# Azure Data Catalog terminology This article provides an introduction to concepts and terms used in Azure Data Catalog documentation.
data-catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/overview.md
Last updated 02/24/2022
# What is Azure Data Catalog? Azure Data Catalog is a fully managed cloud service that lets users discover the data sources they need and understand the data sources they find. At the same time, Data Catalog helps organizations get more value from their existing investments.
data-catalog Register Data Assets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/register-data-assets-tutorial.md
Last updated 02/24/2022
# Tutorial: Register data assets in Azure Data Catalog In this tutorial, you use the registration tool to register data assets from the database sample with the catalog. Registration is the process of extracting key structural metadata such as names, types, and locations from the data source and the assets it contains, and copying that metadata to the catalog. The data source and data assets remain where they are, but the metadata is used by the catalog to make them more easily discoverable and understandable.
data-catalog Troubleshoot Policy Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/troubleshoot-policy-configuration.md
Last updated 02/10/2022
# Troubleshooting Azure Data Catalog This article describes common troubleshooting concerns for Azure Data Catalog resources.
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-global-parameters.md
There are two ways to integrate global parameters in your continuous integration
* Include global parameters in the ARM template
* Deploy global parameters via a PowerShell script
-For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Azure Purview connection, **PowerShell script** method is required. You can find more about PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+For general use cases, we recommend including global parameters in the ARM template, because this integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). If you use automatic publishing or a Microsoft Purview connection, the **PowerShell script** method is required; you can find more about it later in this article. Global parameters are added as an ARM template parameter by default, because they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
:::image type="content" source="media/author-global-parameters/include-arm-template.png" alt-text="Include in ARM template"::: > [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Azure Purview connection, do not use Include global parameters method; use PowerShell script method.
+> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. If you use automatic publishing or a Microsoft Purview connection, don't use the *Include global parameters* method; use the PowerShell script method.
> [!WARNING]
> You cannot use '-' in the parameter name. You will receive the error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression 'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But you can use '_' in the parameter name.
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 11/09/2021 Last updated : 04/18/2022 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
Sometimes you encounter Authentication issues like HTTP status 401. Especially w
#### Cause
-What we have observed is that the token was obtained from the original tenant, but the service is in guest tenant and trying to use the token to visit DevOps in guest tenant. This is not the expected behavior.
+The token was obtained from the original tenant, but the service is in the guest tenant and tries to use the token to access DevOps in the guest tenant. This type of token access isn't the expected behavior.
#### Recommendation
When trying to publish changes, you get the following error message:
`

### Cause
-You have detached the Git configuration and set it up again with the "Import resources" flag selected, which sets the service as "in sync". This means no change during publication..
+You have detached the Git configuration and set it up again with the "Import resources" flag selected, which sets the service as "in sync". This means no change during publication.
#### Resolution
You are unable to move a data factory from one Resource Group to another, failin
#### Resolution
-You can delete the SSIS-IR and Shared IRs to allow the move operation. If you do not want to delete the integration runtimes, then the best way is to follow the copy and clone document to do the copy and after it's done, delete the old data factory.
+You can delete the SSIS-IR and Shared IRs to allow the move operation. If you don't want to delete the integration runtimes, the best approach is to follow the copy and clone document to copy the factory and, after that's done, delete the old data factory.
### Unable to export and import ARM template
Until recently, it was only possible to publish a pipeline for deployments b
The CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the UI. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the UI and click a button. This method gives your CI/CD pipelines a **true** continuous integration experience. Follow [CI/CD Publishing Improvements](./continuous-integration-delivery-improvements.md) for details.
-### Cannot publish because of 4 MB ARM template limit
+### Cannot publish because of 4-MB ARM template limit
#### Issue
-You cannot deploy because you hit Azure Resource Manager limit of 4 MB total template size. You need a solution to deploy after crossing the limit.
+You can't deploy because you hit the Azure Resource Manager limit of 4 MB for total template size. You need a solution to deploy after crossing the limit.
#### Cause
-Azure Resource Manager restricts template size to be 4-MB. Limit the size of your template to 4-MB, and each parameter file to 64 KB. The 4 MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. But, you have crossed the limit.
+Azure Resource Manager restricts the template size to 4 MB. Limit the size of your template to 4 MB, and each parameter file to 64 KB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions and values for variables and parameters, and you have crossed that limit.
#### Resolution

For small to medium solutions, a single template is easier to understand and maintain. You can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components. Follow best practices at [Using Linked and Nested Templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell).
+### DevOps API limit of 20 MB causes ADF to trigger twice instead of once
+
+#### Issue
+
+While publishing ADF resources, the Azure pipeline triggers twice instead of once.
+
+#### Cause
+
+DevOps has a 20-MB limit on the REST API payload for ARM templates, linked templates, and global parameters. Large ADF resources are reorganized to get around GitHub API rate limits, which may rarely cause ADF DevOps API calls to hit the 20-MB limit.
+
+#### Resolution
+
+Use the ADF **Automated publish** method (preferred) or the **manual trigger** method to trigger the publish once instead of twice.
+
### Cannot connect to GIT Enterprise

#### Issue
-You cannot connect to GIT Enterprise because of permission issues. You can see error like **422 - Unprocessable Entity.**
+You can't connect to GIT Enterprise because of permission issues. You might see an error like **422 - Unprocessable Entity**.
#### Cause
-* You have not configured Oauth for the service.
+* You haven't configured OAuth for the service.
* Your URL is misconfigured. The repoConfiguration should be of type [FactoryGitHubConfiguration](/dotnet/api/microsoft.azure.management.datafactory.models.factorygithubconfiguration?view=azure-dotnet&preserve-view=true)

#### Resolution
An instance of the service, or the resource group containing it, was deleted and
#### Cause
-It is possible to recover the instance only if source control was configured for it with DevOps or Git. This action will bring all the latest published resources, but **will not** restore any unpublished pipelines, datasets, or linked services. If there is no Source control, recovering a deleted instance from the Azure backend is not possible because once the service receives the delete command, the instance is permanently deleted without any backup.
+It is possible to recover the instance only if source control was configured for it with DevOps or Git. This action will bring back all the latest published resources, but **will not** restore any unpublished pipelines, datasets, or linked services. If there is no source control, recovering a deleted instance from the Azure backend isn't possible, because once the service receives the delete command, the instance is permanently deleted without any backup.
#### Resolution
To recover a deleted service instance that has source control configured, refer
* If there was a Self-hosted Integration Runtime in a deleted data factory or Synapse workspace, a new instance of the IR must be created in a new factory or workspace. The on-premises or virtual machine IR instance must be uninstalled and reinstalled, and a new key obtained. After setup of the new IR is completed, the Linked Service must be updated to point to the new IR and the connection tested again, or it will fail with the error **invalid reference.**
-### Cannot deploy to different stage using automatic publish method
+### Can't deploy to different stage using automatic publish method
#### Issue

The customer followed all the necessary steps, like installing the NPM package and setting up a higher stage using Azure DevOps, but the deployment still fails.
The following section is not valid because the package.json folder is not valid.
```
It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for the higher stage has the required JSON artifacts.
-### Git Repository or Azure Purview Connection Disconnected
+### Git Repository or Microsoft Purview Connection Disconnected
#### Issue When deploying a service instance, the git repository or purview connection is disconnected.
You can monitor the pipeline using **SDK**, **Azure Monitor** or [Monitor](./mon
You want to perform unit testing during development and deployment of your pipelines.

#### Cause
-During development and deployment cycles, you may want to unit test your pipeline before you manually or automatically publish your pipeline. Test automation allows you to run more tests, in less time, with guaranteed repeatability. Automatically re-testing all your pipelines before deployment gives you some protection against regression faults. Automated testing is a key component of CI/CD software development approaches: inclusion of automated tests in CI/CD deployment pipelines can significantly improve quality. In long run, tested pipeline artifacts are reused saving you cost and time.
+During development and deployment cycles, you may want to unit test your pipeline before you manually or automatically publish your pipeline. Test automation allows you to run more tests, in less time, with guaranteed repeatability. Automatically retesting all your pipelines before deployment gives you some protection against regression faults. Automated testing is a key component of CI/CD software development approaches: inclusion of automated tests in CI/CD deployment pipelines can significantly improve quality. In the long run, tested pipeline artifacts are reused, saving you cost and time.
#### Resolution

Because customers may have different unit testing requirements and different skillsets, the usual practice is to follow these steps:
If you want to share integration runtimes across all stages, consider using a te
### GIT publish may fail because of PartialTempTemplates files

#### Issue
-When you have 1000s of old temporary ARM json files in PartialTemplates folder, publish may fail.
+When you have thousands of old temporary ARM JSON files in the PartialTemplates folder, publishing may fail.
#### Cause

On publish, ADF fetches every file inside each folder in the collaboration branch. In the past, publishing generated two folders in the publish branch: PartialArmTemplates and LinkedTemplates. PartialArmTemplates files are no longer generated. However, because there can be many old files (thousands) in the PartialArmTemplates folder, this may result in many requests being made to GitHub on publish and the rate limit being hit.
For more help with troubleshooting, try the following resources:
* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
* [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
-* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
+* [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Title: Connect a Data Factory to Azure Purview
-description: Learn about how to connect a Data Factory to Azure Purview
+ Title: Connect a Data Factory to Microsoft Purview
+description: Learn about how to connect a Data Factory to Microsoft Purview
Last updated 10/25/2021
-# Connect Data Factory to Azure Purview (Preview)
+# Connect Data Factory to Microsoft Purview (Preview)
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-[Azure Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Azure Purview. That connection allows you to use Azure Purview for capturing lineage data, and to discover and explore Azure Purview assets.
+[Microsoft Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Microsoft Purview. That connection allows you to use Microsoft Purview for capturing lineage data, and to discover and explore Microsoft Purview assets.
-## Connect Data Factory to Azure Purview
+## Connect Data Factory to Microsoft Purview
-You have two options to connect data factory to Azure Purview:
+You have two options to connect data factory to Microsoft Purview:
-- [Connect to Azure Purview account in Data Factory](#connect-to-azure-purview-account-in-data-factory)-- [Register Data Factory in Azure Purview](#register-data-factory-in-azure-purview)
+- [Connect to Microsoft Purview account in Data Factory](#connect-to-microsoft-purview-account-in-data-factory)
+- [Register Data Factory in Microsoft Purview](#register-data-factory-in-microsoft-purview)
-### Connect to Azure Purview account in Data Factory
+### Connect to Microsoft Purview account in Data Factory
-You need to have **Owner** or **Contributor** role on your data factory to connect to an Azure Purview account.
+You need to have **Owner** or **Contributor** role on your data factory to connect to a Microsoft Purview account.
To establish the connection on Data Factory authoring UI:
-1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to an Azure Purview account**.
+1. In the ADF authoring UI, go to **Manage** -> **Microsoft Purview**, and select **Connect to a Microsoft Purview account**.
- :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering an Azure Purview account.":::
+ :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering a Microsoft Purview account.":::
2. Choose **From Azure subscription** or **Enter manually**. With **From Azure subscription**, you can select the account that you have access to.
-3. Once connected, you can see the name of the Azure Purview account in the tab **Azure Purview account**.
+3. Once connected, you can see the name of the Microsoft Purview account in the tab **Microsoft Purview account**.
-If your Azure Purview account is protected by firewall, create the managed private endpoints for Azure Purview. Learn more about how to let Data Factory [access a secured Azure Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
+If your Microsoft Purview account is protected by firewall, create the managed private endpoints for Microsoft Purview. Learn more about how to let Data Factory [access a secured Microsoft Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
-The Azure Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add `catalogUri` tag additionally.
+The Microsoft Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add the `catalogUri` tag.
```json {
The Azure Purview connection information is stored in the data factory resource
} ```
-### Register Data Factory in Azure Purview
+### Register Data Factory in Microsoft Purview
-For how to register Data Factory in Azure Purview, see [How to connect Azure Data Factory and Azure Purview](../purview/how-to-link-azure-data-factory.md).
+For how to register Data Factory in Microsoft Purview, see [How to connect Azure Data Factory and Microsoft Purview](../purview/how-to-link-azure-data-factory.md).
## Set up authentication
-Data factory's managed identity is used to authenticate lineage push operations from data factory to Azure Purview.
+Data factory's managed identity is used to authenticate lineage push operations from data factory to Microsoft Purview.
-Grant the data factory's managed identity **Data Curator** role on your Azure Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+Grant the data factory's managed identity **Data Curator** role on your Microsoft Purview **root collection**. Learn more about [Access control in Microsoft Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
-When connecting data factory to Azure Purview on authoring UI, ADF tries to add such role assignment automatically. If you have **Collection admins** role on the Azure Purview root collection and have access to Azure Purview account from your network, this operation is done successfully.
+When connecting data factory to Microsoft Purview on the authoring UI, ADF tries to add such a role assignment automatically. If you have the **Collection admins** role on the Microsoft Purview root collection and have access to the Microsoft Purview account from your network, this operation succeeds.
-## Monitor Azure Purview connection
+## Monitor Microsoft Purview connection
-Once you connect the data factory to an Azure Purview account, you see the following page with details on the enabled integration capabilities.
+Once you connect the data factory to a Microsoft Purview account, you see the following page with details on the enabled integration capabilities.
For **Data Lineage - Pipeline**, you may see one of the below statuses:

-- **Connected**: The data factory is successfully connected to the Azure Purview account. Note this indicates data factory is associated with an Azure Purview account and has permission to push lineage to it. If your Azure Purview account is protected by firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Azure Purview account. Learn more from [Access a secured Azure Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
-- **Disconnected**: The data factory cannot push lineage to Azure Purview because Azure Purview Data Curator role is not granted to data factory's managed identity. To fix this issue, go to your Azure Purview account to check the role assignments, and manually grant the role as needed. Learn more from [Set up authentication](#set-up-authentication) section.
+- **Connected**: The data factory is successfully connected to the Microsoft Purview account. Note this indicates data factory is associated with a Microsoft Purview account and has permission to push lineage to it. If your Microsoft Purview account is protected by firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Microsoft Purview account. Learn more from [Access a secured Microsoft Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
+- **Disconnected**: The data factory cannot push lineage to Microsoft Purview because Microsoft Purview Data Curator role is not granted to data factory's managed identity. To fix this issue, go to your Microsoft Purview account to check the role assignments, and manually grant the role as needed. Learn more from [Set up authentication](#set-up-authentication) section.
- **Unknown**: Data Factory cannot check the status. Possible reasons are:
- - Cannot reach the Azure Purview account from your current network because the account is protected by firewall. You can launch the ADF UI from a private network that has connectivity to your Azure Purview account instead.
- - You don't have permission to check role assignments on the Azure Purview account. You can contact the Azure Purview account admin to check the role assignments for you. Learn about the needed Azure Purview role from [Set up authentication](#set-up-authentication) section.
+ - Cannot reach the Microsoft Purview account from your current network because the account is protected by firewall. You can launch the ADF UI from a private network that has connectivity to your Microsoft Purview account instead.
+ - You don't have permission to check role assignments on the Microsoft Purview account. You can contact the Microsoft Purview account admin to check the role assignments for you. Learn about the needed Microsoft Purview role from [Set up authentication](#set-up-authentication) section.
-## Report lineage data to Azure Purview
+## Report lineage data to Microsoft Purview
-Once you connect the data factory to an Azure Purview account, when you execute pipelines, Data Factory push lineage information to the Azure Purview account. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end to end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
+Once you connect the data factory to a Microsoft Purview account, when you execute pipelines, Data Factory pushes lineage information to the Microsoft Purview account. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end-to-end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Microsoft Purview](tutorial-push-lineage-to-purview.md).
-## Discover and explore data using Azure Purview
+## Discover and explore data using Microsoft Purview
-Once you connect the data factory to an Azure Purview account, you can use the search bar at the top center of Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md).
+Once you connect the data factory to a Microsoft Purview account, you can use the search bar at the top center of Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Microsoft Purview](how-to-discover-explore-purview-data.md).
## Next steps
-[Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
+[Tutorial: Push Data Factory lineage data to Microsoft Purview](tutorial-push-lineage-to-purview.md)
-[Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
+[Discover and explore data in ADF using Microsoft Purview](how-to-discover-explore-purview-data.md)
-[Access a secured Azure Purview account](how-to-access-secured-purview-account.md)
+[Access a secured Microsoft Purview account](how-to-access-secured-purview-account.md)
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 01/10/2022 Last updated : 04/12/2022 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
To copy data from Dynamics, the copy activity **source** section supports the fo
> [!IMPORTANT] >- When you copy data from Dynamics, explicit column mapping from Dynamics to sink is optional. But we highly recommend the mapping to ensure a deterministic copy result.
->- When the service imports a schema in the authoring UI, it infers the schema. It does so by sampling the top rows from the Dynamics query result to initialize the source column list. In that case, columns with no values in the top rows are omitted. The same behavior applies to copy executions if there is no explicit mapping. You can review and add more columns into the mapping, which are honored during copy runtime.
+>- When the service imports a schema in the authoring UI, it infers the schema. It does so by sampling the top rows from the Dynamics query result to initialize the source column list. In that case, columns with no values in the top rows are omitted. The same behavior also applies to data preview and copy executions if there is no explicit mapping. You can review and add more columns into the mapping, which are honored during copy runtime.
#### Example
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-lookup-activity.md
Previously updated : 09/09/2021 Last updated : 04/06/2022 # Lookup activity in Azure Data Factory and Azure Synapse Analytics
Note the following:
- The Lookup activity can return up to **5000 rows**; if the result set contains more records, the first 5000 rows will be returned.
- The Lookup activity output supports up to **4 MB** in size; the activity will fail if the size exceeds the limit.
- The longest duration for Lookup activity before timeout is **24 hours**.
-- When you use query or stored procedure to lookup data, make sure to return one and exact one result set. Otherwise, Lookup activity fails.
+
+> [!Note]
+> When you use a query or stored procedure to look up data, make sure to return one and exactly one result set. Otherwise, the Lookup activity fails.
The following data sources are supported for Lookup activity.
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Finally, you must create the private endpoint in your data factory.
## Restrict access for data factory resources using private link
-If you want to restrict access for data factory resources in your subscriptions by private link, please follow [Use portal to create private link for managing Azure resources](https://docs.microsoft.com/azure/azure-resource-manager/management/create-private-link-access-portal?source=docs)
+If you want to restrict access for data factory resources in your subscriptions by private link, please follow [Use portal to create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md?source=docs)
## Known issue

You are unable to access each other's PaaS resources when both sides are exposed to Private Link and private endpoints. This is a known limitation of Private Link and private endpoints.
For example, if A is using a private link to access the portal of data factory A
- [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md)
- [Introduction to Azure Data Factory](introduction.md)
-- [Visual authoring in Azure Data Factory](author-visually.md)
+- [Visual authoring in Azure Data Factory](author-visually.md)
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-tutorials.md
Below is a list of tutorials to help explain and walk through a series of Data F
## Data lineage
-[Azure Purview](turorial-push-lineage-to-purview.md)
+[Microsoft Purview](turorial-push-lineage-to-purview.md)
## Next steps Learn more about Data Factory [pipelines](concepts-pipelines-activities.md) and [data flows](concepts-data-flow-overview.md).
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-access-secured-purview-account.md
Title: Access a secured Azure Purview account
-description: Learn about how to access a firewall protected Azure Purview account through private endpoints from Azure Data Factory
+ Title: Access a secured Microsoft Purview account
+description: Learn about how to access a firewall protected Microsoft Purview account through private endpoints from Azure Data Factory
Last updated 09/02/2021
-# Access a secured Azure Purview account from Azure Data Factory
+# Access a secured Microsoft Purview account from Azure Data Factory
-This article describes how to access a secured Azure Purview account from Azure Data Factory for different integration scenarios.
+This article describes how to access a secured Microsoft Purview account from Azure Data Factory for different integration scenarios.
-## Azure Purview private endpoint deployment scenarios
+## Microsoft Purview private endpoint deployment scenarios
-You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Azure Purview provides different types of private points for various access need: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Azure Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
+You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Microsoft Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Microsoft Purview provides different types of private endpoints for various access needs: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Microsoft Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
-If your Azure Purview account is protected by firewall and denies public access, make sure you follow below checklist to set up the private endpoints so Data Factory can successfully connect to Azure Purview.
+If your Microsoft Purview account is protected by a firewall and denies public access, make sure you follow the checklist below to set up the private endpoints so Data Factory can successfully connect to Microsoft Purview.
-| Scenario | Required Azure Purview private endpoints |
+| Scenario | Required Microsoft Purview private endpoints |
| | |
-| [Run pipeline and report lineage to Azure Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Azure Purview, Azure Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Azure Purview](#managed-private-endpoints-for-azure-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
-| [Discover and explore data using Azure Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Azure Purview data and perform actions, you need to create Azure Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
+| [Run pipeline and report lineage to Microsoft Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Microsoft Purview, Microsoft Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Microsoft Purview](#managed-private-endpoints-for-microsoft-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-microsoft-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
+| [Discover and explore data using Microsoft Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Microsoft Purview data and perform actions, you need to create Microsoft Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-microsoft-purview-accounts). |
-## Managed private endpoints for Azure Purview
+## Managed private endpoints for Microsoft Purview
-[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run pipeline and report lineage to a firewall protected Azure Purview account, create an Azure Integration Runtime with "Virtual network configuration" option enabled, then create the Azure Purview ***account*** and ***ingestion*** managed private endpoints as follows.
+[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run pipelines and report lineage to a firewall-protected Microsoft Purview account, create an Azure Integration Runtime with the "Virtual network configuration" option enabled, then create the Microsoft Purview ***account*** and ***ingestion*** managed private endpoints as follows.
### Create managed private endpoints
-To create managed private endpoints for Azure Purview on Data Factory authoring UI:
+To create managed private endpoints for Microsoft Purview on Data Factory authoring UI:
-1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Azure Purview account or click **Connect to an Azure Purview account** to connect to a new Azure Purview account.
+1. Go to **Manage** -> **Microsoft Purview**, and click **Edit** to edit your existing connected Microsoft Purview account or click **Connect to a Microsoft Purview account** to connect to a new Microsoft Purview account.
2. Select **Yes** for **Create managed private endpoints**. You need to have at least one Azure Integration Runtime with "Virtual network configuration" option enabled in the data factory to see this option.
-3. Click **+ Create all** button to batch create the needed Azure Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Azure Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information.
+3. Click the **+ Create all** button to batch create the needed Microsoft Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Microsoft Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least the **Reader** role on your Microsoft Purview account for Data Factory to retrieve the Microsoft Purview managed resources' information.
- :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Azure Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Microsoft Purview account.":::
4. On the next page, specify a name for the private endpoint. The name is also used to generate the names for the ingestion private endpoints, with suffixes appended.
- :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Azure Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Microsoft Purview account.":::
-5. Click **Create** to create the private endpoints. After creation, 4 private endpoint requests will be generated that must [get approved by an owner of Azure Purview](#approve-private-endpoint-connections).
+5. Click **Create** to create the private endpoints. After creation, four private endpoint requests are generated, and each must [be approved by an owner of Microsoft Purview](#approve-private-endpoint-connections).
-Such batch managed private endpoint creation is provided on the Azure Purview UI only. If you want to create the managed private endpoints programmatically, you need to create those PEs individually. You can find Azure Purview managed resources' information from Azure portal -> your Azure Purview account -> Managed resources.
+This batch creation of managed private endpoints is provided on the Microsoft Purview UI only. To create the managed private endpoints programmatically, create each private endpoint individually. You can find the Microsoft Purview managed resources' information in the Azure portal -> your Microsoft Purview account -> **Managed resources**.
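If you do script the individual creation, one hedged option is to call the Data Factory REST API through `Invoke-AzRestMethod`. The sketch below is illustrative only: the resource IDs are placeholders, the managed virtual network is assumed to be named `default`, and the `2018-06-01` api-version should be verified against the current Data Factory REST reference.

```powershell
# Illustrative sketch only: create one managed private endpoint programmatically.
# All IDs and names below are placeholders.
$sub     = '<subscription-id>'
$rg      = '<resource-group>'
$factory = '<data-factory-name>'

# Resource ID of the target Microsoft Purview account (groupId 'account').
# For the ingestion endpoints, repeat with the managed Blob storage, Queue
# storage, and Event Hubs namespace resource IDs and their group IDs.
$target = '/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>'

$payload = @{
    properties = @{
        privateLinkResourceId = $target
        groupId               = 'account'
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $payload `
    -Path "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.DataFactory/factories/$factory/managedVirtualNetworks/default/managedPrivateEndpoints/purview-account-pe?api-version=2018-06-01"
```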
### Approve private endpoint connections
-After you create the managed private endpoints for Azure Purview, you see "Pending" state first. The Azure Purview owner need to approve the private endpoint connections for each resource.
+After you create the managed private endpoints for Microsoft Purview, they first appear in the "Pending" state. A Microsoft Purview owner needs to approve the private endpoint connection for each resource.
-If you have permission to approve the Azure Purview private endpoint connection, from Data Factory UI:
+If you have permission to approve the Microsoft Purview private endpoint connection, from Data Factory UI:
-1. Go to **Manage** -> **Azure Purview** -> **Edit**
+1. Go to **Manage** -> **Microsoft Purview** -> **Edit**
2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name.
3. Click **Manage approvals in Azure portal**, which brings you to the resource.
4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named `data_factory_name.your_defined_private_endpoint_name`, with the description "Requested by data_factory_name".
5. Repeat this operation for all private endpoints.
-If you don't have permission to approve the Azure Purview private endpoint connection, ask the Azure Purview account owner to do as follows.
+If you don't have permission to approve the Microsoft Purview private endpoint connection, ask the Microsoft Purview account owner to approve it as follows.
-- For *account* private endpoint, go to Azure portal -> your Azure Purview account -> Networking -> Private endpoint connection to approve.
-- For *ingestion* private endpoints, go to Azure portal -> your Azure Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+- For *account* private endpoint, go to Azure portal -> your Microsoft Purview account -> Networking -> Private endpoint connection to approve.
+- For *ingestion* private endpoints, go to Azure portal -> your Microsoft Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
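If the owner prefers scripting the approvals, the following is a minimal Azure PowerShell sketch, assuming the Az.Network module and a placeholder Purview account resource ID. For the *ingestion* endpoints, run the same loop against the managed Storage account and Event Hubs namespace resource IDs.

```powershell
# Hedged sketch: approve all pending private endpoint connections on one resource.
$purviewId = '/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>'

Get-AzPrivateEndpointConnection -PrivateLinkResourceId $purviewId |
    Where-Object { $_.PrivateLinkServiceConnectionState.Status -eq 'Pending' } |
    ForEach-Object {
        Approve-AzPrivateEndpointConnection -ResourceId $_.Id `
            -Description 'Approved for Data Factory managed private endpoint'
    }
```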
### Monitor managed private endpoints
-You can monitor the created managed private endpoints for Azure Purview at two places:
+You can monitor the created managed private endpoints for Microsoft Purview at two places:
-- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Azure Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.
-- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Azure Purview account, you see Azure Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
+- Go to **Manage** -> **Microsoft Purview** -> **Edit** to open your existing connected Microsoft Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Microsoft Purview account for Data Factory to retrieve the Microsoft Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.
+- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Microsoft Purview account, you see Microsoft Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
## Next steps
-- [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
-- [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
-- [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
+- [Connect Data Factory to Microsoft Purview](connect-data-factory-to-azure-purview.md)
+- [Tutorial: Push Data Factory lineage data to Microsoft Purview](tutorial-push-lineage-to-purview.md)
+- [Discover and explore data in ADF using Microsoft Purview](how-to-discover-explore-purview-data.md)
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-discover-explore-purview-data.md
Title: Discover and explore data in ADF using Azure Purview
-description: Learn how to discover, explore data in Azure Data Factory using Azure Purview
+ Title: Discover and explore data in ADF using Microsoft Purview
+description: Learn how to discover, explore data in Azure Data Factory using Microsoft Purview
Last updated 08/10/2021
-# Discover and explore data in ADF using Azure Purview
+# Discover and explore data in ADF using Microsoft Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this article, you will register an Azure Purview Account to a Data Factory. That connection allows you to discover Azure Purview assets and interact with them through ADF capabilities.
+In this article, you will register a Microsoft Purview account to a Data Factory. That connection allows you to discover Microsoft Purview assets and interact with them through ADF capabilities.
You can perform the following tasks in ADF:
-- Use the search box at the top to find Azure Purview assets based on keywords
+- Use the search box at the top to find Microsoft Purview assets based on keywords
- Understand the data based on metadata, lineage, annotations
- Connect those data to your data factory with linked services or datasets
## Prerequisites
-- [Azure Purview account](../purview/create-catalog-portal.md)
+- [Microsoft Purview account](../purview/create-catalog-portal.md)
- [Data Factory](./quickstart-create-data-factory-portal.md)
-- [Connect an Azure Purview Account into Data Factory](./connect-data-factory-to-azure-purview.md)
+- [Connect a Microsoft Purview account to Data Factory](./connect-data-factory-to-azure-purview.md)
-## Using Azure Purview in Data Factory
+## Using Microsoft Purview in Data Factory
-The use Azure Purview in Data Factory requires you to have access to that Azure Purview account. Data Factory passes-through your Azure Purview permission. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
+To use Microsoft Purview in Data Factory, you need access to that Microsoft Purview account. Data Factory passes through your Microsoft Purview permissions. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Microsoft Purview.
### Data discovery: search datasets
-To discover data registered and scanned by Azure Purview, you can use the Search bar at the top center of Data Factory portal. Make sure that you select Azure Purview to search for all of your organization data.
+To discover data registered and scanned by Microsoft Purview, you can use the Search bar at the top center of the Data Factory portal. Make sure that you select Microsoft Purview to search for all of your organization's data.
:::image type="content" source="./media/data-factory-purview/search-dataset.png" alt-text="Screenshot for performing over datasets.":::
### Actions that you can perform over datasets with Data Factory resources
-You can directly create Linked Service, Dataset, or dataflow over the data you search by Azure Purview.
+You can directly create a linked service, dataset, or data flow over the data you search for by using Microsoft Purview.
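Outside the ADF UI, you can run the same kind of search against the catalog with the Microsoft Purview search REST API. The sketch below is an assumption-laden illustration: the account name is a placeholder, and the `api-version` should be checked against the current Purview REST reference.

```powershell
# Hedged sketch: search the Purview catalog for assets by keyword.
$account = '<purview-account-name>'
$token   = (Get-AzAccessToken -ResourceUrl 'https://purview.azure.net').Token

$body = @{ keywords = 'sales'; limit = 10 } | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://$account.purview.azure.com/catalog/api/search/query?api-version=2021-05-01-preview" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' -Body $body
```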
## Next steps
-[Tutorial: Push Data Factory lineage data to Azure Purview](turorial-push-lineage-to-purview.md)
+[Tutorial: Push Data Factory lineage data to Microsoft Purview](tutorial-push-lineage-to-purview.md)
-[Connect an Azure Purview Account into Data Factory](connect-data-factory-to-azure-purview.md)
+[Connect a Microsoft Purview account to Data Factory](connect-data-factory-to-azure-purview.md)
-[How to Search Data in Azure Purview Data Catalog](../purview/how-to-search-catalog.md)
+[How to Search Data in Microsoft Purview Data Catalog](../purview/how-to-search-catalog.md)
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
The following data sources have native Private Endpoint support and can be conne
- Azure Key Vault
- Azure Machine Learning
- Azure Private Link Service
-- Azure Purview
+- Microsoft Purview
- Azure SQL Database
- Azure SQL Managed Instance (public preview)
- Azure Synapse Analytics
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Title: Push Data Factory lineage data to Azure Purview
-description: Learn about how to push Data Factory lineage data to Azure Purview
+ Title: Push Data Factory lineage data to Microsoft Purview
+description: Learn about how to push Data Factory lineage data to Microsoft Purview
Last updated 08/10/2021
-# Push Data Factory lineage data to Azure Purview
+# Push Data Factory lineage data to Microsoft Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this tutorial, you'll use the Data Factory user interface (UI) to create a pipeline that run activities and report lineage data to Azure Purview account. Then you can view all the lineage information in your Azure Purview account.
+In this tutorial, you'll use the Data Factory user interface (UI) to create a pipeline that runs activities and reports lineage data to a Microsoft Purview account. Then you can view all the lineage information in your Microsoft Purview account.
Currently, lineage is supported for Copy, Data Flow, and Execute SSIS activities. Learn more details on the supported capabilities from [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities).
Currently, lineage is supported for Copy, Data Flow, and Execute SSIS activities
* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
* **Azure Data Factory**. If you don't have an Azure Data Factory, see [Create an Azure Data Factory](./quickstart-create-data-factory-portal.md).
-* **Azure Purview account**. The Azure Purview account captures all lineage data generated by data factory. If you don't have an Azure Purview account, see [Create an Azure Purview](../purview/create-catalog-portal.md).
+* **Microsoft Purview account**. The Microsoft Purview account captures all lineage data generated by the data factory. If you don't have a Microsoft Purview account, see [Create a Microsoft Purview account](../purview/create-catalog-portal.md).
-## Run pipeline and push lineage data to Azure Purview
+## Run pipeline and push lineage data to Microsoft Purview
-### Step 1: Connect Data Factory to your Azure Purview account
+### Step 1: Connect Data Factory to your Microsoft Purview account
-You can establish the connection between Data Factory and Azure Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
+You can establish the connection between Data Factory and Microsoft Purview account by following the steps in [Connect Data Factory to Microsoft Purview](connect-data-factory-to-azure-purview.md).
### Step 2: Run pipeline in Data Factory
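You can run the pipeline from the Data Factory UI, or, as a hedged alternative, trigger it with Azure PowerShell. The names below are placeholders for whichever pipeline you created.

```powershell
# Minimal sketch: trigger a pipeline run, then check its status.
$runId = Invoke-AzDataFactoryV2Pipeline -ResourceGroupName '<resource-group>' `
    -DataFactoryName '<data-factory-name>' -PipelineName '<pipeline-name>'

Get-AzDataFactoryV2PipelineRun -ResourceGroupName '<resource-group>' `
    -DataFactoryName '<data-factory-name>' -PipelineRunId $runId
```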
After you run the pipeline, in the [pipeline monitoring view](monitor-visually.m
:::image type="content" source="./media/data-factory-purview/monitor-lineage-reporting-status.png" alt-text="Monitor the lineage reporting status in pipeline monitoring view.":::
-### Step 4: View lineage information in your Azure Purview account
+### Step 4: View lineage information in your Microsoft Purview account
-On Azure Purview UI, you can browse assets and choose type "Azure Data Factory". You can also search the Data Catalog using keywords.
+On the Microsoft Purview UI, you can browse assets and choose the type "Azure Data Factory". You can also search the Data Catalog using keywords.
On the activity asset, click the Lineage tab to see all the lineage information.
- Copy activity:
- :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Azure Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Microsoft Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
- Data Flow activity:
- :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Azure Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Microsoft Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
> [!NOTE]
> For the lineage of Dataflow activity, we only support source and sink. The lineage for Dataflow transformation is not supported yet.
- Execute SSIS Package activity:
- :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Azure Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Microsoft Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
> [!NOTE]
> For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
On the activity asset, click the Lineage tab, you can see all the lineage inform
[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
-[Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
+[Connect Data Factory to Microsoft Purview](connect-data-factory-to-azure-purview.md)
databox Data Box Troubleshoot Share Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-share-access.md
Previously updated : 08/23/2021 Last updated : 04/15/2022
The failed connection attempts may include background processes, such as retries
**Suggested resolution.** To connect to an SMB share after a share account lockout, do these steps:
-1. Verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
+1. If the dashboard status indicates the device is locked, unlock the device from the top command bar and retry the connection.
+
+ :::image type="content" source="media/data-box-troubleshoot-share-access/dashboard-locked.png" alt-text="Screenshot of the dashboard locked status.":::
+
+1. If you are still unable to connect to an SMB share after unlocking your device, verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
![Screenshot of Access Share And Copy Data screen for an SMB share on a Data Box. Copy icons for the account, username, and password are highlighted.](media/data-box-troubleshoot-share-access/get-share-credentials-01.png)
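Before retrying the credentials, a quick client-side sanity check can confirm the device's SMB endpoint is reachable. This is a generic sketch; the device IP, share, and account names are placeholders copied from the dialog box above.

```powershell
$deviceIp  = '<device-ip>'       # from the Connect and copy page
$shareName = '<share-name>'
$account   = '<account-name>'

# Confirm the SMB port is reachable from the client.
Test-NetConnection -ComputerName $deviceIp -Port 445

# Map the share using the credentials from the local web UI
# (you're prompted for the password).
net use "\\$deviceIp\$shareName" /u:$account
```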
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud
-description: Use Azure Purview's data sensitivity classifications in Microsoft Defender for Cloud
+description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud
Last updated 11/09/2021
Last updated 11/09/2021
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-[Azure Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Azure Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
+[Microsoft Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Microsoft Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
-Microsoft Defender for Cloud customers using Azure Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
+Microsoft Defender for Cloud customers using Microsoft Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
-This page explains the integration of Azure Purview's data sensitivity classification labels within Defender for Cloud.
+This page explains the integration of Microsoft Purview's data sensitivity classification labels within Defender for Cloud.
## Availability

|Aspect|Details|
|-|:-|
|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|You'll need an Azure Purview account to create the data sensitivity classifications and run the scans. Viewing the scan results and using the output is free for Defender for Cloud users|
+|Pricing:|You'll need a Microsoft Purview account to create the data sensitivity classifications and run the scans. Viewing the scan results and using the output is free for Defender for Cloud users.|
|Required roles and permissions:|**Security admin** and **Security contributor**|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
Defender for Cloud includes two mechanisms to help prioritize recommendations an
However, where possible, you'd want to focus the security team's efforts on risks to the organization's **data**. If two recommendations have equal impact on your secure score, but one relates to a resource with sensitive data, ideally you'd include that knowledge when determining prioritization.
-Azure Purview's data sensitivity classifications and data sensitivity labels provide that knowledge.
+Microsoft Purview's data sensitivity classifications and data sensitivity labels provide that knowledge.
## Discover resources with sensitive data
-To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Azure Purview in multiple locations.
+To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Microsoft Purview in multiple locations.
> [!TIP]
-> If a resource is scanned by multiple Azure Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
+> If a resource is scanned by multiple Microsoft Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
### Alerts and recommendations pages
This vital additional layer of metadata helps solve the triage challenge and ens
### Inventory filters
-The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Azure Purview has discovered sensitive data.
+The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Microsoft Purview has discovered sensitive data.
:::image type="content" source="./media/information-protection/information-protection-inventory-filters.png" alt-text="Screenshot of information protection filters in Microsoft Defender for Cloud's asset inventory page." lightbox="./media/information-protection/information-protection-inventory-filters.png":::
When you select a single resource - whether from an alert, recommendation, or th
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the Microsoft Defender plans, you can see outstanding security alerts for that specific resource too.
-When reviewing the health of a specific resource, you'll see the Azure Purview information on this page and can use it determine what data has been discovered on this resource alongside the Azure Purview account used to scan the resource.
+When reviewing the health of a specific resource, you'll see the Microsoft Purview information on this page, and you can use it to determine what data has been discovered on this resource, alongside the Microsoft Purview account used to scan the resource.
### Overview tile
-The dedicated **Information protection** tile in Defender for Cloud's [overview dashboard](overview-page.md) shows Azure Purview's coverage. It also shows the resource types with the most sensitive data discovered.
+The dedicated **Information protection** tile in Defender for Cloud's [overview dashboard](overview-page.md) shows Microsoft Purview's coverage. It also shows the resource types with the most sensitive data discovered.
-A graph shows the number of recommendations and alerts by classified resource types. The tile also includes a link to Azure Purview to scan additional resources. Select the tile to see classified resources in Defender for Cloud's asset inventory page.
+A graph shows the number of recommendations and alerts by classified resource types. The tile also includes a link to Microsoft Purview to scan additional resources. Select the tile to see classified resources in Defender for Cloud's asset inventory page.
:::image type="content" source="./media/information-protection/overview-dashboard-information-protection.png" alt-text="Screenshot of the information protection tile in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/information-protection/overview-dashboard-information-protection.png":::
A graph shows the number of recommendations and alerts by classified resource ty
For related information, see:
-- [What is Azure Purview?](../purview/overview.md)
-- [Azure Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
-- [Azure Purview deployment best practices](../purview/deployment-best-practices.md)
-- [How to label to your data in Azure Purview](../purview/how-to-automatically-label-your-content.md)
+- [What is Microsoft Purview?](../purview/overview.md)
+- [Microsoft Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
+- [Microsoft Purview deployment best practices](../purview/deployment-best-practices.md)
+- [How to label your data in Microsoft Purview](../purview/how-to-automatically-label-your-content.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
In the center of the page are the **feature tiles**, each linking to a high prof
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md).
- **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md).
- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
-- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Azure Purview data sensitivity classifications. [Learn more](information-protection.md).
+- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Microsoft Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Microsoft Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Microsoft Purview data sensitivity classifications. [Learn more](information-protection.md).
### Insights
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Our Ignite release includes:
- [Azure Security Center and Azure Defender become Microsoft Defender for Cloud](#azure-security-center-and-azure-defender-become-microsoft-defender-for-cloud)
- [Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2](#native-cspm-for-aws-and-threat-protection-for-amazon-eks-and-aws-ec2)
-- [Prioritize security actions by data sensitivity (powered by Azure Purview) (in preview)](#prioritize-security-actions-by-data-sensitivity-powered-by-azure-purview-in-preview)
+- [Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)](#prioritize-security-actions-by-data-sensitivity-powered-by-microsoft-purview-in-preview)
- [Expanded security control assessments with Azure Security Benchmark v3](#expanded-security-control-assessments-with-azure-security-benchmark-v3)
- [Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)](#microsoft-sentinel-connectors-optional-bi-directional-alert-synchronization-released-for-general-availability-ga)
- [New recommendation to push Azure Kubernetes Service (AKS) logs to Sentinel](#new-recommendation-to-push-azure-kubernetes-service-aks-logs-to-sentinel)
When you've added your AWS accounts, Defender for Cloud protects your AWS resour
Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
-### Prioritize security actions by data sensitivity (powered by Azure Purview) (in preview)
+### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)
Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments.
-To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Azure Purview](../purview/overview.md). Azure Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multi-cloud, and on-premises workloads.
+To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multi-cloud, and on-premises workloads.
-The integration with Azure Purview extends your security visibility in Defender for Cloud from the infrastructure level down to the data, enabling an entirely new way to prioritize resources and security activities for your security teams.
+The integration with Microsoft Purview extends your security visibility in Defender for Cloud from the infrastructure level down to the data, enabling an entirely new way to prioritize resources and security activities for your security teams.
Learn more in [Prioritize security actions by data sensitivity](information-protection.md).
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Specifically for OT networks, OT network sensors also provide the following anal
Defender for IoT provides hybrid network support using the following management options:
-- **The Azure portal**. Use the Azure portal as a single pane of glass view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](/azure/sentinel/iot-solution?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended), and more.
+- **The Azure portal**. Use the Azure portal as a single pane of glass to view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json), and more.
Also use the Azure portal to obtain new appliances and software updates, onboard and maintain your sensors in Defender for IoT, and update threat intelligence packages.
For more information, see:
- [Frequently asked questions](resources-frequently-asked-questions.md)
- [Sensor connection methods](architecture-connections.md)
-- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+- [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
Attach the gateway to the `GatewaySubnet` subnet you created [earlier](#step-2-d
For more information, see:
-- [About VPN gateways](/azure/vpn-gateway/vpn-gateway-about-vpngateways)
-- [Connect a virtual network to an ExpressRoute circuit using the portal](/azure/expressroute/expressroute-howto-linkvnet-portal-resource-manager)
-- [Modify local network gateway settings using the Azure portal](/azure/vpn-gateway/vpn-gateway-modify-local-network-gateway-portal)
+- [About VPN gateways](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
+- [Connect a virtual network to an ExpressRoute circuit using the portal](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
+- [Modify local network gateway settings using the Azure portal](../../vpn-gateway/vpn-gateway-modify-local-network-gateway-portal.md)
### Step 4: Define network security groups
For more information, see:
Define an Azure virtual machine scale set to create and manage a group of load-balanced virtual machines, where you can automatically increase or decrease the number of virtual machines as needed.
-Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are virtual machine scale sets?](/azure/virtual-machine-scale-sets/overview)
+Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are virtual machine scale sets?](../../virtual-machine-scale-sets/overview.md)
1. Create a scale set with the following parameter definitions:
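As a hedged illustration of this step, the sketch below uses the `New-AzVmss` simple parameter set. The names and instance count are placeholders, not Defender for IoT requirements; substitute the parameter definitions from this procedure.

```powershell
# Hedged sketch only: create a scale set behind a new load balancer.
# The default platform image is used; pass -ImageName to pick another.
$cred = Get-Credential   # local admin credentials for the instances

New-AzVmss -ResourceGroupName 'sensor-rg' -Location 'westeurope' `
    -VMScaleSetName 'sensor-vmss' -VirtualNetworkName 'sensor-vnet' `
    -SubnetName 'sensor-subnet' -PublicIpAddressName 'sensor-vmss-pip' `
    -LoadBalancerName 'sensor-vmss-lb' -UpgradePolicyMode 'Automatic' `
    -InstanceCount 2 -Credential $cred
```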
Use the following procedure to create a scale set to use with your sensor connec
Azure Load Balancer is a layer-4 load balancer that distributes incoming traffic among healthy virtual machine instances using a hash-based distribution algorithm.
-For more information, see the [Azure Load Balancer documentation](/azure/load-balancer/load-balancer-overview).
+For more information, see the [Azure Load Balancer documentation](../../load-balancer/load-balancer-overview.md).
To create an Azure load balancer for your sensor connection:
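If you'd rather script it than use the portal, the following Azure PowerShell sketch creates a basic layer-4 load balancer. Every name and port here is a placeholder assumption, not a Defender for IoT requirement.

```powershell
# Hedged sketch: a public Standard load balancer with one rule and probe.
$rg  = 'sensor-rg'
$loc = 'westeurope'

$pip   = New-AzPublicIpAddress -ResourceGroupName $rg -Name 'sensor-lb-pip' `
             -Location $loc -Sku Standard -AllocationMethod Static
$fe    = New-AzLoadBalancerFrontendIpConfig -Name 'sensor-fe' -PublicIpAddress $pip
$be    = New-AzLoadBalancerBackendAddressPoolConfig -Name 'sensor-be'
$probe = New-AzLoadBalancerProbeConfig -Name 'probe-443' -Protocol Tcp -Port 443 `
             -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name 'rule-443' -Protocol Tcp `
             -FrontendPort 443 -BackendPort 443 -FrontendIpConfiguration $fe `
             -BackendAddressPool $be -Probe $probe

New-AzLoadBalancer -ResourceGroupName $rg -Name 'sensor-lb' -Location $loc `
    -Sku Standard -FrontendIpConfiguration $fe -BackendAddressPool $be `
    -Probe $probe -LoadBalancingRule $rule
```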
While you'll need to migrate your connections before the [legacy version reaches
## Next steps
-For more information, see [Sensor connection methods](architecture-connections.md).
+For more information, see [Sensor connection methods](architecture-connections.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
Microsoft Defender for IoT is a unified security solution for identifying IoT an
**For end-user organizations**, Microsoft Defender for IoT provides an agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments or completely on-premises.
-**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](/azure/defender-for-iot/device-builders/overview).
+**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/overview.md).
## Agentless device monitoring
For more information, see:
- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
- [Microsoft Defender for IoT architecture](architecture.md)
-- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see [Manage your device inventory from the Azure portal](h
### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
-[Azure Monitor workbooks](/azure/azure-monitor/visualize/workbooks-overview) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](/azure/governance/resource-graph/).
+[Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](../../governance/resource-graph/index.yml).
In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
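Workbooks visualize Azure Resource Graph queries, and you can prototype the same kind of query in Azure PowerShell. This generic sketch assumes the `Az.ResourceGraph` module; the query itself is only an example, not one of the built-in workbook queries.

```powershell
# Hedged sketch: run a Resource Graph query like the ones behind workbooks.
Install-Module Az.ResourceGraph -Scope CurrentUser   # one-time setup

Search-AzGraph -Query @"
resources
| summarize count() by type
| top 10 by count_
"@
```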
Unicode characters are now supported when working with sensor certificate passph
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Learn more about viewing dashboards and reports on the sensor console:
Learn more about Azure Monitor workbooks and Azure Resource Graph:
-- [Azure Resource Graph documentation](/azure/governance/resource-graph/)
-- [Azure Monitor workbook documentation](/azure/azure-monitor/visualize/workbooks-overview)
-- [Kusto Query Language (KQL) documentation](/azure/data-explorer/kusto/query/)
+- [Azure Resource Graph documentation](../../governance/resource-graph/index.yml)
+- [Azure Monitor workbook documentation](../../azure-monitor/visualize/workbooks-overview.md)
+- [Kusto Query Language (KQL) documentation](/azure/data-explorer/kusto/query/)
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
Learn about Azure and DevTest Labs by using the following resources:
### Enroll all users in Azure AD
-For management, such as adding users or adding lab owners, all lab users must belong to the [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory) tenant for the Azure subscription the pilot uses. Many enterprises set up [hybrid identity](/azure/active-directory/hybrid/whatis-hybrid-identity) to enable users to use their on-premises identities in the cloud. You don't need a hybrid identity for a DevTest Labs proof of concept.
+For management, such as adding users or adding lab owners, all lab users must belong to the [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory) tenant for the Azure subscription the pilot uses. Many enterprises set up [hybrid identity](../active-directory/hybrid/whatis-hybrid-identity.md) to enable users to use their on-premises identities in the cloud. You don't need a hybrid identity for a DevTest Labs proof of concept.
## Scope the proof of concept
A full DevTest Labs solution includes some important planning and design decisio
### Subscription topology
-The enterprise-level requirements for resources in Azure can extend beyond the [available quotas within a single subscription](/azure/azure-resource-manager/management/azure-subscription-service-limits). You might need several Azure subscriptions, or you might need to make service requests to increase initial subscription limits. For more information, see [Scalability considerations](devtest-lab-reference-architecture.md#scalability-considerations).
+The enterprise-level requirements for resources in Azure can extend beyond the [available quotas within a single subscription](../azure-resource-manager/management/azure-subscription-service-limits.md). You might need several Azure subscriptions, or you might need to make service requests to increase initial subscription limits. For more information, see [Scalability considerations](devtest-lab-reference-architecture.md#scalability-considerations).
It's important to decide how to distribute resources across subscriptions before final, full-scale rollout, because it's difficult to move resources to another subscription later. For example, you can't move a lab to another subscription after it's created. The [Subscription decision guide](/azure/architecture/cloud-adoption/decision-guides/subscriptions) is a valuable planning resource.
### Network topology
-The [default network infrastructure](/azure/app-service/networking-features) that DevTest Labs automatically creates might not meet requirements and constraints for enterprise users. For example, enterprises often use:
+The [default network infrastructure](../app-service/networking-features.md) that DevTest Labs automatically creates might not meet requirements and constraints for enterprise users. For example, enterprises often use:
- [Azure ExpressRoute-connected virtual networks](/azure/architecture/reference-architectures/hybrid-networking) for connecting on-premises networks to Azure.
-- [Peered virtual networks](/azure/virtual-network/virtual-network-peering-overview) in a [hub-spoke configuration](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) for connecting virtual networks across subscriptions.
-- [Forced tunneling](/azure/vpn-gateway/vpn-gateway-forced-tunneling-rm) to limit traffic to on-premises networks.
+- [Peered virtual networks](../virtual-network/virtual-network-peering-overview.md) in a [hub-spoke configuration](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) for connecting virtual networks across subscriptions.
+- [Forced tunneling](../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) to limit traffic to on-premises networks.
For more information, see [Networking components](devtest-lab-reference-architecture.md#networking-components).
The solution has the following requirements:
## Next steps
- [Scale up a DevTest Labs deployment](devtest-lab-guidance-orchestrate-implementation.md)
-- [Orchestrate DevTest Labs implementation](devtest-lab-guidance-orchestrate-implementation.md)
+- [Orchestrate DevTest Labs implementation](devtest-lab-guidance-orchestrate-implementation.md)
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
To add a member:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in the Azure Active Directory (Azure AD). For information about adding an external user to Azure AD as a guest, see [Add a new guest user](/azure/active-directory/fundamentals/add-users-azure-active-directory#add-a-new-guest-user). If the user isn't in Azure AD, use the portal procedure instead.
+You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in the Azure Active Directory (Azure AD). For information about adding an external user to Azure AD as a guest, see [Add a new guest user](../active-directory/fundamentals/add-users-azure-active-directory.md#add-a-new-guest-user). If the user isn't in Azure AD, use the portal procedure instead.
In the following script, update the parameter values under the `# Values to change` comment. You can get the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab's main page in the Azure portal.
New-AzRoleAssignment -ObjectId $adObject.Id -RoleDefinitionName 'DevTest Labs Us
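The snippet above is truncated here. For orientation, a minimal sketch of such an assignment looks like the following, with placeholder values and the scope built from the lab's resource ID; it's a simplified reconstruction, not the article's full script.

```powershell
# Hedged sketch: assign the DevTest Labs User role at the lab scope.
$subscriptionId   = '<subscription-id>'
$labResourceGroup = '<lab-resource-group>'
$labName          = '<lab-name>'

$adObject = Get-AzADUser -UserPrincipalName '<user@contoso.com>'
$labId    = "/subscriptions/$subscriptionId/resourceGroups/$labResourceGroup" +
            "/providers/Microsoft.DevTestLab/labs/$labName"

New-AzRoleAssignment -ObjectId $adObject.Id `
    -RoleDefinitionName 'DevTest Labs User' -Scope $labId
```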
## Next steps
- [Customize permissions with custom roles](devtest-lab-grant-user-permissions-to-specific-lab-policies.md)
-- [Automate adding lab users](automate-add-lab-user.md)
+- [Automate adding lab users](automate-add-lab-user.md)
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Last updated 03/29/2022
# Attach or detach a data disk for a lab virtual machine in Azure DevTest Labs
-This article explains how to attach and detach a lab virtual machine (VM) data disk in Azure DevTest Labs. You can create, attach, detach, and reattach [data disks](/azure/virtual-machines/managed-disks-overview) for lab VMs that you own. This functionality is useful for managing storage or software separately from individual VMs.
+This article explains how to attach and detach a lab virtual machine (VM) data disk in Azure DevTest Labs. You can create, attach, detach, and reattach [data disks](../virtual-machines/managed-disks-overview.md) for lab VMs that you own. This functionality is useful for managing storage or software separately from individual VMs.
## Prerequisites
-To attach or detach a data disk, you need to own the lab VM, and the VM must be running. The VM size determines how many data disks you can attach. For more information, see [Sizes for virtual machines](/azure/virtual-machines/sizes).
+To attach or detach a data disk, you need to own the lab VM, and the VM must be running. The VM size determines how many data disks you can attach. For more information, see [Sizes for virtual machines](../virtual-machines/sizes.md).
## Create and attach a new data disk
Follow these steps to create and attach a new managed data disk for a DevTest La
1. Fill out the **Attach new disk** form as follows:
   - For **Name**, enter a unique name.
- - For **Disk type**, select a [disk type](/azure/virtual-machines/disks-types) from the drop-down list.
+ - For **Disk type**, select a [disk type](../virtual-machines/disks-types.md) from the drop-down list.
   - For **Size (GiB)**, enter a size in gigabytes.
1. Select **OK**.
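A newly attached disk is raw until you initialize it inside the VM. As a hedged sketch for a Windows lab VM (the label and file system are arbitrary choices, not requirements):

```powershell
# Run inside the VM: bring any raw disk online, partition, and format it.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'DataDisk'
```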
You can also delete a detached data disk, by selecting **Delete** from the conte
## Next steps
-For information about transferring data disks for claimable lab VMs, see [Transfer the data disk](devtest-lab-add-claimable-vm.md#transfer-the-data-disk).
+For information about transferring data disks for claimable lab VMs, see [Transfer the data disk](devtest-lab-add-claimable-vm.md#transfer-the-data-disk).
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md
On-premises, a [remote desktop gateway](/windows-server/remote/remote-desktop-se
### Networking components
-In this architecture, [Azure Active Directory (Azure AD)](/azure/active-directory/fundamentals/active-directory-whatis) provides identity and access management across all networks. Lab VMs usually have a local administrative account for access. If there's an Azure AD, on-premises, or [Azure AD Domain Services](../active-directory-domain-services/overview.md) domain available, you can join lab VMs to the domain. Users can then use their domain-based identities to connect to the VMs.
+In this architecture, [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) provides identity and access management across all networks. Lab VMs usually have a local administrative account for access. If there's an Azure AD, on-premises, or [Azure AD Domain Services](../active-directory-domain-services/overview.md) domain available, you can join lab VMs to the domain. Users can then use their domain-based identities to connect to the VMs.
[Azure networking topology](../networking/fundamentals/networking-overview.md) controls how lab resources access and communicate with on-premises networks and the internet. This architecture shows a common way that enterprises network DevTest Labs. The labs connect with [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) in a [hub-spoke configuration](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke), through the ExpressRoute or site-to-site VPN connection, to the on-premises network.
DevTest Labs automatically benefits from built-in Azure security features. To re
Another security consideration is the permission level you grant to lab users. Lab owners use Azure role-based access control (Azure RBAC) to assign roles to users and set resource and access-level permissions. The most common DevTest Labs permissions are Owner, Contributor, and User. You can also create and assign [custom roles](devtest-lab-grant-user-permissions-to-specific-lab-policies.md). For more information, see [Add owners and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
## Next steps
-See the next article in this series: [Deliver a proof of concept](deliver-proof-concept.md).
+See the next article in this series: [Deliver a proof of concept](deliver-proof-concept.md).
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
To troubleshoot connectivity issues to the Azure Storage account:
- Check for added network security groups (NSGs). If a subscription policy was added to automatically configure NSGs in all virtual networks, it would affect the virtual network used for creating lab VMs.
-- Verify NSG rules. Use [IP flow verify](../network-watcher/diagnose-vm-network-traffic-filtering-problem.md#use-ip-flow-verify) to determine whether an NSG rule is blocking traffic to or from a VM. You can also review effective security group rules to ensure that an inbound **Allow** NSG rule exists. For more information, see [Using effective security rules to troubleshoot VM traffic flow](/azure/virtual-network/diagnose-network-traffic-filter-problem).
+- Verify NSG rules. Use [IP flow verify](../network-watcher/diagnose-vm-network-traffic-filtering-problem.md#use-ip-flow-verify) to determine whether an NSG rule is blocking traffic to or from a VM. You can also review effective security group rules to ensure that an inbound **Allow** NSG rule exists. For more information, see [Using effective security rules to troubleshoot VM traffic flow](../virtual-network/diagnose-network-traffic-filter-problem.md).
- Check the lab's default storage account. The default storage account is the first storage account created when the lab was created. The name usually starts with the letter "a" and ends with a multi-digit number, such as a\<labname>#.
To troubleshoot connectivity issues to the Azure Storage account:
1. On the storage account **Overview** page, select **Firewalls and virtual networks** in the left navigation.
1. Ensure that **Firewalls and virtual networks** is set to **All networks**. Or, if the **Selected networks** option is selected, make sure the lab's virtual networks used to create VMs are added to the list.
-For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).
+For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
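You can make the same checks with Azure PowerShell. In this hedged sketch the names are placeholders; a `DefaultAction` of `Allow` corresponds to the **All networks** setting.

```powershell
# Inspect the storage account's current network rules.
Get-AzStorageAccountNetworkRuleSet -ResourceGroupName '<lab-resource-group>' `
    -Name '<lab-storage-account>'

# Alternatively, allow a specific lab subnet instead of all networks.
$subnetId = '/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network' +
            '/virtualNetworks/<lab-vnet>/subnets/<lab-subnet>'
Add-AzStorageAccountNetworkRule -ResourceGroupName '<lab-resource-group>' `
    -Name '<lab-storage-account>' -VirtualNetworkResourceId $subnetId
```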
## Troubleshoot artifact failures from the lab VM
You can connect to the lab VM where the artifact failed, and investigate the iss
1. Open and inspect the *STATUS* file to view the error.
-For instructions on finding the log files on a **Linux** VM, see [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](/azure/virtual-machines/extensions/custom-script-linux#troubleshooting).
+For instructions on finding the log files on a **Linux** VM, see [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](../virtual-machines/extensions/custom-script-linux.md#troubleshooting).
### Check the VM Agent
-Ensure that the [Azure Virtual Machine Agent (VM Agent)](/azure/virtual-machines/extensions/agent-windows) is installed and ready.
+Ensure that the [Azure Virtual Machine Agent (VM Agent)](../virtual-machines/extensions/agent-windows.md) is installed and ready.
-When the VM first starts, or when the CSE first installs to serve the request to apply artifacts, the VM might need to either upgrade the VM Agent or wait for the VM Agent to initialize. The VM Agent might depend on services that take a long time to initialize. For further troubleshooting, see [Azure Virtual Machine Agent overview](/azure/virtual-machines/extensions/agent-windows).
+When the VM first starts, or when the CSE first installs to serve the request to apply artifacts, the VM might need to either upgrade the VM Agent or wait for the VM Agent to initialize. The VM Agent might depend on services that take a long time to initialize. For further troubleshooting, see [Azure Virtual Machine Agent overview](../virtual-machines/extensions/agent-windows.md).
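A quick way to rule out an agent problem on a Windows lab VM is to check the agent service from inside the VM. A minimal sketch, using the service name from standard Windows VM Agent installs:

```powershell
# Run inside the VM: confirm the Azure VM Agent service exists and is running.
Get-Service -Name 'WindowsAzureGuestAgent' | Select-Object Name, Status, StartType
```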
To verify if the artifact appeared to stop responding because of the VM Agent:
To verify if the artifact appeared to stop responding because of the VM Agent:
In the previous example, the VM Agent took 10 minutes and 20 seconds to start. The cause was the OOBE service taking a long time to start.
-For general information about Azure extensions, see [Azure virtual machine extensions and features](/azure/virtual-machines/extensions/overview).
+For general information about Azure extensions, see [Azure virtual machine extensions and features](../virtual-machines/extensions/overview.md).
### Investigate script issues
If you need more help, try one of the following support channels:
- Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/).
- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums).
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
+- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
This article shows you how to create an Azure DevTest Labs virtual machine (VM) by using Azure PowerShell.
You need the following prerequisites to work through this article:
- Access to a lab in DevTest Labs. [Create a lab](devtest-lab-create-lab.md), or use an existing lab.
-- Azure PowerShell. [Install Azure PowerShell](/powershell/azure/install-az-ps), or [use Azure Cloud Shell](/azure/cloud-shell/quickstart-powershell) in the Azure portal.
+- Azure PowerShell. [Install Azure PowerShell](/powershell/azure/install-az-ps), or [use Azure Cloud Shell](../cloud-shell/quickstart-powershell.md) in the Azure portal.
## PowerShell VM creation script
Set-AzResource -ResourceId $VmResourceId -Properties $VmProperties -Force
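The `Set-AzResource` call above takes a DevTest Labs VM resource ID. A minimal sketch of assembling one, with hypothetical lab, resource group, and VM names:

```powershell
# DevTest Labs VMs are child resources of the lab, so the ID follows this shape.
$subscriptionId = (Get-AzContext).Subscription.Id
$labRg   = 'MyLabResourceGroup'   # hypothetical
$labName = 'MyLab'                # hypothetical
$vmName  = 'MyVm'                 # hypothetical
$VmResourceId = "/subscriptions/$subscriptionId/resourcegroups/$labRg/providers/microsoft.devtestlab/labs/$labName/virtualmachines/$vmName"
```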
## Next steps
-[Az.DevTestLabs PowerShell reference](/powershell/module/az.devtestlabs/)
+[Az.DevTestLabs PowerShell reference](/powershell/module/az.devtestlabs/)
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-storage.md
Azure Storage encrypts lab data with a Microsoft-managed key. Optionally, you can manage encryption with your own customer-managed keys.
For more information and instructions on configuring customer-managed keys for Azure Storage encryption, see:
-- [Use customer-managed keys with Azure Key Vault to manage Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview)
-- [Configure encryption with customer-managed keys stored in Azure Key Vault](/azure/storage/common/customer-managed-keys-configure-key-vault)
+- [Use customer-managed keys with Azure Key Vault to manage Azure Storage encryption](../storage/common/customer-managed-keys-overview.md)
+- [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md)
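As a rough illustration of the configuration flow those articles cover, here's a hedged sketch that points a storage account at a Key Vault key. All names are hypothetical, and it assumes the account's managed identity already has wrap/unwrap access to the key.

```powershell
# Switch the storage account to a customer-managed key (hypothetical names).
Set-AzStorageAccount -ResourceGroupName 'MyLabRG' -Name 'mylabstorage' `
    -KeyvaultEncryption `
    -KeyName 'lab-storage-key' `
    -KeyVersion '<key-version>' `
    -KeyVaultUri 'https://mylabvault.vault.azure.net'
```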
## Next steps
-For more information about managing Azure Storage, see [Optimize costs by automatically managing the data lifecycle](../storage/blobs/lifecycle-management-overview.md).
-
+For more information about managing Azure Storage, see [Optimize costs by automatically managing the data lifecycle](../storage/blobs/lifecycle-management-overview.md).
devtest-labs Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/network-isolation.md
Last updated 03/21/2022
This article walks you through creating a network-isolated lab in Azure DevTest Labs.
-By default, Azure DevTest Labs creates a new [Azure virtual network](/azure/virtual-network/virtual-networks-overview) for each lab. The virtual network acts as a security boundary to isolate lab resources from the public internet. To ensure lab resources follow organizational networking policies, you can use several other networking options:
+By default, Azure DevTest Labs creates a new [Azure virtual network](../virtual-network/virtual-networks-overview.md) for each lab. The virtual network acts as a security boundary to isolate lab resources from the public internet. To ensure lab resources follow organizational networking policies, you can use several other networking options:
- Isolate all lab [virtual machines (VMs)](devtest-lab-configure-vnet.md) and [environments](connect-environment-lab-virtual-network.md) in a pre-existing virtual network that you select.
- Join an Azure virtual network to an on-premises network, to securely connect to on-premises resources. For more information, see [DevTest Labs enterprise reference architecture: Connectivity components](devtest-lab-reference-architecture.md#connectivity-components).
If you enabled network isolation for a virtual network other than the default, c
Azure Storage now allows inbound connections from the added virtual network, which enables the lab to operate successfully in a network isolated mode.
-You can automate these steps with PowerShell or Azure CLI to configure network isolation for multiple labs. For more information, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).
+You can automate these steps with PowerShell or Azure CLI to configure network isolation for multiple labs. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
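A hedged sketch of that automation for a single lab follows; the names are hypothetical, and the subnet needs the Microsoft.Storage service endpoint enabled first.

```powershell
# Add the lab subnet to the storage account's allowed virtual networks.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'MyLabRG' -Name 'MyLabVnet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'MyLabSubnet'
Add-AzStorageAccountNetworkRule -ResourceGroupName 'MyLabRG' -Name 'mylabstorage' `
    -VirtualNetworkResourceId $subnet.Id
```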
### Configure the endpoint for the lab key vault
Here are some things to remember when using a lab in a network isolated mode:
The lab owner must explicitly enable access to a network isolated lab's storage account from an allowed endpoint. Actions like uploading a VHD to the storage account for creating custom images require this access. You can enable access by creating a lab VM, and securely accessing the lab's storage account from that VM.
-For more information, see [Connect to a storage account using an Azure Private Endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal).
+For more information, see [Connect to a storage account using an Azure Private Endpoint](../private-link/tutorial-private-endpoint-storage-portal.md).
### Provide storage account to export lab usage data
For more information, see [Export or delete personal data from Azure DevTest Lab
Enabling the key vault service endpoint affects only the firewall. Make sure to configure the appropriate key vault access permissions in the key vault **Access policies** section.
-For more information, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy).
+For more information, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md).
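For example, a small sketch that grants a caller secret permissions; the vault name and object ID are placeholders.

```powershell
# Grant get/list/set on secrets; scope permissions to what the caller needs.
Set-AzKeyVaultAccessPolicy -VaultName 'mylabvault' `
    -ObjectId '<user-or-service-principal-object-id>' `
    -PermissionsToSecrets get, list, set
```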
## Next steps
- [Azure Resource Manager (ARM) templates in Azure DevTest Labs](devtest-lab-use-arm-and-powershell-for-lab-resources.md)
- [Manage Azure DevTest Labs storage accounts](encrypt-storage.md)
-- [Store secrets in a key vault in Azure DevTest Labs](devtest-lab-store-secrets-in-key-vault.md)
+- [Store secrets in a key vault in Azure DevTest Labs](devtest-lab-store-secrets-in-key-vault.md)
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/start-machines-use-automation-runbooks.md
The DevTest Labs [autostart](devtest-lab-set-lab-policy.md#set-autostart) feature starts lab VMs automatically, but doesn't let you specify a startup order. To start VMs in a specific order, use an Azure Automation PowerShell runbook as this article describes. To prepare:
- [Create and apply a tag](devtest-lab-add-tag.md) called **StartupOrder** to all lab VMs with an appropriate startup value, 0 through 10. Designate any machines that don't need starting as -1.
-- Create an Azure Automation account by following instructions in [Create a standalone Azure Automation account](/azure/automation/automation-create-standalone-account). Choose the **Run As Accounts** option when you create the account.
+- Create an Azure Automation account by following instructions in [Create a standalone Azure Automation account](../automation/automation-create-standalone-account.md). Choose the **Run As Accounts** option when you create the account.
## Create the PowerShell runbook
While ($current -le 10) {
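The excerpt above shows only the loop header. A minimal sketch of how such an ordering loop could look, assuming the **StartupOrder** tag convention from the prerequisites (an illustration, not the article's exact runbook):

```powershell
# Start lab VMs in StartupOrder tag order, lowest value first.
# VMs tagged -1 never match a loop value, so they're skipped.
$current = 0
While ($current -le 10) {
    $vms = Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs/virtualmachines' `
        -TagName 'StartupOrder' -TagValue "$current"
    foreach ($vm in $vms) {
        Invoke-AzResourceAction -ResourceId $vm.ResourceId -Action 'Start' -Force
    }
    $current++
}
```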
- [What is Azure Automation?](/azure/automation/automation-intro)
- [Start up lab virtual machines automatically](devtest-lab-auto-startup-vm.md)
-- [Use command-line tools to start and stop Azure DevTest Labs virtual machines](use-command-line-start-stop-virtual-machines.md)
+- [Use command-line tools to start and stop Azure DevTest Labs virtual machines](use-command-line-start-stop-virtual-machines.md)
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
This article shows how to set up an application for testing from an Azure DevTest Labs VM.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A Windows-based [DevTest Labs VM](devtest-lab-add-vm.md) to use for testing the app.
- [Visual Studio](https://visualstudio.microsoft.com/free-developer-offers/) installed on a different workstation.
-- A [file share](/azure/storage/files/storage-how-to-create-file-share) created in your lab's [Azure Storage Account](encrypt-storage.md).
+- A [file share](../storage/files/storage-how-to-create-file-share.md) created in your lab's [Azure Storage Account](encrypt-storage.md).
+- The [file share mounted](../storage/files/storage-how-to-use-files-windows.md#mount-the-azure-file-share) to your Visual Studio workstation, and to the lab VM you want to use for testing.
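As a sketch of the file share prerequisite, with hypothetical storage account and share names:

```powershell
# Create the share in the lab's storage account. Mount it afterward, for
# example against \\mylabstorage.file.core.windows.net\apptest.
$ctx = (Get-AzStorageAccount -ResourceGroupName 'MyLabRG' -Name 'mylabstorage').Context
New-AzStorageShare -Name 'apptest' -Context $ctx
```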
## Publish your app from Visual Studio
See the following articles to learn how to use VMs in a lab.
- [Add a VM to a lab](devtest-lab-add-vm.md)
- [Restart a lab VM](devtest-lab-restart-vm.md)
-- [Resize a lab VM](devtest-lab-resize-vm.md)
+- [Resize a lab VM](devtest-lab-resize-vm.md)
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
In the [next tutorial](tutorial-use-custom-lab.md), lab users, such as developers and testers, learn how to claim and connect to lab VMs.
## Prerequisite
-- To create a lab, you need at least [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role in an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- To create a lab, you need at least [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- To add users to a lab, you must have [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the subscription the lab is in.
+- To add users to a lab, you must have [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner) role in the subscription the lab is in.
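To confirm you hold one of these roles before starting, a small sketch (the sign-in name is a placeholder):

```powershell
# List the relevant role assignments for a user across the subscription.
Get-AzRoleAssignment -SignInName 'user@contoso.com' |
    Where-Object { $_.RoleDefinitionName -in 'Contributor', 'Owner', 'User Access Administrator' } |
    Select-Object RoleDefinitionName, Scope
```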
## Create a lab
From the lab **Overview** page, you can select **Claimable virtual machines** in the left navigation.
## Add a user to the DevTest Labs User role
-To add users to a lab, you must be a [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) of the subscription the lab is in. For more information, see [Add lab owners, contributors, and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
+To add users to a lab, you must be a [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner) of the subscription the lab is in. For more information, see [Add lab owners, contributors, and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
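The same assignment can be scripted; a hedged sketch with placeholder user and resource group names:

```powershell
# Grant the DevTest Labs User role at the lab's resource group scope.
New-AzRoleAssignment -SignInName 'labuser@contoso.com' `
    -RoleDefinitionName 'DevTest Labs User' `
    -ResourceGroupName 'MyLabResourceGroup'
```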
1. On the lab's **Overview** page, under **Settings**, select **Configuration and policies**.
If you created a resource group for the lab, you can now delete that resource group.
To learn how to access the lab and VMs as a lab user, go on to the next tutorial:
> [!div class="nextstepaction"]
-> [Tutorial: Access the lab](tutorial-use-custom-lab.md)
+> [Tutorial: Access the lab](tutorial-use-custom-lab.md)
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md
In this tutorial, you learn how to:
## Prerequisites
-You need at least [DevTest Labs User](/azure/role-based-access-control/built-in-roles#devtest-labs-user) access to the lab created in [Tutorial: Set up a lab in Azure DevTest Labs](tutorial-create-custom-lab.md), or to another lab that has a claimable VM.
+You need at least [DevTest Labs User](../role-based-access-control/built-in-roles.md#devtest-labs-user) access to the lab created in [Tutorial: Set up a lab in Azure DevTest Labs](tutorial-create-custom-lab.md), or to another lab that has a claimable VM.
The owner or administrator of the lab can give you the URL to access the lab in the Azure portal, and the username and password to access the lab VM.
To connect to a Windows machine through Remote Desktop Protocol (RDP), follow these steps:
:::image type="content" source="./media/tutorial-use-custom-lab/remote-computer-verification.png" alt-text="Screenshot of remote computer verification.":::
-Once you connect to the VM, you can use it to do your work. You have [Owner](/azure/role-based-access-control/built-in-roles#owner) role on all lab VMs you claim or create, unless you unclaim them.
+Once you connect to the VM, you can use it to do your work. You have [Owner](../role-based-access-control/built-in-roles.md#owner) role on all lab VMs you claim or create, unless you unclaim them.
## Unclaim a lab VM
When you're done using a VM, you can delete it. Or, the lab owner can delete the
## Next steps
-In this tutorial, you learned how to claim and connect to claimable VMs in Azure DevTest Labs. To create your own lab VMs, see [Create lab virtual machines in Azure DevTest Labs](devtest-lab-add-vm.md).
+In this tutorial, you learned how to claim and connect to claimable VMs in Azure DevTest Labs. To create your own lab VMs, see [Create lab virtual machines in Azure DevTest Labs](devtest-lab-add-vm.md).
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md
When you create an environment, DevTest Labs can replace the `$(LabSubnetId)` token with the lab subnet ID.
### Use nested templates
-DevTest Labs supports [nested ARM templates](/azure/azure-resource-manager/templates/linked-templates). To use `_artifactsLocation` and `_artifactsLocationSasToken` tokens to create a URI to a nested ARM template, see [Deploy DevTest Labs environments by using nested templates](deploy-nested-template-environments.md). For more information, see the **Deployment artifacts** section of the [Azure Resource Manager Best Practices Guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/best-practices.md#deployment-artifacts-nested-templates-scripts).
+DevTest Labs supports [nested ARM templates](../azure-resource-manager/templates/linked-templates.md). To use `_artifactsLocation` and `_artifactsLocationSasToken` tokens to create a URI to a nested ARM template, see [Deploy DevTest Labs environments by using nested templates](deploy-nested-template-environments.md). For more information, see the **Deployment artifacts** section of the [Azure Resource Manager Best Practices Guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/best-practices.md#deployment-artifacts-nested-templates-scripts).
## Next steps
- [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md)
- [Create an environment with a self-contained Service Fabric cluster in Azure DevTest Labs](create-environment-service-fabric-cluster.md)
- [Connect an environment to your lab's virtual network in Azure DevTest Labs](connect-environment-lab-virtual-network.md)
-- [Integrate environments into your Azure DevOps CI/CD pipelines](integrate-environments-devops-pipeline.md)
-
+- [Integrate environments into your Azure DevOps CI/CD pipelines](integrate-environments-devops-pipeline.md)
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
This article shows how to set up a working data history connection between Azure Digital Twins and Azure Data Explorer. Setting up the connection requires:
* an [Event Hubs](../event-hubs/event-hubs-about.md) namespace containing an event hub
* an [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) cluster containing a database
-It also contains a sample twin graph and telemetry scenario that you can use to see the historized twin updates in Azure Data Explorer.
+It also contains a sample twin graph that you can use to see the historized twin property updates in Azure Data Explorer.
>[!NOTE]
>You can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the REST APIs. That process isn't shown in this article.
After setting up the data history connection, you can optionally remove the role
Now that your data history connection is set up, you can test it with data from your digital twins.
-If you already have twins in your Azure Digital Twins instance that are receiving telemetry updates, you can skip this section and visualize the results using your own resources.
+If you already have twins in your Azure Digital Twins instance that are receiving property updates, you can skip this section and visualize the results using your own resources.
-Otherwise, continue through this section to set up a sample graph containing twins that can receive telemetry updates.
+Otherwise, continue through this section to set up a sample graph containing twins that receives twin property updates.
-You can set up a sample graph for this scenario using the **Azure Digital Twins Data Simulator**. The Azure Digital Twins Data Simulator continuously pushes telemetry to several twins in an Azure Digital Twins instance.
+You can set up a sample graph for this scenario using the **Azure Digital Twins Data Simulator**. The Azure Digital Twins Data Simulator continuously pushes property updates to several twins in an Azure Digital Twins instance.
### Create a sample graph
-You can use the **Azure Digital Twins Data Simulator** to provision a sample twin graph and push telemetry data to it. The twin graph created here models pasteurization processes for a dairy company.
+You can use the **Azure Digital Twins Data Simulator** to provision a sample twin graph and push property updates to it. The twin graph created here models pasteurization processes for a dairy company.
Start by opening the [Azure Digital Twins Data Simulator](https://explorer.digitaltwins.azure.net/tools/data-pusher) web application in your browser.
event-grid Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/transition.md
Title: Transition from Event Grid on Azure IoT Edge to Azure IoT Edge
-description: This article explains transition from Event Grid on Azure IoT Edge to Azure IoT Edge MQTT Broker or IoT Hub message routing.
+description: This article explains transition from Event Grid on Azure IoT Edge to Azure IoT Edge Hub module in Azure IoT Edge runtime.
Previously updated : 02/16/2022 Last updated : 04/13/2022
On March 31, 2023, Event Grid on Azure IoT Edge will be retired, so make sure to transition to IoT Edge native capabilities prior to that date.
-## Why are we retiring?
-There are multiple reasons for deciding to retire Event Grid on IoT Edge, which is currently in Preview, in March 2023.
+## Why are we retiring?
-- Event Grid has been evolving in the cloud native space to provide more robust capabilities not only in Azure but also in on-prem scenarios with [Kubernetes with Azure Arc](../kubernetes/overview.md).
-- We've seen an increase of adoption of MQTT brokers in the IoT space; this adoption has been the motivation to allow IoT Edge team to build a new native MQTT broker that provides a better integration for pub/sub messaging scenarios. With the new MQTT broker provided natively on IoT Edge, you'll be able to connect to this broker, publish, and subscribe to messages over user-defined topics, and use IoT Hub messaging primitives. The IoT Edge MQTT broker is built in the IoT Edge hub.
+There's one major reason for deciding to retire Event Grid on IoT Edge, which is currently in Preview, in March 2023: Event Grid has been evolving in the cloud native space to provide more robust capabilities not only in Azure but also in on-prem scenarios with [Kubernetes with Azure Arc](../kubernetes/overview.md).
-Here's the list of the features that will be removed with the retirement of Event Grid on Azure IoT Edge and a list of the new IoT Edge native capabilities.
-
-| Event Grid on Azure IoT Edge | MQTT broker on Azure IoT Edge |
+| Event Grid on Azure IoT Edge | Azure IoT Edge Hub |
| - | -- |
-| - Publishing and subscribing to events locally/cloud<br/>- Forwarding events to Event Grid<br/>- Forwarding events to IoT Hub<br/>- React to Blob Storage events locally | - Connectivity to IoT Edge hub<br/>- Publish and subscribe on user-defined topics<br/>- Publish and subscribe on IoT Hub topics<br/>- Publish and subscribe between MQTT brokers |
-
+| - Publishing and subscribing to events locally/cloud<br/>- Forwarding events to Event Grid<br/>- Forwarding events to IoT Hub<br/>- React to Blob Storage events locally | - Connectivity to Azure IoT Hub<br/>- Route messages between modules or devices locally<br/>- Offline support<br/>- Message filtering |
## How to transition to Azure IoT Edge features
The following table highlights the key differences during this transition.
| Event Grid on Azure IoT Edge | Azure IoT Edge |
| - | -- |
-| Publish, subscribe and forward events locally or cloud | You can use Azure IoT Edge MQTT broker to publish and subscribe messages. To learn how to connect to this broker, publish and subscribe to messages over user-defined topics, and use IoT Hub messaging primitives, see [publish and subscribe with Azure IoT Edge](../../iot-edge/how-to-publish-subscribe.md). The IoT Edge MQTT broker is built in the IoT Edge hub. For more information, see [the brokering capabilities of the IoT Edge hub](../../iot-edge/iot-edge-runtime.md). </br> </br> If you're subscribing to IoT Hub, it's possible to create an event to publish to Event Grid if you need. For details, see [Azure IoT Hub and Event Grid](../../iot-hub/iot-hub-event-grid.md). |
-| Forward events to IoT Hub | You can use IoT Hub message routing to send device-cloud messages to different endpoints. For details, see [Understand Azure IoT Hub message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). |
-| React to Blob Storage events on IoT Edge (Preview) | You can use Azure Function Apps to react to blob storage events on cloud when a blob is created or updated. For more information, see [Azure Blob storage trigger for Azure Functions](../../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../../iot-edge/tutorial-deploy-function.md). Blob triggers in IoT Edge blob storage module aren't supported. |
+| Publish, subscribe and forward events locally or cloud | Use the message routing feature in IoT Edge Hub to facilitate local and cloud communication. It enables device-to-module, module-to-module, and device-to-device communications by brokering messages to keep devices and modules independent from each other. To learn more, see [using routing for IoT Edge hub](../../iot-edge/iot-edge-runtime.md#using-routing). </br> </br> If you're subscribing to IoT Hub, it's possible to create an event to publish to Event Grid if you need. For details, see [Azure IoT Hub and Event Grid](../../iot-hub/iot-hub-event-grid.md). |
+| Forward events to IoT Hub | Use IoT Edge Hub to optimize connections to send messages to the cloud with offline support. For details, see [IoT Edge Hub cloud communication](../../iot-edge/iot-edge-runtime.md#using-routing). |
+| React to Blob Storage events on IoT Edge (Preview) | You can use Azure Function Apps to react to blob storage events on cloud when a blob is created or updated. For more information, see [Azure Blob storage trigger for Azure Functions](../../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../../iot-edge/tutorial-deploy-function.md). Blob triggers in IoT Edge blob storage module aren't supported. |
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 03/30/2022 Last updated : 04/19/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Standard has the following known issues:
|NAT rules with ports between 64000 and 65535 are unsupported|Azure Firewall allows any port in the 1-65535 range in network and application rules, however NAT rules only support ports in the 1-63999 range.|This is a current limitation.|
|Configuration updates may take five minutes on average|An Azure Firewall configuration update can take three to five minutes on average, and parallel updates aren't supported.|A fix is being investigated.|
|Azure Firewall uses SNI TLS headers to filter HTTPS and MSSQL traffic|If browser or server software doesn't support the Server Name Indicator (SNI) extension, you can't connect through Azure Firewall.|If browser or server software doesn't support SNI, then you may be able to control the connection using a network rule instead of an application rule. See [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication) for software that supports SNI.|
-|Start/Stop doesn't work with a firewall configured in forced-tunnel mode|Start/stop doesn't work with Azure firewall configured in forced-tunnel mode. Attempting to start Azure Firewall with forced tunneling configured results in the following error:<br><br>*Set-AzFirewall: AzureFirewall FW-xx management IP configuration cannot be added to an existing firewall. Redeploy with a management IP configuration if you want to use forced tunneling support.<br>StatusCode: 400<br>ReasonPhrase: Bad Request*|Under investigation.<br><br>As a workaround, you can delete the existing firewall and create a new one with the same parameters.|
|Can't add firewall policy tags using the portal or Azure Resource Manager (ARM) templates|Azure Firewall Policy has a patch support limitation that prevents you from adding a tag using the Azure portal or ARM templates. The following error is generated: *Could not save the tags for the resource*.|A fix is being investigated. Or, you can use the Azure PowerShell cmdlet `Set-AzFirewallPolicy` to update tags.|
|IPv6 not currently supported|If you add an IPv6 address to a rule, the firewall fails.|Use only IPv4 addresses. IPv6 support is under investigation.|
|Updating multiple IP Groups fails with conflict error.|When you update two or more IP Groups attached to the same firewall, one of the resources goes into a failed state.|This is a known issue/limitation. <br><br>When you update an IP Group, it triggers an update on all firewalls that the IPGroup is attached to. If an update to a second IP Group is started while the firewall is still in the *Updating* state, then the IPGroup update fails.<br><br>To avoid the failure, IP Groups attached to the same firewall must be updated one at a time. Allow enough time between updates to allow the firewall to get out of the *Updating* state.|
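For the `Set-AzFirewallPolicy` tagging workaround called out above, a hedged sketch follows; the policy name and tag are hypothetical, and the exact object shape can vary by Az.Network version.

```powershell
# Read the policy, set tags on the returned object, and write it back.
$policy = Get-AzFirewallPolicy -Name 'MyFirewallPolicy' -ResourceGroupName 'MyRG'
$policy.Tag = @{ costCenter = '12345' }
Set-AzFirewallPolicy -InputObject $policy
```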
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>Cmd</kbd>-<kbd>F</kbd> (macOS) to search.
[!INCLUDE [azure-policy-reference-policies-azure-edge-hardware-center](../../../../includes/policy/reference/bycat/policies-azure-edge-hardware-center.md)]
-## Azure Purview
+## Microsoft Purview
[!INCLUDE [azure-policy-reference-policies-azure-purview](../../../../includes/policy/reference/bycat/policies-azure-purview.md)]
governance Policy Devops Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-devops-pipelines.md
For more information, see [What is Azure Pipelines?](/azure/devops/pipelines/get
and [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
## Prepare
-1. Create an [Azure Policy](/azure/governance/policy/tutorials/create-and-manage) in the Azure portal.
- There are several [predefined sample policies](/azure/governance/policy/samples/)
+1. Create an [Azure Policy](./create-and-manage.md) in the Azure portal.
+ There are several [predefined sample policies](../samples/index.md)
that can be applied to a management group, subscription, and resource group.
1. In Azure DevOps, create a release pipeline that contains at least one stage, or open an existing release pipeline.
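As context for step 1, a sketch of assigning a built-in policy with PowerShell instead of the portal; the names are hypothetical.

```powershell
# Assign a built-in policy definition at a resource group scope.
$rg  = Get-AzResourceGroup -Name 'MyResourceGroup'
$def = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
New-AzPolicyAssignment -Name 'audit-managed-disks' -Scope $rg.ResourceId -PolicyDefinition $def
```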
and [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
To learn more about the structures of policy definitions, look at this article:
> [!div class="nextstepaction"]
-> [Azure Policy definition structure](../concepts/definition-structure.md)
+> [Azure Policy definition structure](../concepts/definition-structure.md)
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.powerplatform/enterprisepolicies
- microsoft.projectbabylon/accounts
- microsoft.providerhubdevtest/regionalstresstests
-- Microsoft.Purview/Accounts (Azure Purview accounts)
+- Microsoft.Purview/Accounts (Microsoft Purview accounts)
- Microsoft.Quantum/Workspaces (Quantum Workspaces)
- Microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts)
- Microsoft.RecommendationsService/accounts/modeling (Modeling)
hdinsight Apache Spark Jupyter Notebook Kernels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-kernels.md
description: Learn about the PySpark, PySpark3, and Spark kernels for Jupyter Notebook on Apache Spark clusters in Azure HDInsight.
Previously updated : 04/24/2020 Last updated : 04/18/2022
# Kernels for Jupyter Notebook on Apache Spark clusters in Azure HDInsight
HDInsight Spark clusters provide kernels that you can use with the Jupyter Notebook on [Apache Spark](./apache-spark-overview.md) for testing your applications. A kernel is a program that runs and interprets your code. The three kernels are:
-- **PySpark** - for applications written in Python2.
+- **PySpark** - for applications written in Python2. (Applicable only for Spark 2.4 version clusters)
- **PySpark3** - for applications written in Python3.
- **Spark** - for applications written in Scala.
An Apache Spark cluster in HDInsight. For instructions, see [Create Apache Spark
:::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark.png " alt-text="Kernels for Jupyter Notebook on Spark" border="true":::
+ > [!NOTE]
+ > For Spark 3.1, only **PySpark3**, or **Spark** will be available.
+ >
+ :::image type="content" source="./media/apache-spark-jupyter-notebook-kernels/kernel-jupyter-notebook-on-spark-for-hdi-4-0.png " alt-text="Kernels for Jupyter Notebook on Spark HDI4.0" border="true":::
+
+ 4. A notebook opens with the kernel you selected.
## Benefits of using the kernels
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Title: Azure Active Directory identity configuration for Azure API for FHIR description: Learn the principles of identity, authentication, and authorization for Azure FHIR servers. -+ Last updated 02/15/2022-+ # Azure Active Directory identity configuration for Azure API for FHIR
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Title: Azure API for FHIR access token validation description: Walks through token validation and gives tips on how to troubleshoot access issues -+ Last updated 02/15/2022-+ # Azure API for FHIR access token validation
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
--++ Last updated 02/15/2022
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
--++ Last updated 02/15/2022
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
--++ Last updated 02/15/2022
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
In this article, you'll learn three ways to copy data from Azure API for FHIR to Azure Synapse Analytics.
> [!Note]
> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](https://docs.microsoft.com/azure/synapse-analytics/sql/on-demand-workspace-overview) pointing to the Parquet files.
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-pdex-tutorial.md
--++ Last updated 02/15/2022
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-plan-net.md
-+ Last updated 02/15/2022
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
--++ Last updated 02/15/2022
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Title: Executing the export by invoking $export command on Azure API for FHIR description: This article describes how to export FHIR data using $export for Azure API for FHIR-+ Last updated 02/15/2022-+ # How to export FHIR data in Azure API for FHIR
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
-+ Last updated 02/15/2022
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Title: Supported FHIR features in Azure - Azure API for FHIR description: This article explains which features of the FHIR specification that are implemented in Azure API for FHIR -+
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
Last updated 02/15/2022-+ # Defining custom search parameters for Azure API for FHIR
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Last updated 02/15/2022-+ # Running a reindex job in Azure API for FHIR
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Last updated 02/15/2022-+ # Overview of search in Azure API for FHIR
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Title: Use patient-everything in Azure API for FHIR description: This article explains how to use the Patient-everything operation in the Azure API for FHIR. -+ Last updated 02/15/2022-+ # Patient-everything in FHIR
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Last updated 02/15/2022-+ # Register a confidential client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Last updated 03/21/2022-+ # Register a public client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
Last updated 02/15/2022-+
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Last updated 03/21/2022-+ # Register a service client application in Azure Active Directory for Azure API for FHIR
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Title: Azure API for FHIR monthly releases description: This article provides details about the Azure API for FHIR monthly features and enhancements. -+
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
Last updated 02/15/2022-+ # FHIR search examples for Azure API for FHIR
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Last updated 02/15/2022-+ # Store profiles in Azure API for FHIR
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-member-match.md
--++ Last updated 02/15/2022
healthcare-apis Tutorial Web App Fhir Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-fhir-server.md
--++ Last updated 02/15/2022
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
--++ Last updated 03/22/2022
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
--++ Last updated 02/15/2022
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
--++ Last updated 02/15/2022
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
Title: Validate FHIR resources against profiles in Azure API for FHIR description: This article describes how to validate FHIR resources against profiles in Azure API for FHIR.-+ Last updated 02/15/2022-+ # Validate FHIR resources against profiles in Azure API for FHIR
healthcare-apis Azure Active Directory Identity Configuration Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md
Title: Azure Active Directory identity configuration for Azure Health Data Services for FHIR service description: Learn the principles of identity, authentication, and authorization for FHIR service -+ Last updated 03/01/2022-+ # Azure Active Directory identity configuration for FHIR service
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
--++ Last updated 03/01/2022
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
--++ Last updated 03/01/2022
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Last updated 03/01/2022-+ # Configure export settings and set up a storage account
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
The final step is to set the import configuration of the FHIR service, which con
> [!NOTE] > If you haven't assigned storage access permissions to the FHIR service, the import operations ($import) will fail.
-To specify the Azure Storage account, you need to use [Rest API](https://docs.microsoft.com/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.
+To specify the Azure Storage account, you need to use the [REST API](/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.
To get the request URL and body, browse to the Azure portal of your FHIR service. Select **Overview**, and then **JSON View**.
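One hedged way to send that update from PowerShell is shown below; the resource path, API version, and payload file are placeholders to replace with the values copied from **JSON View**.

```powershell
# PUT the service definition (including the import configuration) back to the
# resource provider; the path and api-version below are placeholders.
Invoke-AzRestMethod -Method PUT `
    -Path '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>?api-version=<api-version>' `
    -Payload (Get-Content -Path './fhir-service.json' -Raw)
```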
In this article, you've learned the FHIR service supports $import operation and
>[Configure export settings and set up a storage account](configure-export-data.md)
>[!div class="nextstepaction"]
->[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
+>[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
In this article, you'll learn three ways to copy data from the FHIR service in Azure Health Data Services to Azure Synapse Analytics.
> [!Note]
> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](https://docs.microsoft.com/azure/synapse-analytics/sql/on-demand-workspace-overview) pointing to the Parquet files.
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipel
> [!Note]
> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
In this article, you learned three different ways to copy your FHIR data into Synapse Analytics.
Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse in order to protect PHI. >[!div class="nextstepaction"]
->[Exporting de-identified data](./de-identified-export.md)
+>[Exporting de-identified data](./de-identified-export.md)
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
-+ Last updated 03/01/2022
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
--++ Last updated 03/01/2022
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-plan-net.md
-+ Last updated 03/01/2022
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Last updated 02/15/2022-+ # How to export FHIR data
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
Title: FAQs about FHIR service in Azure Health Data Services description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support. -+ Last updated 03/01/2022-+
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Title: Supported FHIR features in FHIR service description: This article explains which features of the FHIR specification that are implemented in Azure Health Data Services -+
healthcare-apis Fhir Service Access Token Validation Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-access-token-validation-old.md
Title: FHIR service access token validation description: Access token validation procedure and troubleshooting guide for FHIR service -+ Last updated 03/01/2022-+ # FHIR service access token validation
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
Last updated 03/01/2022-+ # Defining custom search parameters
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Last updated 03/01/2022-+ # Running a reindex job
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Title: Overview of FHIR search in Azure Health Data Services description: This article describes an overview of FHIR search that is implemented in Azure Health Data Services-+ Last updated 03/01/2022-+ # Overview of FHIR search
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/patient-everything.md
Title: Patient-everything - Azure Health Data Services description: This article explains how to use the Patient-everything operation. -+ Last updated 03/01/2022-+ # Using Patient-everything in FHIR service
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
Last updated 03/01/2022-+ # FHIR search examples
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
Last updated 03/01/2022-+ # Store profiles in FHIR service
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/tutorial-member-match.md
--++ Last updated 03/01/2022
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
Title: Validate FHIR resources against profiles in Azure Health Data Services description: This article describes how to validate FHIR resources against profiles in the FHIR service.-+ Last updated 03/01/2022-+ # Validate FHIR resources against profiles in Azure Health Data Services
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
The deployment script allows you to select which set of components to deploy.
- [Storage](https://azure.microsoft.com/product-categories/storage/) for Event Hubs checkpointing
- Standard dependencies: Minimum +
  - [SignalR Service](https://azure.microsoft.com/services/signalr-service/) used to scale out asynchronous API notifications, Azure AD app registrations,
- - [Device Provisioning Service](https://docs.microsoft.com/azure/iot-dps/) used for deploying and provisioning the simulation gateways
+ - [Device Provisioning Service](../iot-dps/index.yml) used for deploying and provisioning the simulation gateways
  - [Time Series Insights](https://azure.microsoft.com/services/time-series-insights/)
  - Workbook, Log Analytics, [Application Insights](https://azure.microsoft.com/services/monitor/) for operations monitoring
- Micro
References:
Now that you have deployed the IIoT Platform, you can learn how to customize configuration of the components: > [!div class="nextstepaction"]
-> [Customize the configuration of the components](tutorial-configure-industrial-iot-components.md)
+> [Customize the configuration of the components](tutorial-configure-industrial-iot-components.md)
iot-central Concepts Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-private-endpoints.md
The standard IoT Central endpoints for device connectivity are accessible using
Use private endpoints to limit and secure device connectivity to your IoT Central application and only allow access through your private virtual network.
-Private endpoints use private IP addresses from a virtual network address space to connect your devices privately to your IoT Central application. Network traffic between devices on the virtual network and the IoT platform traverses the virtual network and a private link on the [Microsoft backbone network](/azure/networking/microsoft-global-network), eliminating exposure on the public internet.
+Private endpoints use private IP addresses from a virtual network address space to connect your devices privately to your IoT Central application. Network traffic between devices on the virtual network and the IoT platform traverses the virtual network and a private link on the [Microsoft backbone network](../../networking/microsoft-global-network.md), eliminating exposure on the public internet.
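For orientation, a hedged sketch of creating such a private endpoint with PowerShell; the group ID and all names are assumptions to adapt to your application.

```powershell
# Create a private endpoint in the device subnet that targets the IoT Central app.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'MyRG' -Name 'MyVnet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'devices'
$conn = New-AzPrivateLinkServiceConnection -Name 'iotc-connection' `
    -PrivateLinkServiceId '<iot-central-app-resource-id>' `
    -GroupId 'iotApp'  # assumed group ID; list valid IDs with Get-AzPrivateLinkResource
New-AzPrivateEndpoint -Name 'iotc-private-endpoint' -ResourceGroupName 'MyRG' `
    -Location 'eastus' -Subnet $subnet -PrivateLinkServiceConnection $conn
```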
To learn more about Azure Virtual Networks, see:
-- [Azure Virtual Networks](/azure/virtual-network/virtual-networks-overview)
-- [Azure private endpoints](/azure/private-link/private-endpoint-overview)
-- [Azure private links](/azure/private-link/private-link-overview)
+- [Azure Virtual Networks](../../virtual-network/virtual-networks-overview.md)
+