Updates from: 03/16/2022 02:16:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Copy the updated text from Step 3 into the "Request Body".
Click on "Run Query".
-You should get the output as "Success – Status Code 204".
+You should get the output as "Success – Status Code 204". If you receive an error, you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by clicking on the *Modify permissions* tab in Graph Explorer.
![PUT response](./media/skip-out-of-scope-deletions/skip-06.png)
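If you prefer to script the call rather than use Graph Explorer, the following is a hedged PowerShell sketch. The endpoint URL is an assumption based on this article's flow, and the request body should be the updated text from Step 3; verify both against the earlier steps.

```powershell
# Hedged sketch only: the endpoint is an assumption based on this article's flow.
$token = "<access token with ServicePrincipalEndpoint Read/Write permission>"
$spId  = "<service principal object ID>"
$body  = Get-Content .\request-body.json -Raw   # the updated text from Step 3

Invoke-RestMethod -Method Put `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/secrets" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
# A successful call returns Status Code 204 with no response body.
```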
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The default token validation code is configured to use an Azure AD token and req
After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *TokenController.cs* in **SCIMReferenceCode** > **Microsoft.SCIM.WebHostSample** > **Controllers**.
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
> [!NOTE]
> You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
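As a hedged illustration, a quick local smoke test of those endpoints might look like the following PowerShell sketch; the host, port, and token are placeholders, and HTTP is acceptable only for local testing, as noted above.

```powershell
# Placeholders: adjust the host/port and supply a valid bearer token.
$scimHost = "http://localhost:5000"
$headers  = @{ Authorization = "Bearer <token>" }

# List users and groups through the standard SCIM (RFC 7644) resource endpoints.
Invoke-RestMethod -Uri "$scimHost/scim/Users"  -Headers $headers
Invoke-RestMethod -Uri "$scimHost/scim/Groups" -Headers $headers

# Filtering example from the SCIM spec: look up a single user by userName.
$filter = [uri]::EscapeDataString('userName eq "bjensen"')
Invoke-RestMethod -Uri "$scimHost/scim/Users?filter=$filter" -Headers $headers
```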
To develop a SCIM-compliant user and group endpoint with interoperability for a
> [!div class="nextstepaction"]
> [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Connectors establish their connections based on principles for high availability
![Diagram showing connections between users and connectors](media/application-proxy-high-availability-load-balancing/application-proxy-connections.png)

1. A user on a client device tries to access an on-premises application published through Application Proxy.
-2. The request goes through an Azure Load Balancer to determine which Application Proxy service instance should take the request. Per region, there are tens of instances available to accept the request. This method helps to evenly distribute the traffic across the service instances.
+2. The request goes through an Azure Load Balancer to determine which Application Proxy service instance should take the request. There are tens of instances available to accept the requests for all traffic in the region. This method helps to evenly distribute the traffic across the service instances.
3. The request is sent to [Service Bus](../../service-bus-messaging/index.yml).
4. Service Bus signals to an available connector. The connector then picks up the request from Service Bus.

- In step 2, requests go to different Application Proxy service instances, so connections are more likely to be made with different connectors. As a result, connectors are almost evenly used within the group.
Refer to your software vendor's documentation to understand the load-balancing r
- [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md)
- [Enable Conditional Access](./application-proxy-integrate-with-sharepoint-server.md)
- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
- [Learn how Azure AD architecture supports high availability](../fundamentals/active-directory-architecture.md)
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
| JavaScript |ADAL.js |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[Single-page app](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi) | |
| iOS, macOS |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc/releases) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc) |[iOS app](../develop/quickstart-v2-ios.md) | [Reference](http://cocoadocs.org/docsets/ADAL/2.5.1/)|
| Android |ADAL |[Maven](https://search.maven.org/search?q=g:com.microsoft.aad+AND+a:adal&core=gav) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-android) |[Android app](../develop/quickstart-v2-android.md) | [JavaDocs](https://javadoc.io/doc/com.microsoft.aad/adal/)|
-| Node.js |ADAL |[npm](https://www.npmjs.com/package/adal-node) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-nodejs) | [Node.js web app](https://github.com/Azure-Samples/active-directory-node-webapp-openidconnect)|[Reference](/javascript/api/overview/azure/activedirectory) |
+| Node.js |ADAL |[npm](https://www.npmjs.com/package/adal-node) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-nodejs) | [Node.js web app](https://github.com/Azure-Samples/active-directory-node-webapp-openidconnect)|[Reference](/javascript/api/overview/azure/active-directory) |
| Java |ADAL4J |[Maven](https://search.maven.org/#search%7Cga%7C1%7Ca%3Aadal4j%20g%3Acom.microsoft.azure) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-java) |[Java web app](https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect) |[Reference](https://javadoc.io/doc/com.microsoft.azure/adal4j) |
| Python |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) |[Python web app](https://github.com/Azure-Samples/active-directory-python-webapp-graphapi) |[Reference](https://adal-python.readthedocs.io/) |
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
You can use the **Data Collectors** dashboard in CloudKnox Permissions Managemen
1. Select the ellipses **(...)** at the end of the row in the table.
1. Select **Edit Configuration**.
- The **M-CIEM Onboarding - Summary** box displays.
+ The **CloudKnox Onboarding - Summary** box displays.
1. Select **Edit** (the pencil icon) for each field you want to change.
1. Select **Verify now & save**.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy authentication doesn't support multifactor authentication (MFA). MFA is in many environments a common requirement to address identity theft.

> [!NOTE]
-> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication.
+> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what other tools Microsoft provides to accomplish this task:
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
By default the policy will provide an option to exclude the current user from th
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out[What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-you-are-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal).
## Next steps
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Conditional Access policies take effect immediately. When this happens, the admi
## B2B guests
-Most organizations have a process in place for their employees to consent to their organization's terms of use policy and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use policies, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use policy. This support is currently in preview.
+Most organizations have a process in place for their employees to consent to their organization's terms of use policy and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use policies, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use policy.
Terms of use policies will only be displayed when the user has a guest account in Azure AD. SharePoint Online currently has an [ad hoc external sharing recipient experience](/sharepoint/what-s-new-in-sharing-in-targeted-release) to share a document or a folder that doesn't require the user to have a guest account. In this case, a terms of use policy isn't displayed.
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 10/16/2020 Last updated : 03/15/2022
Organizations should avoid the following configurations:
**For all users, all cloud apps:**

- **Block access** - This configuration blocks your entire organization.
-- **Require device to be marked as compliant** - For users that have not enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you are an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
+- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
- **Require Hybrid Azure AD domain joined device** - This policy also has the potential to block access for all users in your organization if they don't have a hybrid Azure AD joined device.
-- **Require app protection policy** - This policy block access has also the potential to block access for all users in your organization if you don't have an Intune policy. If you are an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
+- **Require app protection policy** - This policy also has the potential to block access for all users in your organization if you don't have an Intune policy. If you're an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
**For all users, all cloud apps, all device platforms:**
The first way is to review the error message that appears. For problems signing
![Sign in error - compliant device required](./media/troubleshoot-conditional-access/image1.png)
-In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device do not meet that policy.
+In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device don't meet that policy.
## Azure AD sign-in events
To find out which Conditional Access policy or policies applied and why do the f
![Selecting the Conditional access filter in the sign-ins log](./media/troubleshoot-conditional-access/image3.png)

1. Once the sign-in event that corresponds to the user's sign-in failure has been found, select the **Conditional Access** tab. The Conditional Access tab will show the specific policy or policies that resulted in the sign-in interruption.
- 1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that did not meet compliance requirements.
+ 1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that didn't meet compliance requirements.
1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing.
1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event.
Selecting the ellipsis on the right side of the policy in a sign-in event brings
The left side shows the details collected at sign-in, and the right side shows whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured.
-If the information in the event isn't enough to understand the sign-in results or adjust the policy to get desired results, then a support incident may be opened. Navigate to that sign-in event's **Troubleshooting and support** tab and select **Create a new support request**.
+If the information in the event isn't enough to understand the sign-in results or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
-![The Troubleshooting and support tab of the Sign-in event](./media/troubleshoot-conditional-access/image6.png)
-
-When submitting the incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the event you're concerned about.
+If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
### Conditional Access error codes
When submitting the incident, provide the request ID and time and date from the
| 53003 | BlockedByConditionalAccess |
| 53004 | ProofUpBlockedDueToRisk |
-## What to do if you are locked out of the Azure portal?
+## What to do if you're locked out of the Azure portal?
-If you are locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
+If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
- Check if there are other administrators in your organization who aren't blocked yet. An administrator with access to the Azure portal can disable the policy that is impacting your sign-in.
- If none of the administrators in your organization can update the policy, submit a support request. Microsoft support can review and, upon confirmation, update the Conditional Access policies that are preventing access.
active-directory Consent Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md
Previously updated : 10/21/2020 Last updated : 03/14/2022

# Azure Active Directory consent framework
The following steps show you how the consent experience works for both the appli
1. Assume you have a web client application that needs to request specific permissions to access a resource/API. You'll learn how to do this configuration in the next section, but essentially the Azure portal is used to declare permission requests at configuration time. Like other configuration settings, they become part of the application's Azure AD registration:
- ![Permissions to other applications](./media/consent-framework/permissions.png)
+ :::image type="content" source="./media/consent-framework/permissions.png" alt-text="Permissions to other applications" lightbox="./media/consent-framework/permissions.png":::
1. Consider that your application's permissions have been updated, the application is running, and a user is about to use it for the first time. First, the application needs to obtain an authorization code from Azure AD's `/authorize` endpoint. The authorization code can then be used to acquire a new access and refresh token.
1. If the user is not already authenticated, Azure AD's `/authorize` endpoint prompts the user to sign in.
- ![User or administrator sign in to Azure AD](./media/consent-framework/usersignin.png)
+ :::image type="content" source="./media/consent-framework/usersignin.png" alt-text="User or administrator sign in to Azure AD":::
1. After the user has signed in, Azure AD will determine if the user needs to be shown a consent page. This determination is based on whether the user (or their organization's administrator) has already granted the application consent. If consent has not already been granted, Azure AD prompts the user for consent and displays the required permissions it needs to function. The set of permissions that are displayed in the consent dialog match the ones selected in the **Delegated permissions** in the Azure portal.
- ![Shows an example of permissions displayed in the consent dialog](./media/consent-framework/consent.png)
+ :::image type="content" source="./media/consent-framework/consent.png" alt-text="Shows an example of permissions displayed in the consent dialog":::
1. After the user grants consent, an authorization code is returned to your application, which is redeemed to acquire an access token and refresh token. For more information about this flow, see [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
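As a hedged illustration of the `/authorize` leg of this flow, the request can be composed as follows; every identifier is a placeholder from a hypothetical app registration.

```powershell
# All values are placeholders; take the real ones from your app registration.
$tenant      = "contoso.onmicrosoft.com"
$clientId    = "00000000-0000-0000-0000-000000000000"
$redirectUri = [uri]::EscapeDataString("https://localhost/callback")
$scope       = [uri]::EscapeDataString("User.Read offline_access")

$authorizeUrl = "https://login.microsoftonline.com/$tenant/oauth2/v2.0/authorize" +
    "?client_id=$clientId&response_type=code" +
    "&redirect_uri=$redirectUri&response_mode=query&scope=$scope"

# Opens the browser; after sign-in and consent, Azure AD redirects back with an
# authorization code that the app redeems at /token for access and refresh tokens.
Start-Process $authorizeUrl
```

In practice, a library such as MSAL builds and sends this request for you; the sketch only makes the consent-triggering parameters visible.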
The following steps show you how the consent experience works for both the appli
1. Go to the **API permissions** page for your application.
1. Click on the **Grant admin consent** button.
- ![Grant permissions for explicit admin consent](./media/consent-framework/grant-consent.png)
+ :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Grant permissions for explicit admin consent" lightbox="./media/consent-framework/grant-consent.png":::
> [!IMPORTANT]
> Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested.
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
Previously updated : 10/06/2021 Last updated : 03/14/2022
# Handle ITP in Safari and other browsers where third-party cookies are blocked
-Many browsers today are blocking third-party cookies - cookies on requests to domains that aren't the same as the one showing in the browser bar. This breaks the implicit flow and requires new authentication patterns to successfully sign in users. In the Microsoft identity platform, we use the authorization flow with Proof Key for Code Exchange (PKCE) and refresh tokens to keep users signed in when third-party cookies are blocked.
+Many browsers block _third-party cookies_, cookies on requests to domains other than the domain shown in the browser's address bar. This block breaks the implicit flow and requires new authentication patterns to successfully sign in users. In the Microsoft identity platform, we use the authorization flow with Proof Key for Code Exchange (PKCE) and refresh tokens to keep users signed in when third-party cookies are blocked.
## What is Intelligent Tracking Protection (ITP)?
There are two ways of accomplishing sign-in:
- When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries.
- Browsers are decreasing support for popups, so they may not be the most reliable option. User interaction with the SPA before creating the popup may be needed to satisfy browser requirements.
-> [!NOTE]
-> Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here. Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future.
+ Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here.
+
+ Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future.
-### A note on iframe apps
+### Using iframes
-A common pattern in web apps is to use an iframe to embed one app inside another. The top-level frame handles authenticating the user, and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
+A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user, and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow.
+
+Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
+
+You can achieve single sign-on between iframed and parent apps with same-origin _and_ cross-origin JavaScript script API access by passing a user (account) hint from the parent app to the iframed app. For more information, see [Using MSAL.js in iframed apps](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/iframe-usage.md) in the MSAL.js repository on GitHub.
## Security implications of refresh tokens in the browser
This limited-lifetime refresh token pattern was chosen as a balance between secu
## Next steps
-For more information about authorization code flow and Microsoft Authentication Library (MSAL) for JavaScript v2.0, see:
+For more information about authorization code flow and MSAL.js, see:
- [Authorization code flow](v2-oauth2-auth-code-flow.md). - [MSAL.js 2.0 quickstart](quickstart-v2-javascript-auth-code.md).
active-directory Support Fido2 Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/support-fido2-authentication.md
Don't use a domain hint to bypass [home-realm discovery](../../active-directory/
### Requiring specific credentials
-If you are using SAML, do not specify that a password is required [using the RequestedAuthnContext element](single-sign-on-saml-protocol.md#requestauthncontext).
+If you are using SAML, do not specify that a password is required [using the RequestedAuthnContext element](single-sign-on-saml-protocol.md#requestedauthncontext).
The RequestedAuthnContext element is optional, so to resolve this you can remove it from your SAML authentication requests. This is a general best practice, as using this element can also prevent other authentication options like multi-factor authentication from working correctly.
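For reference, a RequestedAuthnContext element of the kind to remove looks like the following fragment; the authentication context class shown is just one common example, and the namespace prefixes assume the usual `samlp`/`saml` bindings.

```xml
<!-- Remove this optional element from your SAML authentication requests. -->
<samlp:RequestedAuthnContext Comparison="exact">
    <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
    </saml:AuthnContextClassRef>
</samlp:RequestedAuthnContext>
```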
The availability of FIDO2 passwordless authentication for applications that run
## Next steps
[Passwordless authentication options for Azure Active Directory](../../active-directory/authentication/concept-authentication-passwordless.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
Before you install Azure AD Connect, there are a few things that you need.
### On-premises Active Directory

* The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
-* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2016 or later.
* The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects.
* Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*.
* We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
- Title: 'Disable PTA when using Azure AD Connect "Do not configure" | Microsoft Docs'
-description: This article describes how to disable PTA with the Azure AD Connect "do not configure" feature.
+ Title: 'Disable pass-through authentication by using Azure AD Connect or PowerShell | Microsoft Docs'
+description: This article describes how to disable pass-through authentication by using the Azure AD Connect Do Not Configure feature or by using PowerShell.
-# Disable PTA
+# Disable pass-through authentication
-To disable PTA, complete the steps that are described in [Disable PTA when using Azure AD Connect](#disable-pta-when-using-azure-ad-connect) and [Disable PTA in PowerShell](#disable-pta-in-powershell) in this article.
+In this article, you learn how to disable pass-through authentication by using Azure Active Directory (Azure AD) Connect or PowerShell.
-## Disable PTA when using Azure AD Connect
+## Prerequisites
-If you are using Pass-through Authentication with Azure AD Connect and you have it set to **"Do not configure"**, you can disable it.
+Before you begin, ensure that you have the following:
->[!NOTE]
->If you have PHS already enabled then disabling PTA will result in the tenant fallback to PHS.
+- A Windows machine with pass-through authentication agent version 1.5.1742.0 or later installed. Any earlier version might not have the requisite cmdlets for completing this operation.
-Disabling PTA can be done using the following cmdlets.
+ If you don't already have an agent, you can install it by doing the following:
-## Prerequisites
-The following prerequisites are required:
-- Any Windows machine that has the PTA agent installed.
-- Agent must be at version 1.5.1742.0 or later.
-- An Azure global administrator account in order to run the PowerShell cmdlets to disable PTA.
+ 1. Go to the [Azure portal](https://portal.azure.com).
+ 1. Download the latest Auth Agent.
+ 1. Install the feature by running either of the following:
+ * `.\AADConnectAuthAgentSetup.exe`
+ * `.\AADConnectAuthAgentSetup.exe ENVIRONMENTNAME=<identifier>`
+ > [!IMPORTANT]
+ > If you're using the Azure Government cloud, pass in the ENVIRONMENTNAME parameter with the following value:
+ >
+ >| Environment Name | Cloud |
+ >| - | - |
+ >| AzureUSGovernment | US Gov |
->[!NOTE]
-> If your agent is older then it may not have the cmdlets required to complete this operation. You can get a new agent from Azure Portal an install it on any Windows machine and provide admin credentials. (Installing the agent does not affect the PTA status in the cloud)
+- An Azure global administrator account for running the PowerShell cmdlets.
+
+## Use Azure AD Connect
-> [!IMPORTANT]
-> If you are using the Azure Government cloud then you will have to pass in the ENVIRONMENTNAME parameter with the following value.
->
->| Environment Name | Cloud |
->| - | - |
->| AzureUSGovernment | US Gov|
+If you're using pass-through authentication with Azure AD Connect and you have it set to **Do not configure**, you can disable the setting.
+>[!NOTE]
+>If you already have password hash synchronization enabled, disabling pass-through authentication will result in a tenant fallback to password hash synchronization.
-## Disable PTA in PowerShell
+## Use PowerShell
-From within a PowerShell session, use the following to disable PTA:
+In a PowerShell session, run the following cmdlets:
1. PS C:\Program Files\Microsoft Azure AD Connect Authentication Agent> `Import-Module .\Modules\PassthroughAuthPSModule`
2. `Get-PassthroughAuthenticationEnablementStatus`
3. `Disable-PassthroughAuthentication`
-## If you don't have access to an agent
-
-If you do not have an agent machine you can use following command to install an agent.
-
-1. Download the latest Auth Agent from portal.azure.com.
-2. Install the feature: `.\AADConnectAuthAgentSetup.exe` or `.\AADConnectAuthAgentSetup.exe ENVIRONMENTNAME=<identifier>`
-
## Next steps

-- [User sign-in with Azure Active Directory Pass-through Authentication](how-to-connect-pta.md)
+- [User sign-in with Azure AD pass-through authentication](how-to-connect-pta.md)
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
- Title: 'What is Azure AD Connect v2.0? | Microsoft Docs'
+ Title: 'What is Azure AD Connect V2.0? | Microsoft Docs'
description: Learn about the next version of Azure AD Connect.
# Introduction to Azure AD Connect V2.0
-Azure AD Connect was released several years ago. Since this time, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. To attempt to update all of these components individually would take time and planning.
+The first version of Azure Active Directory (Azure AD) Connect was released several years ago. Since then, we've scheduled several components of Azure AD Connect for deprecation and updated them to newer versions.
-To address this, we wanted to bundle as many of these newer components into a new, single release, so you only have to update once. This release will be Azure AD Connect V2.0. This is a new version of the same software used to accomplish your hybrid identity goals that is built using the latest foundational components.
+Making updates to all these components individually requires a lot of time and planning. To address this drawback, we've bundled many of the newer components into a new, single release, so you have to update only once. This release, Azure AD Connect V2.0, is a new version of the same software you're already using to accomplish your hybrid identity goals, but it's updated with the latest foundational components.
## What are the major changes?

### SQL Server 2019 LocalDB
-The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB. V2.0 ships with a SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
+Earlier versions of Azure AD Connect shipped with the SQL Server 2012 LocalDB feature. V2.0 ships with SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. In July 2022, SQL Server 2012 will no longer have extended support. For more information, see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
### MSAL authentication library
-The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2.0 release ships with the newer MSAL library. For more information see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+Earlier versions of Azure AD Connect shipped with the Azure Active Directory Authentication Library (ADAL). This library will be deprecated in June 2022. The V2.0 release ships with the newer Microsoft Authentication Library (MSAL). For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
-### Visual C++ Redist 14
+### Visual C++ Redistributable 14 runtime
-SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This will be installed with the Azure AD Connect V2.0 package, so you do not have to take any action for the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redistributable 14 runtime, so we've updated the C++ runtime library to use this version. The library is installed with the Azure AD Connect V2.0 package, so you don't have to take any action to get the C++ runtime update.
### TLS 1.2
-TLS1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect will only support TLS 1.2.
-All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If your server does not support TLS 1.2 you will need to enable this before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
+The Transport Layer Security (TLS) 1.0 and TLS 1.1 protocols are deemed unsafe and are being deprecated by Microsoft. Azure AD Connect V2.0 supports only TLS 1.2. All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If your server doesn't support TLS 1.2, you need to enable it before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
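If you need to enable TLS 1.2 manually, the following is a hedged registry sketch; run it elevated, reboot afterward, and confirm the exact keys against the enforcement article linked above.

```powershell
# Hedged sketch: enable TLS 1.2 for SChannel and make .NET Framework apps
# (such as Azure AD Connect) use strong crypto. Requires elevation and a reboot.
$tls12 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($role in 'Server', 'Client') {
    New-Item "$tls12\$role" -Force | Out-Null
    Set-ItemProperty -Path "$tls12\$role" -Name 'Enabled' -Value 1 -Type DWord
    Set-ItemProperty -Path "$tls12\$role" -Name 'DisabledByDefault' -Value 0 -Type DWord
}

# These .NET keys exist on servers with .NET Framework 4.x installed.
foreach ($net in 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
                 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319') {
    Set-ItemProperty -Path $net -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
}
```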
-### All binaries signed with SHA2
+### All binaries signed with SHA-2
-We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we have changed the signing of Windows updates to use the more secure SHA-2 algorithm.
+We noticed that some components have Secure Hash Algorithm 1 (SHA-1) signed binaries. We no longer support SHA-1 for downloadable binaries, and we've upgraded all binaries to SHA-2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and aren't tampered with during delivery. Because of weaknesses in the SHA-1 algorithm, and to align with industry standards, we've changed the signing of Windows updates to use the more secure SHA-2 algorithm.
-There is no action needed from your side.
+No action is required of you at this time.
-### Windows Server 2012 and Windows Server 2012 R2 are no longer supported
+### Windows Server 2012 and 2012 R2 are no longer supported
-SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since AAD Connect v2 contains SQL Server 2019 components, we no longer can support older Windows Server versions.
+SQL Server 2019 requires Windows Server 2016 or later as a server operating system. Because Azure AD Connect V2.0 contains SQL Server 2019 components, we no longer support earlier Windows Server versions.
-You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
+You can't install this version on earlier Windows Server versions. We suggest that you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
-This [article](/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
+For more information about upgrading from earlier Windows Server versions to Windows Server 2019, see [Install, upgrade, or migrate to Windows Server](/windows-server/get-started-19/install-upgrade-migrate-19).
### PowerShell 5.0
-This release of Azure AD Connect contains several cmdlets that require PowerShell 5.0, so this requirement is a new prerequisite for Azure AD Connect.
+The Azure AD Connect V2.0 release contains several cmdlets that require PowerShell 5.0 or later, so this requirement is a new prerequisite for Azure AD Connect.
-More details about PowerShell prerequisites can be found [here](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
+For more information, see [Windows PowerShell System Requirements](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
>[!NOTE]
- >PowerShell 5 is already part of Windows Server 2016 so you probably do not have to take action as long as you are on a recent Window Server version.
+ >PowerShell 5.0 is already part of Windows Server 2016, so you probably don't have to take action as long as you're using a recent Windows Server version.
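To check which version a server has, run:

```powershell
# 5.0 or later satisfies the Azure AD Connect V2.0 prerequisite.
$PSVersionTable.PSVersion
```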
## What else do I need to know?

**Why is this upgrade important for me?** </br>
-Next year several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend all customers to upgrade to this newer version as soon as they can.
+Next year, several components in your current Azure AD Connect server installations will go out of support. If you're using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. We recommend that you upgrade to this newer version as soon as possible.
-This upgrade is especially important since we have had to update our prerequisites for Azure AD Connect and you may need additional time to plan and update your servers to the newer versions of these prerequisites
+This upgrade is especially important, because we've had to update our prerequisites for Azure AD Connect. You might need additional time to plan and update your servers to the newest versions of the prerequisites.
**Is there any new functionality I need to know about?** </br>
-No – this release does not contain any new functionality. This release only contains updates of some of the foundational components on Azure AD Connect.
+No, this release doesn't contain new functionality. It contains only updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 might contain new functionality.
-**Can I upgrade from any previous version to V2.0?** </br>
-Yes – upgrades from any previous version of Azure AD Connect to Azure AD Connect V2.0 is supported. Please follow the guidance in [this article](how-to-upgrade-previous-version.md) to determine what is the best upgrade strategy for you.
+**Can I upgrade from earlier versions to V2.0?** </br>
+Yes, upgrading from earlier versions of Azure AD Connect to Azure AD Connect V2.0 is supported. To determine your best upgrade strategy, see [Azure AD Connect: Upgrade from a previous version to the latest](how-to-upgrade-previous-version.md).
**Can I export the configuration of my current server and import it in Azure AD Connect V2.0?** </br>
-Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2.0 – especially if you are also upgrading to a new operating system version. You can read more about the Import/export configuration feature and how you can use it in this [article](how-to-connect-import-export-config.md).
+Yes, and it's a great way to migrate to Azure AD Connect V2.0, especially if you're also upgrading to a new operating system version. For more information, see [Import and export Azure AD Connect configuration settings](how-to-connect-import-export-config.md).
-**I have enabled auto upgrade for Azure AD Connect – will I get this new version automatically?** </br>
-No – Azure AD Connect V2.0 will not be made available for auto upgrade at this time.
+**I have enabled the auto-upgrade feature for Azure AD Connect. Will I get this new version automatically?** </br>
+Yes. Your Azure AD Connect server will be upgraded to the latest release if you've enabled the auto-upgrade feature. Note that we have not yet released an auto-upgrade version for Azure AD Connect.
-**I am not ready to upgrade yet – how much time do I have?** </br>
-You should upgrade to Azure AD Connect V2.0 as soon as you can. **__All Azure AD Connect V1 versions will be retired on 31 August, 2022.__** For the time being we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS1.0/1.1 as these services might stop working unexpectedly after they are deprecated.
+**I am not ready to upgrade yet. How much time do I have?** </br>
+All Azure AD Connect V1 versions will be retired on August 31, 2022, so you should upgrade to Azure AD Connect V2.0 as soon as you can. For the time being, we'll continue to support earlier versions of Azure AD Connect, but it might be difficult to provide a good support experience if some Azure AD Connect components are no longer supported. This upgrade is particularly important for ADAL and TLS 1.0/1.1, because these services might stop working unexpectedly after they're deprecated.
-**I use an external SQL database and do not use SQL 2012 LocalDb – do I still have to upgrade?** </br>
-Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2.0 - the SQL 2019 drivers in Azure AD Connect V2.0 are compatible with SQL Server 2012.
+**I use an external SQL database and do not use SQL Server 2012 LocalDB. Do I still have to upgrade?** </br>
+Yes, you need to upgrade to remain in a supported state, even if you don't use SQL Server 2012, because of the TLS 1.0/1.1 and ADAL deprecation. Note that you can still use SQL Server 2012 as an external SQL database with Azure AD Connect V2.0. The SQL Server 2019 drivers in Azure AD Connect V2.0 are compatible with SQL Server 2012.
-**After the upgrade of my Azure AD Connect instance to V2.0, will the SQL 2012 components automatically get uninstalled?** </br>
-No, the upgrade to SQL 2019 does not remove any SQL 2012 components from your server. If you no longer need these components then you should follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
+**After I've upgraded my Azure AD Connect instance to V2.0, will the SQL Server 2012 components get uninstalled automatically?** </br>
+No, the upgrade to SQL Server 2019 doesn't remove any SQL Server 2012 components from your server. If you no longer need these components, follow the instructions in [Uninstall an existing instance of SQL Server](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
-**What happens if I do not upgrade?** </br>
-Until one of the components that are being retired are actually deprecated, you will not see any impact. Azure AD Connect will keep on working.
+**What happens if I don't upgrade?** </br>
+Until a component that's being retired is actually deprecated, your current version of Azure AD Connect will keep working and you won't see any impact.
-We expect TLS 1.0/1.1 to be deprecated in January 2022, and you need to make sure you are not using these protocols by that date as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2 though, and that does not require an update of Azure AD Connect to V2.0
+We expect TLS 1.0/1.1 to be deprecated in January 2022. You need to make sure that you're no longer using these protocols by that date, because your service might stop working unexpectedly. You can manually configure your server for TLS 1.2, though, because that doesn't require an upgrade to Azure AD Connect V2.0.
-In June 2022, ADAL will go out of support. When ADAL goes out of support authentication may stop working unexpectedly and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2.0 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
+In June 2022, ADAL is planned to go out of support. At that time, authentication might stop working unexpectedly, and the Azure AD Connect server will no longer work properly. We strongly recommend that you upgrade to Azure AD Connect V2.0 before June 2022. You can't upgrade to a supported authentication library with your current Azure AD Connect version.
-**After upgrading to 2.0 the ADSync PowerShell cmdlets do not work?** </br>
-This is a known issue. To resolve this, restart your PowerShell session after installing or upgrading to version 2.0 and then re-import the module. Use the following instructions to import the module.
+**After I upgraded to Azure AD Connect V2.0, the ADSync PowerShell cmdlets don't work. What can I do?** </br>
+This is a known issue. To resolve it, restart your PowerShell session after you've installed or upgraded to Azure AD Connect V2.0, and then reimport the module. To import the module, do the following:
- 1. Open Windows PowerShell with administrative privileges.
- 1. Type or copy and paste the following code:
+ 1. Open Windows PowerShell with administrative privileges.
+ 1. Run the following command:
```powershell
Import-Module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync"
```
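If the import succeeds, the ADSync cmdlets should resolve again; a quick optional check:

```powershell
# Lists a few cmdlets from the freshly imported ADSync module.
Get-Command -Module ADSync | Select-Object -First 5
```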
-## License requirements for using Azure AD Connect V2.0
+## License requirements for using Azure AD Connect V2
[!INCLUDE [active-directory-free-license.md](../../../includes/active-directory-free-license.md)]
This is a known issue. To resolve this, restart your PowerShell session after in
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
- [Express settings](how-to-connect-install-express.md)
- [Customized settings](how-to-connect-install-custom.md)

-This article describes the upgrade from older Windows Server versions to Windows Server 2019.
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
# Meeting identity requirements of Memorandum 22-09 with Azure Active Directory
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document wee refer to it as "The memo."
+This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo."
The release of Memorandum 22-09 is designed to support Zero trust initiatives within federal agencies; it also provides regulatory guidance in supporting Federal Cybersecurity and Data Privacy Laws. The Memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster.

Previously updated : 03/10/2022 Last updated : 03/11/2022
The CSI storage driver support on AKS allows you to natively use:
- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.

> [!IMPORTANT]
-> Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
+> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. CSI migration is also turned on starting from AKS 1.21, existing in-tree persistent volumes continue to function as they always have; however, behind the scenes Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
>
> Please remove manually installed open source Azure Disk and Azure File CSI drivers before upgrading to AKS 1.21.
>
> *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Limitations
-
-- This feature can only be set at cluster creation time.
-- The minimum Kubernetes minor version that supports CSI drivers is v1.17.
-- The default storage class will be the `managed-csi` storage class.
-
## Install CSI storage drivers on a new cluster with version < 1.21

Create a new cluster that can use CSI storage drivers for Azure disks and Azure Files by using the following CLI commands. Use the `--aks-custom-headers` flag to set the `EnableAzureDiskFileCSIDriver` feature.
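As a sketch, such a creation command might look like the following; the resource names are placeholders, and the custom header value mirrors the feature flag named above.

```powershell
# Hedged sketch: resource names are placeholders.
az aks create `
    --resource-group myResourceGroup `
    --name myAKSCluster `
    --aks-custom-headers EnableAzureDiskFileCSIDriver=true
```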
$ echo $(kubectl get CSINode <NODE NAME> -o jsonpath="{.spec.drivers[1].allocata
- [Set up Azure File CSI driver on AKS cluster](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-driver-on-aks.md)

## Migrating custom in-tree storage classes to CSI
-If you have created custom storage classes based on the in-tree storage drivers, these will need to be migrated when you have upgraded your cluster to 1.21.x.
-
-Whilst explicit migration to the CSI provider is not needed for your storage classes to still be valid, to be able to use CSI features (snapshotting etc.) you will need to carry out the migration.
-
-Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
+If you have created in-tree driver storage classes, they will continue to work after you upgrade your cluster to 1.21.x, because CSI migration is turned on. However, if you want to use CSI features (snapshotting and so on), you will need to carry out the migration.
+Migration of these storage classes will involve deleting the existing storage classes, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
### Migrating Storage Class provisioner
As an example for Azure disks:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
- name: managed-premium-retain
+ name: custom-managed-premium
provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+reclaimPolicy: Delete
parameters:
- storageaccounttype: Premium_LRS
- kind: Managed
+ storageAccountType: Premium_LRS
```

#### CSI storage class definition
parameters:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
- name: managed-premium-retain
+ name: custom-managed-premium
provisioner: disk.csi.azure.com
-reclaimPolicy: Retain
+reclaimPolicy: Delete
parameters:
- storageaccounttype: Premium_LRS
- kind: Managed
+ storageAccountType: Premium_LRS
```

The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the provisioner.
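In practice, the swap can be scripted as a delete followed by a re-create; the manifest file name below is hypothetical and should contain the CSI definition shown in the second example.

```powershell
# Hedged sketch: delete the in-tree storage class, then re-create it from a
# manifest (hypothetical file name) that uses the disk.csi.azure.com provisioner.
kubectl delete storageclass custom-managed-premium
kubectl apply -f custom-managed-premium-csi.yaml
```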
-### Migrating in-tree disk persistent volumes
+## Migrating in-tree persistent volumes
> [!IMPORTANT]
> If your in-tree Persistent Volume reclaimPolicy is set to Delete, you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
The CSI storage system supports the same features as the In-tree drivers, so the
> $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
> ```
-If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes
+### Migrating in-tree Azure Disk persistent volumes
+
+If you have in-tree Azure Disk persistent volumes, get the `diskURI` value from the in-tree persistent volume and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
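For example, a quick way to read that value with `kubectl` (the PV name `pv-azuredisk` is illustrative):

```bash
# Print the managed disk URI recorded on the in-tree Azure Disk persistent volume.
kubectl get pv pv-azuredisk -o jsonpath='{.spec.azureDisk.diskURI}'
```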
+
+### Migrating in-tree Azure File persistent volumes
+
+If you have in-tree Azure File persistent volumes, get the `secretName` and `shareName` values from the in-tree persistent volume and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
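Similarly, a sketch for reading those values (the PV name `pv-azurefile` is illustrative):

```bash
# Print the secret name and share name recorded on the in-tree Azure File persistent volume.
kubectl get pv pv-azurefile -o jsonpath='{.spec.azureFile.secretName}{"\n"}{.spec.azureFile.shareName}{"\n"}'
```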
## Next steps
If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and
<!-- LINKS - internal -->
[azure-disk-volume]: azure-disk-volume.md
[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
+[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
[azure-files-pvc]: azure-files-dynamic-pv.md
[premium-storage]: ../virtual-machines/disks-types.md
[az-disk-list]: /cli/azure/disk#az_disk_list
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
You can also test the function within the OpenFaaS UI.
## Next Steps
-You can continue to learn with the OpenFaaS workshop through a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and auto-scaling.
+You can continue to learn with the [OpenFaaS workshop](https://github.com/openfaas/workshop) through a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and auto-scaling.
<!-- LINKS - external -->
[install-mongo]: https://docs.mongodb.com/manual/installation/
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Finally, place the driver JARs in the Tomcat classpath and restart your App Serv
2. If you created a server-level data source, restart the App Service Linux application. Tomcat will reset `CATALINA_BASE` to `/home/tomcat` and use the updated configuration.
-### JBoss EAP
+### JBoss EAP Data Sources
There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts.
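As an illustration of that scripted approach, here's a minimal startup-script sketch using the JBoss CLI. It assumes a PostgreSQL driver; the JAR path, module name, data source name, and `POSTGRES_CONNECTION_URL` app setting are all hypothetical placeholders:

```bash
#!/usr/bin/env bash
# Hypothetical startup script: add the JDBC driver as a module, register it, then create the data source.
# POSTGRES_CONNECTION_URL is assumed to be defined as an App Service app setting (exposed as an environment variable).
$JBOSS_HOME/bin/jboss-cli.sh --connect <<EOF
module add --name=com.postgresql --resources=/home/site/libs/postgresql.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-class-name=org.postgresql.Driver)
data-source add --name=postgresDS --jndi-name=java:jboss/datasources/postgresDS --driver-name=postgresql --connection-url=${POSTGRES_CONNECTION_URL}
EOF
```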
If you choose to pin the minor version, you will need to periodically update the
::: zone pivot="platform-linux"
-## JBoss EAP App Service Plans
+## JBoss EAP
+
+### Clustering in JBoss EAP
+
+App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, it will restart, and JBoss EAP will automatically start up with a clustered configuration. The JBoss EAP instances will communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value.
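For example, a sketch of creating that setting with the Azure CLI (the app and resource group names are placeholders; any value disables clustering):

```azurecli-interactive
# Any value for WEBSITE_DISABLE_CLUSTERING turns clustering off; "true" here is illustrative.
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings WEBSITE_DISABLE_CLUSTERING=true
```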
+
+> [!NOTE]
+> If you are enabling your virtual network integration with an ARM template, you will need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property will be set for you automatically.
+
+When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start will obtain read/write permissions on the cluster membership file. Other instances will read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
+
+### JBoss EAP App Service Plans
<a id="jboss-eap-hardware-options"></a>
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 2/10/2022 Last updated : 3/14/2022
App Service can now automate migration of your App Service Environment v2 to an
At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:

-- West Central US
-- Canada Central
-- UK South
-- Germany West Central
-- East Asia
- Australia East
+- Australia Central
- Australia Southeast
+- Canada Central
+- Central India
+- East Asia
+- East US
+- East US 2
+- France Central
+- Germany West Central
+- Korea Central
+- Norway East
+- Switzerland North
+- UAE North
+- UK South
+- West Central US
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
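If you prefer the CLI, here's a sketch of reading the `kind` property directly (assuming the `az appservice ase` command group is available in your CLI version; names are placeholders):

```azurecli-interactive
# The kind property distinguishes App Service Environment versions.
az appservice ase show --name <ase-name> --resource-group <resource-group> --query kind
```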
app-service Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template.md
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 10/16/2020- Last updated : 03/10/2022+ zone_pivot_groups: app-service-platform-windows-linux adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
adobe-target-content: ./quickstart-arm-template-uiex
# Quickstart: Create App Service app using an ARM template
-Get started with [Azure App Service](overview.md) by deploying a app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart.
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
This template contains several parameters that are predefined for your convenien
| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
| sku | string | "F1" | Instance size (F1 = Free Tier) |
-| language | string | ".net" | Programming language stack (.net, php, node, html) |
+| language | string | ".net" | Programming language stack (.NET, php, node, html) |
| helloWorld | boolean | False | True = Deploy "Hello World" app |
| repoUrl | string | " " | External Git repo (optional) |

::: zone-end
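As a sketch of how these parameters are used, you can override them on the command line when deploying the template with the Azure CLI; the template URI below is a placeholder for this quickstart's template:

```azurecli-interactive
# Deploy the quickstart template, overriding two of the parameters described above.
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri <template-uri> \
    --parameters language=".net" helloWorld=true
```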
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-custom-container.md
Title: 'Quickstart: Run a custom container on App Service'
description: Get started with containers on Azure App Service by deploying your first custom container. Previously updated : 06/30/2021 Last updated : 03/11/2022 -+ zone_pivot_groups: app-service-containers-windows-linux

# Run a custom container in Azure

::: zone pivot="container-windows"
-[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. However, the preconfigured application stacks [lock down the operating system and prevent low-level access](operating-system-functionality.md). Custom Windows containers do not have these restrictions, and let developers fully customize the containers and give containerized applications full access to Windows functionality.
+[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. However, the pre-configured application stacks [lock down the operating system and prevent low-level access](operating-system-functionality.md). Custom Windows containers don't have these restrictions, and let developers fully customize the containers and give containerized applications full access to Windows functionality.
This quickstart shows how to deploy an ASP.NET app, in a Windows image, to [Azure Container Registry](../container-registry/container-registry-intro.md) from Visual Studio. You run the app in a custom container in Azure App Service.
Create an ASP.NET web app by following these steps:
1. In **Solution Explorer**, right-click the **myfirstazurewebapp** project and select **Publish**.
-1. In **Target**, select **Docker Container Registry**, and then click **Next**.
+1. In **Target**, select **Docker Container Registry**, and then select **Next**.
:::image type="content" source="./media/quickstart-custom-container/select-docker-container-registry-visual-studio-2022.png?text=Select Docker Container Registry" alt-text="Select Docker Container Registry":::
-1. In **Specific Target**, select **Azure Container Registry**, and then click **Next**.
+1. In **Specific Target**, select **Azure Container Registry**, and then select **Next**.
:::image type="content" source="./media/quickstart-custom-container/publish-to-azure-container-registry-visual-studio-2022.png?text=Publish to Azure Container Registry" alt-text="Publish from project overview page":::
Create an ASP.NET web app by following these steps:
:::image type="content" source="./media/quickstart-custom-container/create-new-azure-container-registry.png?text=Create new Azure Container Registry" alt-text="Create new Azure Container Registry":::
-1. In **Create new**, make sure the correct subscription is chosen. Under **Resource group**, select **New** and type *myResourceGroup* for the name, and click **OK**. Under **SKU**, select **Basic**. Under **Registry location**, select a location of the registry then select **Create**.
+1. In **Create new**, make sure the correct subscription is chosen. Under **Resource group**, select **New** and type *myResourceGroup* for the name, and select **OK**. Under **SKU**, select **Basic**. Under **Registry location**, select a location of the registry then select **Create**.
:::image type="content" source="./media/quickstart-custom-container/new-azure-container-registry-details.png?text=Azure Container Registry details" alt-text="Azure Container Registry details":::
Create an ASP.NET web app by following these steps:
![Configure your Web App for Containers](media/quickstart-custom-container/configure-web-app-container.png)
- If you have a custom image elsewhere for your web application, such as in [Azure Container Registry](../container-registry/index.yml) or in any other private repository, you can configure it here.
+ If you have a custom image elsewhere for your web application, such as in [Azure Container Registry](../container-registry/index.yml) or in any other private repository, you can configure it here. Select **Review + Create** to continue.
-1. Select **Review and Create** and then **Create** and wait for Azure to create the required resources.
+1. Verify all the details and then select **Create** and wait for Azure to create the required resources.
+![Create your Web App for Containers](media/quickstart-custom-container/web-app-container-create-start.png)
## Browse to the custom container
It may take some time for the Windows container to load. To see the progress, na
https://<app_name>.scm.azurewebsites.net/api/logstream
```
-The streamed logs looks like this:
+The streamed logs look like this:
```
2018-07-27T12:03:11 Welcome, you are now connected to log-streaming service.
Or, check out other resources:
::: zone-end

::: zone pivot="container-linux"
-App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET, PHP, Node.js and others. You can also use a custom Docker image to run your web app on an application stack that is not already defined in Azure. This quickstart shows you how to deploy an image from an [Azure Container Registry](../container-registry/index.yml) (ACR) to App Service.
+App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET, PHP, Node.js and others. You can also use a custom Docker image to run your web app on an application stack that isn't already defined in Azure. This quickstart shows you how to deploy an image from an [Azure Container Registry](../container-registry/index.yml) (ACR) to App Service.
## Prerequisites
Create a container registry by following the instructions in [Quickstart: Create
## Check prerequisites
-Verify that you have Docker installed and running. The following command will display the Docker version if it is running.
+Verify that you have Docker installed and running. The following command will display the Docker version if it's running.
```bash
docker --version
In this Dockerfile, the parent image is one of the built-in Java containers of A
## Deploy to container registry
-1. In the Activity Bar, click the **Docker** icon. In the **IMAGES** explorer, find the image you just built.
+1. In the Activity Bar, click the **Docker** icon. In the **IMAGES** explorer, find the image you built.
1. Expand the image, right-click on the tag you want, and click **Push**.
1. Make sure the image tag begins with `<acr-name>.azurecr.io` and press **Enter**.
1. When Visual Studio Code finishes pushing the image to your container registry, click **Refresh** at the top of the **REGISTRIES** explorer and verify that the image is pushed successfully.
In this Dockerfile, the parent image is one of the built-in Java containers of A
## Deploy to App Service
-1. In the **REGISTRIES** explorer, expand the image, right-click the tag, and click **Deploy image to Azure App Service**.
+1. In the **REGISTRIES** explorer, expand the image, right-click the tag, and select **Deploy image to Azure App Service**.
1. Follow the prompts to choose a subscription, a globally unique app name, a resource group, and an App Service plan. Choose **B1 Basic** for the pricing tier, and a region near you. After deployment, your app is available at `http://<app-name>.azurewebsites.net`.
An **App Service Plan** defines the physical resources that will be used to host
## Browse the website
-The **Output** panel shows the status of the deployment operations. When the operation completes, click **Open Site** in the pop-up notification to open the site in your browser.
+The **Output** panel shows the status of the deployment operations. When the operation completes, select **Open Site** in the pop-up notification to open the site in your browser.
> [!div class="nextstepaction"]
> [I ran into an issue](https://www.research.net/r/PWZWZ52?tutorial=quickstart-docker&step=deploy-app)
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
Title: 'Quickstart: Create a PHP web app'
description: Deploy your first PHP Hello World to Azure App Service in minutes. You deploy using Git, which is one of many ways to deploy to App Service. ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 Previously updated : 05/02/2021 Last updated : 03/10/2022 ms.devlang: php+ zone_pivot_groups: app-service-platform-windows-linux-

# Create a PHP web app in Azure App Service
To complete this quickstart:
## Download the sample locally
-1. In a terminal window, run the following commands. This will clone the sample application to your local machine, and navigate to the directory containing the sample code.
+1. In a terminal window, run the following commands. They clone the sample application to your local machine and navigate to the directory containing the sample code.
```bash
git clone https://github.com/Azure-Samples/php-docs-hello-world
To complete this quickstart:
## Create a web app
-1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
+1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
- In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
+ In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
```azurecli-interactive
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP|7.4' --deployment-local-git
To complete this quickstart:
http://<app-name>.azurewebsites.net
```
- Here is what your new web app should look like:
+ Here's what your new web app should look like:
![Empty web app page](media/quickstart-php/app-service-web-service-created.png)

<pre>
Counting objects: 2, done.
The PHP sample code is running in an Azure App Service web app.
![App Service page in Azure portal](media/quickstart-php/php-docs-hello-world-app-service-detail.png)
- The web app menu provides different options for configuring your app.
+ The web app menu provides different options for configuring your app.
[!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)]
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 03/08/2022 Last updated : 03/14/2022 recommendations: false-

<!-- markdownlint-disable MD025 -->

# Get started: Form Recognizer C# SDK v3.0 | Preview
In this quickstart, you'll use following features to analyze and extract data an
## Set up
-<!
+<!
### [Option 1: .NET Command-line interface (CLI)](#tab/cli)

In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `formrecognizer-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
This version of the client library defaults to the 2021-09-30-preview version of
:::image type="content" source="../media/quickstarts/select-nuget-package.png" alt-text="Screenshot: select-nuget-package.png":::
- 1. Select the Browse tab and type Azure.AI.FormRecognizer.
+ 1. Select the Browse tab and type Azure.AI.FormRecognizer.
:::image type="content" source="../media/quickstarts/azure-nuget-package.png" alt-text="Screenshot: select-form-recognizer-package.png":::
This version of the client library defaults to the 2021-09-30-preview version of
<!-- -->

## Build your application
-To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your apiKey and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
> [!NOTE]
>
To interact with the Form Recognizer service, you'll need to create an instance
1. Open the **Program.cs** file.
-1. Include the following using directives:
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!");`, and select one of the following code samples to copy and paste into your application's Program.cs file:
- ```csharp
- using Azure;
- using Azure.AI.FormRecognizer.DocumentAnalysis;
- ```
+ * [**General document model**](#general-document-model)
-1. Add the following code snippet to your Program.cs file. Set your `endpoint` and `apiKey` environment variables and create your `AzureKeyCredential` and `DocumentAnalysisClient` instance:
+ * [**Layout model**](#layout-model)
- ```csharp
- string endpoint = "<your-endpoint>";
- string apiKey = "<your-apiKey>";
- AzureKeyCredential credential = new AzureKeyCredential(apiKey);
- DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
- ```
-
-1. Delete the line, `Console.Writeline("Hello World!");` , and add one of the code sample scripts to the file:
-
- :::image type="content" source="../media/quickstarts/add-code-here.png" alt-text="Screenshot: add the sample code to the Main method.":::
-
-> [!TIP]
-> If you would like to try more than one code sample:
->
-> * Select one of the sample code blocks below to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * Comment out that sample code block but keep the set-up code and library directives.
-> * Select another sample code block to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * You can continue to comment out, copy/paste, and run the sample blocks of code.
-
-### Select one of the following code samples to copy and paste into your application Program.cs file:
-
-* [**General document model**](#general-document-model)
-
-* [**Layout model**](#layout-model)
-
-* [**Prebuilt model**](#prebuilt-model)
+ * [**Prebuilt model**](#prebuilt-model)
> [!IMPORTANT]
>
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, _see_ the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
+> * Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
## General document model
-Extract text, tables, structure, key-value pairs, and named entities from documents.
+Analyze and extract text, tables, structure, key-value pairs, and named entities.
> [!div class="checklist"]
>
> * For this example, you'll need a **form document file from a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart.
> * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method. The returned value is an `AnalyzeResult` object containing data about the submitted document.
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script.
-> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
+> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-#### Add the following code to the Program.cs file:
+### Add the following code to the Program.cs file:
```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
-// sample form document
+
+//sample form document
Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");

AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-document", fileUri);
for (int i = 0; i < result.Tables.Count; i++)
```
+### General document model output
+
+Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-general-document-output.md).
++

## Layout model

Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
Extract text, selection marks, text styles, table structures, and bounding regio
#### Add the following code to the Program.cs file:

```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+//sample document
Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");

AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-layout", fileUri);
for (int i = 0; i < result.Tables.Count; i++)
```
+### Layout model output
+
+Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-layout-output.md).
++

## Prebuilt model
-In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
+Analyze and extract common fields from specific document types using a prebuilt model. In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> [!TIP]
> You aren't limited to invoices; there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-#### Try the prebuilt invoice model
> [!div class="checklist"]
>
> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
#### Add the following code to your Program.cs file:

```csharp
-// sample invoice document
++
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+
+//sample invoice document
Uri invoiceUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf");
for (int i = 0; i < result.Documents.Count; i++)
```
+### Prebuilt model output
+
+Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-prebuilt-invoice-output.md).
++

## Run your application

<!-- ### [.NET Command-line interface (CLI)](#tab/cli)
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
OE standardizes specific requirements for verification of an enclave evidence. T
Client applications can be designed to take advantage of TPM attestation by delegating security-sensitive tasks to only take place after a platform has been validated to be secure. Such applications can then make use of Azure Attestation to routinely establish trust in the platform and its ability to access sensitive data.
+### Azure Confidential VM attestation
+
+Azure [Confidential VM](/azure/confidential-computing/confidential-vm-overview) (CVM) is based on [AMD processors with SEV-SNP technology](/azure/confidential-computing/virtual-machine-solutions-amd) and aims to improve VM security posture by removing trust in the host, hypervisor, and Cloud Service Provider (CSP). To achieve this, CVM offers a VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, an SNP report containing the guest VM firmware measurements is sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](/azure/key-vault/managed-hsm/overview) or [Azure Key Vault](/azure/key-vault/general/basic-concepts). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk, and start the CVM. The attestation and key release process is performed automatically on each CVM boot, ensuring that the CVM boots up only upon successful attestation of the hardware.
+
## Azure Attestation can run in a TEE

Azure Attestation is critical to Confidential Computing scenarios, as it performs the following actions:
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-virtual-machines.md
In the Machine selection pane in the portal, you will notice the **Eligibility**
- User does not have permissions to the log analytics workspace's subscription. Check out the [required permissions](#required-rbac-permissions)
- The Automanage resource provider is not registered on the subscription. Check out [how to register a Resource Provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1) with the Automanage resource provider: *Microsoft.Automanage* (see the sketch after this list)
- Machine does not have necessary VM agents installed which the Automanage service requires. Check out the [Windows agent installation](../virtual-machines/extensions/agent-windows.md) and the [Linux agent installation](../virtual-machines/extensions/agent-linux.md)
-- Arc machine is not connected. Learn more about the [Arc agent status](../azure-arc/servers/overview.md#agent-status) and [how to connect](../azure-arc/servers/agent-overview.md#connected-machine-agent-technical-overview)
+- Arc machine is not connected. Learn more about the [Arc agent status](../azure-arc/servers/overview.md#agent-status) and [how to connect](../azure-arc/servers/deployment-options.md#agent-installation-details)
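For the resource provider item in the list above, registration is a single CLI call; a sketch (set the subscription first if you manage several):

```azurecli-interactive
# Register the Automanage resource provider on the current subscription.
az provider register --namespace 'Microsoft.Automanage'
```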
Once you have selected your eligible machines, click **Enable**, and you're done.
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/overview.md
Microsoft Azure services are available globally to drive your cloud operations a
Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](az-overview.md).
-Azure services are built for resiliency including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azuredatacentermap.azurewebsites.net).
+Azure services are built for resiliency including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azure.microsoft.com/global-infrastructure/data-residency/).
If you need to understand dependencies between Azure services to help better architect your applications and services, you can request the **Azure service dependency documentation** by contacting your Microsoft sales or customer representative. This document lists the dependencies for Azure services, including dependencies on any common major internal services such as control plane services. To obtain this documentation, you must be a Microsoft customer and have the appropriate non-disclosure agreement (NDA) with Microsoft.
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
def sign_request(host,
secret): # Access Key Value
    verb = method.upper()
- utc_now = str(datetime.utcnow().strftime("%b, %d %Y %H:%M:%S ")) + "GMT"
+ utc_now = str(datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S ")) + "GMT"
if six.PY2:
    content_digest = hashlib.sha256(bytes(body)).digest()
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/03/2022 Last updated : 03/14/2022

# Overview of Azure Connected Machine agent
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides a detailed overview of the agent, system and network requirements, and the different deployment methods.
-
->[!NOTE]
-> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) does not replace the Connected Machine agent. The Azure Monitor agent will replace the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. Review the Azure Monitor documentation about the new agent for more details.
+The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
## Agent component details
-The Azure Connected Machine agent package contains several logical components, which are bundled together.
+The Azure Connected Machine agent package contains several logical components, which are bundled together:
* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
The Azure Connected Machine agent package contains several logical components, w
* Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
* Assignments are deleted after 14 days, and are not reassigned to the machine after the 14-day period.
-* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Extensions are downloaded from Azure and copied to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and for Linux to `/opt/GC_Ext/downloads`. On Windows, the extension is installed to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension is installed to `/var/lib/waagent/<extension>`.
+* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Extensions are downloaded from Azure and copied to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension is installed to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension is installed to `/var/lib/waagent/<extension>`.
+
+>[!NOTE]
+> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
## Instance metadata
-Metadata information about the connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
+Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
* Operating system name, type, and version
* Computer name
The following metadata information is requested by the agent from Azure:
* Guest configuration policy assignments
* Extension requests - install, update, and delete.
-## Download agents
-
-You can download the Azure Connected Machine agent package for Windows and Linux from the locations listed below.
-
-* [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
-
-* Linux agent package is distributed from Microsoft's [package repository](https://packages.microsoft.com/) using the preferred package format for the distribution (.RPM or .DEB).
-
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. For more information, see [here](manage-agent.md).
-
-## Prerequisites
-
-### Supported environments
-
-Azure Arc-enabled servers supports the installation of the Connected Machine agent on any physical server and virtual machine hosted *outside* of Azure. This includes support for virtual machines running on platforms like:
-
-* VMware
-* Azure Stack HCI
-* Other cloud environments
-
-Azure Arc-enabled servers *does not* support installing the agent on virtual machines running in Azure, or virtual machines running on Azure Stack Hub or Azure Stack Edge as they are already modeled as Azure VMs.
-
-### Supported operating systems
-
-The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
-
-* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
- * Both Desktop and Server Core experiences are supported
- * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
-* Azure Stack HCI
-* Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
-* CentOS Linux 7 and 8 (x64)
-* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
-* Red Hat Enterprise Linux (RHEL) 7 and 8 (x64)
-* Amazon Linux 2 (x64)
-* Oracle Linux 7 and 8 (x64)
-
-> [!WARNING]
-> The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
-
-> [!NOTE]
-> While Azure Arc-enabled servers supports Amazon Linux, the following features are not support by this distribution:
->
-> * The Dependency agent used by Azure Monitor VM insights
-> * Azure Automation Update Management
-
-### Software requirements
-
-* NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
-
-### Required permissions
-
-* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
-
-* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role in the resource group.
-
-* To select a resource group from the drop-down list when using the **Generate script** method, at a minimum you are a member of the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group.
-
-### Azure subscription and service limits
-
-Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
-
-Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
-
-### Register Azure resource providers
-
-Azure Arc-enabled servers depend on the following Azure resource providers in your subscription in order to use this service:
-
-* **Microsoft.HybridCompute**
-* **Microsoft.GuestConfiguration**
-* **Microsoft.HybridConnectivity**
-
-If these resource providers are not already registered, you can register them using the following commands:
-
-Azure PowerShell:
-
-```azurepowershell-interactive
-Login-AzAccount
-Set-AzContext -SubscriptionId [subscription you want to onboard]
-Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
-Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
-Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
-```
-
-Azure CLI:
-
-```azurecli-interactive
-az account set --subscription "{Your Subscription Name}"
-az provider register --namespace 'Microsoft.HybridCompute'
-az provider register --namespace 'Microsoft.GuestConfiguration'
-az provider register --namespace 'Microsoft.HybridConnectivity'
-```
-
-You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
-
-### Transport Layer Security 1.2 protocol
-
-To ensure the security of data in transit to Azure, we strongly encourage you to configure machine to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**.
-
-|Platform/Language | Support | More Information |
-| | | |
-|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows Server 2012 R2 and higher | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings).|
-
-## Networking configuration
-
-The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
-
-To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) (preview).
-
-> [!NOTE]
-> Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
-
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked. When you only allow the IP ranges or domain names required for the agent to communicate with the service, you need to allow access to the following Service Tags and URLs.
-
-Service Tags:
-
-* AzureActiveDirectory
-* AzureTrafficManager
-* AzureResourceManager
-* AzureArcInfrastructure
-* Storage
-
-URLs:
+## Deployment options and requirements
-| Agent resource | Description | When required| Endpoint used with private link |
-|||--||
-|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
-|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
-|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
-|`login.windows.net`|Azure Active Directory|Always| Public |
-|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
-|`pas.windows.net`|Azure Active Directory|Always| Public |
-|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
-|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
-|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
-|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
-|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
+To deploy the agent and connect a machine, certain [prerequisites](prerequisites.md) must be met. There are also [networking requirements](network-requirements.md) to be aware of.
-For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs, allow them as you would other Internet traffic.
-
-For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
-
-## Installation and configuration
-
-Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods, depending on your requirements and the tools you prefer to use. The following table highlights each method so that you can determine which works best for your deployment.
-
-| Method | Description |
-|--|-|
-| Interactively | Manually install the agent on a single or small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
-| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) |
-| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) |
-| Interactively or at scale | [Connect machines using Windows PowerShell Desired State Configuration (DSC)](onboard-dsc.md) |
-| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.|
-| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md)
-| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md)
-| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
-
-> [!IMPORTANT]
-> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
-
-## Connected Machine agent technical overview
-
-### Windows agent installation details
-
-The Connected Machine agent for Windows can be installed by using one of the following three methods:
-
-* Running the file `AzureConnectedMachineAgent.msi`.
-* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
-* From a PowerShell session using a scripted method.
-
-Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |%ProgramFiles%\AzureConnectedMachineAgent |azcmagent CLI and instance metadata service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
- |%ProgramData%\AzureConnectedMachineAgent |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |%ProgramData%\GuestConfig |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following Windows services are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
- |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
-
-* The following virtual service account is created during agent installation.
-
- | Virtual Account | Description |
- |||
- | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
-
- > [!TIP]
- > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
-
-* The following local security group is created during agent installation.
-
- | Security group name | Description |
- ||-|
- | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
-
-* The following environmental variables are created during agent installation.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |%ProgramData%\AzureConnectedMachineAgent\Log\himds.log |Records details of the heartbeat and identity agent component.|
- |%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log |Contains the output of the azcmagent tool commands.|
- |%ProgramData%\GuestConfig\arc_policy_logs\ |Records details about the guest configuration (policy) agent component.|
- |%ProgramData%\GuestConfig\ext_mgr_logs|Records details about the Extension agent component.|
- |%ProgramData%\GuestConfig\extension_logs\\\<Extension>|Records details from the installed extension.|
-
-* The local security group **Hybrid agent extension applications** is created.
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * %ProgramData%\AzureConnectedMachineAgent\Log
- * %ProgramData%\AzureConnectedMachineAgent and subdirectories
- * %ProgramData%\GuestConfig
-
-### Linux agent installation details
-
-The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
-
-Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |/opt/azcmagent/ |azcmagent CLI and instance metadata service executables.|
- |/opt/GC_Ext/ | Extension service executables.|
- |/opt/GC_Service/ |Guest configuration (policy) service executables.|
- |/var/opt/azcmagent/ |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |/var/lib/GuestConfig/ |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following daemons are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
- |extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.|
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |/var/opt/azcmagent/log/himds.log |Records details of the heartbeat and identity agent component.|
- |/var/opt/azcmagent/log/azcmagent.log |Contains the output of the azcmagent tool commands.|
- |/var/lib/GuestConfig/arc_policy_logs |Records details about the guest configuration (policy) agent component.|
- |/var/lib/GuestConfig/ext_mgr_logs |Records details about the extension agent component.|
- |/var/lib/GuestConfig/extension_logs|Records details from extension install/update/uninstall operations.|
-
-* The following environmental variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * /var/opt/azcmagent
- * /var/lib/GuestConfig
-
-### Agent resource governance
-
-Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
-
-* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent is limited to use up to 5% of the CPU to install and manage extensions.
-
- * Once installed, each extension is limited to use up to 5% of the CPU while running. For example, if you have 2 extensions installed, they can use a combined total of 10% of the CPU.
- * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+We provide several options for deploying the agent. For more information, see [Plan for deployment](plan-at-scale-deployment.md) and [Deployment options](deployment-options.md).
## Next steps
-* To begin evaluating Azure Arc-enabled servers, follow the article [Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md).
-
+* To begin evaluating Azure Arc-enabled servers, see [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md).
* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
-* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Local configuration of agent settings now available using the [azcmagent config command](manage-agent.md#config).
- Proxy server settings can be [configured using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.
-- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](agent-overview.md#networking-configuration)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
+- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.

## Version 1.12 - October 2021
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
+
+ Title: Azure Connected Machine agent deployment options
+description: Learn about the different options to onboard machines to Azure Arc-enabled servers.
+Last updated: 03/14/2022
+# Azure Connected Machine agent deployment options
+
+You can connect machines in your hybrid environment directly to Azure using different methods, depending on your requirements and the tools you prefer to use.
+
+## Onboarding methods
+
+ The following table highlights each method so that you can determine which works best for your deployment. For detailed information, follow the links to view the steps for each topic.
+
+| Method | Description |
+|--|-|
+| Interactively | Manually install the agent on a single machine or a small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
+| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) |
+| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) |
+| Interactively or at scale | [Connect machines using Windows PowerShell Desired State Configuration (DSC)](onboard-dsc.md) |
+| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.|
+| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md) |
+| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) |
+| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
+
+> [!IMPORTANT]
+> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to do so, the installation detects this and rolls back.
+
+Be sure to review the basic [prerequisites](prerequisites.md) and [network configuration requirements](network-requirements.md) before deploying the agent, as well as any specific requirements listed in the steps for the onboarding method you choose.
+
+## Agent installation details
+
+Review the following details to understand more about how the Connected Machine agent is installed on Windows or Linux machines.
+
+### Windows agent installation details
+
+You can download the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
+
+The Connected Machine agent for Windows can be installed by using one of the following three methods:
+
+* Running the file `AzureConnectedMachineAgent.msi` directly (for example, by double-clicking it).
+* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
+* From a PowerShell session using a scripted method.
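+
+For example, a scripted install from an elevated PowerShell session might look like the following sketch. The silent-install and logging switches are standard `msiexec` options; the log path is illustrative.
+
+```powershell
+# Install the agent silently and write a verbose setup log for troubleshooting.
+# Assumes AzureConnectedMachineAgent.msi was already downloaded to the current folder.
+msiexec /i AzureConnectedMachineAgent.msi /qn /l*v "$env:TEMP\azcmagent-setup.log"
+```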
+
+Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
+
+After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ |Folder |Description |
+ |-||
+ |%ProgramFiles%\AzureConnectedMachineAgent |azcmagent CLI and instance metadata service executables.|
+ |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
+ |%ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
+ |%ProgramData%\AzureConnectedMachineAgent |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ |%ProgramData%\GuestConfig |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* The following Windows services are created on the target machine during installation of the agent.
+
+ |Service name |Display name |Process name |Description |
+ |-|-|-||
+ |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
+ |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
+
+* The following virtual service account is created during agent installation.
+
+ | Virtual Account | Description |
+ |||
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+* The following local security group is created during agent installation.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
+
+* The following environmental variables are created during agent installation; see the sketch after this list for how these endpoints are typically used.
+
+ |Name |Default value |Description |
+ |--|--|--|
+ |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> |Local endpoint that applications on the machine call to request Azure AD tokens for the system-assigned managed identity.|
+ |IMDS_ENDPOINT |<`http://localhost:40342`> |Base endpoint of the Hybrid Instance Metadata Service.|
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ |Log |Description |
+ |-||
+ |%ProgramData%\AzureConnectedMachineAgent\Log\himds.log |Records details of the heartbeat and identity agent component.|
+ |%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log |Contains the output of the azcmagent tool commands.|
+ |%ProgramData%\GuestConfig\arc_policy_logs\ |Records details about the guest configuration (policy) agent component.|
+ |%ProgramData%\GuestConfig\ext_mgr_logs|Records details about the Extension agent component.|
+ |%ProgramData%\GuestConfig\extension_logs\\\<Extension>|Records details from the installed extension.|
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * %ProgramData%\AzureConnectedMachineAgent\Log
+ * %ProgramData%\AzureConnectedMachineAgent and subdirectories
+ * %ProgramData%\GuestConfig
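+
+As an illustration of how the environment variables and logs above are typically used, the following sketch requests a managed identity token from the local endpoint and then tails the *himds* log. The challenge-token exchange shown (the initial call returns a 401 response naming a key file that only privileged accounts and members of the **Hybrid agent extension applications** group can read) reflects the documented Azure Arc managed identity flow; treat the API version and paths as illustrative.
+
+```powershell
+# Request a token for Azure Resource Manager from the local hybrid IMDS endpoint.
+$uri = "${env:IDENTITY_ENDPOINT}?api-version=2020-06-01&resource=https://management.azure.com"
+try {
+    Invoke-WebRequest -Uri $uri -Headers @{ Metadata = 'true' } -UseBasicParsing
+} catch {
+    # The first call fails with 401; the WWW-Authenticate header names a local key file.
+    $wwwAuth = $_.Exception.Response.Headers['WWW-Authenticate']
+}
+$keyPath = ($wwwAuth -split 'realm=')[1]
+$secret  = Get-Content -Raw -Path $keyPath
+$token   = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = 'true'; Authorization = "Basic $secret" }
+$token.access_token
+
+# Follow the heartbeat/identity agent log while testing.
+Get-Content "$env:ProgramData\AzureConnectedMachineAgent\Log\himds.log" -Tail 20 -Wait
+```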
+
+### Linux agent installation details
+
+The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
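+
+A minimal sketch of downloading and running the script bundle (no parameters shown; see the linked script for supported options):
+
+```bash
+# Download the installation script and run it to install and configure the agent.
+wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh
+bash ~/install_linux_azcmagent.sh
+```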
+
+Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
+
+After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ |Folder |Description |
+ |-||
+ |/opt/azcmagent/ |azcmagent CLI and instance metadata service executables.|
+ |/opt/GC_Ext/ | Extension service executables.|
+ |/opt/GC_Service/ |Guest configuration (policy) service executables.|
+ |/var/opt/azcmagent/ |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ |/var/lib/GuestConfig/ |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* The following daemons are created on the target machine during installation of the agent; see the verification sketch after this list.
+
+ |Service name |Display name |Process name |Description |
+ |-|-|-||
+ |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
+ |extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.|
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ |Log |Description |
+ |-||
+ |/var/opt/azcmagent/log/himds.log |Records details of the heartbeat and identity agent component.|
+ |/var/opt/azcmagent/log/azcmagent.log |Contains the output of the azcmagent tool commands.|
+ |/var/lib/GuestConfig/arc_policy_logs |Records details about the guest configuration (policy) agent component.|
+ |/var/lib/GuestConfig/ext_mgr_logs |Records details about the extension agent component.|
+ |/var/lib/GuestConfig/extension_logs|Records details from extension install/update/uninstall operations.|
+
+* The following environmental variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
+
+ |Name |Default value |Description |
+ |--|--|--|
+ |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> |Local endpoint that applications on the machine call to request Azure AD tokens for the system-assigned managed identity.|
+ |IMDS_ENDPOINT |<`http://localhost:40342`> |Base endpoint of the Hybrid Instance Metadata Service.|
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * /var/opt/azcmagent
+ * /var/lib/GuestConfig
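+
+To verify the daemons and inspect the logs listed above, standard systemd and shell tooling is enough, as in this sketch:
+
+```bash
+# Confirm the agent daemons are running.
+systemctl status himdsd.service gcad.service extd.service
+
+# Follow the heartbeat/identity agent log while troubleshooting.
+tail -f /var/opt/azcmagent/log/himds.log
+```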
+
+## Agent resource governance
+
+The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent applies resource governance under the following conditions:
+
+* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
+* The Extension Service agent is limited to use up to 5% of the CPU to install and manage extensions.
+
+ * Once installed, each extension is limited to use up to 5% of the CPU while running. For example, if you have two extensions installed, they can use a combined total of 10% of the CPU.
+ * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
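+
+On systemd-based distributions, you can check whether a CPU quota is in effect for the agent daemons. The assumption that these limits surface as systemd CPU quotas is an inference, not documented behavior, so treat this sketch as a diagnostic starting point:
+
+```bash
+# Show any per-service CPU quota applied to the policy and extension daemons.
+systemctl show gcad.service extd.service --property=Id,CPUQuotaPerSecUSec
+```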
+
+## Next steps
+
+* Learn about the Azure Connected Machine agent [prerequisites](prerequisites.md) and [network requirements](network-requirements.md).
+* Review the [Planning and deployment guide for Azure Arc-enabled servers](plan-at-scale-deployment.md).
+* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md).
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
* Deploying the Azure Arc-enabled servers Hybrid Connected Machine agent requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group.
-* Before you get started, be sure to review the agent [prerequisites](../agent-overview.md#prerequisites) and verify the following:
+* Before you get started, be sure to review the agent [prerequisites](../prerequisites.md) and verify the following:
- * Your target machine is running a supported [operating system](../agent-overview.md#supported-operating-systems).
+ * Your target machine is running a supported [operating system](../prerequisites.md#supported-operating-systems).
- * Your account is granted assignment to the [required Azure roles](../agent-overview.md#required-permissions).
+ * Your account is granted assignment to the [required Azure roles](../prerequisites.md#required-permissions).
- * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../agent-overview.md#networking-configuration) are not blocked.
+ * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../network-requirements.md#urls) are not blocked.
* Azure Arc-enabled servers supports only the regions specified [here](../overview.md#supported-regions).
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md
New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateF
To use the Custom Script extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the Custom Script extension, see [Custom Script extension for Windows](../../virtual-machines/extensions/custom-script-windows.md) or [Custom Script extension for Linux](../../virtual-machines/extensions/custom-script-linux.md). There are a couple of differing characteristics that you should understand when using this extension with hybrid machines:
-* The list of supported operating systems with the Azure VM Custom Script extension is not applicable to Azure Arc-enabled servers. The list of supported OSs for Azure Arc-enabled servers can be found [here](agent-overview.md#supported-operating-systems).
+* The list of supported operating systems with the Azure VM Custom Script extension is not applicable to Azure Arc-enabled servers. The list of supported OSs for Azure Arc-enabled servers can be found [here](prerequisites.md#supported-operating-systems).
* Configuration details regarding Azure Virtual Machine Scale Sets or Classic VMs are not applicable.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
This feature depends on the following Azure resource providers in your subscript
- **Microsoft.HybridCompute** - **Microsoft.GuestConfiguration**
-If they aren't already registered, follow the steps under [Register Azure resource providers](agent-overview.md#register-azure-resource-providers).
+If they aren't already registered, follow the steps under [Register Azure resource providers](prerequisites.md#azure-resource-providers).
Be sure to review the documentation for each VM extension referenced in the previous table to understand if it has any network or system requirements. This can help you avoid experiencing any connectivity issues with an Azure service or feature that relies on that VM extension.
Before you deploy the extension, you need to complete the following:
### Connected Machine agent
-Verify your machine matches the [supported versions](agent-overview.md#supported-operating-systems) of Windows and Linux operating system for the Azure Connected Machine agent.
+Verify your machine matches the [supported versions](prerequisites.md#supported-operating-systems) of Windows and Linux operating system for the Azure Connected Machine agent.
The minimum version of the Connected Machine agent that is supported with this feature on Windows and Linux is the 1.0 release.
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
+
+ Title: Connected Machine agent network requirements
+description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers.
+Last updated: 03/14/2022
+# Connected Machine agent network requirements
+
+This topic describes the networking requirements for using the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers.
+
+## Networking configuration
+
+The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
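+
+For example, pointing the agent at a proxy with the agent-specific setting (assuming an agent version that supports `azcmagent config`; the proxy address is a placeholder) looks like this sketch:
+
+```bash
+# Route Connected Machine agent traffic through an explicit proxy server.
+azcmagent config set proxy.url "http://proxy.example.com:8080"
+```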
+
+To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) (preview).
+
+> [!NOTE]
+> Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
+
+If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs and Service Tags listed below are not blocked.
+
+## Service tags
+
+Be sure to allow access to the following Service Tags:
+
+* AzureActiveDirectory
+* AzureTrafficManager
+* AzureResourceManager
+* AzureArcInfrastructure
+* Storage
+
+For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure service and the IP ranges it uses. The information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, use the **AzureCloud** Service Tag to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
+
+For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
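+
+If you use the Az PowerShell module, the `Get-AzNetworkServiceTag` cmdlet offers a programmatic alternative to the downloadable JSON; this sketch lists the current ranges behind one tag (the region name is a placeholder):
+
+```azurepowershell
+# List the current IP ranges behind the AzureArcInfrastructure service tag.
+$tags = Get-AzNetworkServiceTag -Location eastus
+($tags.Values | Where-Object { $_.Name -eq 'AzureArcInfrastructure' }).Properties.AddressPrefixes
+```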
+
+## URLs
+
+The table below lists the URLs that must be available in order to install and use the Connected Machine agent.
+
+| Agent resource | Description | When required| Endpoint used with private link |
+|||--||
+|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
+|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
+|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
+|`login.windows.net`|Azure Active Directory|Always| Public |
+|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
+|`pas.windows.net`|Azure Active Directory|Always| Public |
+|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
+|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
+|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
+|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
+|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
+|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
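+
+From a Windows machine, a quick spot-check that a few of these endpoints are reachable over TCP port 443 might look like this sketch (the hosts shown are a subset of the table above; extend the list as needed):
+
+```powershell
+# Test outbound connectivity to a sample of required endpoints.
+$endpoints = 'aka.ms', 'login.windows.net', 'management.azure.com', 'guestnotificationservice.azure.com'
+foreach ($e in $endpoints) {
+    $ok = Test-NetConnection -ComputerName $e -Port 443 -InformationLevel Quiet
+    '{0}: {1}' -f $e, $ok
+}
+```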
+
+## Transport Layer Security 1.2 protocol
+
+To ensure the security of data in transit to Azure, we strongly encourage you to configure your machines to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable; while they still work to allow backwards compatibility, they are **not recommended**.
+
+|Platform/Language | Support | More Information |
+| | | |
+|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
+| Windows Server 2012 R2 and higher | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings).|
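+
+On Windows PowerShell 5.1, you can confirm which protocols .NET-based clients in the current session will negotiate, and opt the session into TLS 1.2; machine-wide Schannel settings are managed through the registry settings linked above:
+
+```powershell
+# Show the protocols the current .NET session will negotiate.
+[Net.ServicePointManager]::SecurityProtocol
+
+# Opt the current session into TLS 1.2.
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+```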
+
+## Next steps
+
+* Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md).
+* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
+* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Onboard Configuration Manager Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md
Microsoft Endpoint Configuration Manager facilitates comprehensive management of
You can use a custom task sequence, that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Configuration Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-powershell.md
Microsoft Endpoint Configuration Manager facilitates comprehensive management of
You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
You can enable Azure Arc-enabled servers for one or a small number of Windows or
This method requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-powershell.md
For servers enabled with Azure Arc, you can take manual steps to enable them for
This method requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group. You can complete this process interactively or remotely on a Windows server by using [PowerShell remoting](/powershell/scripting/learn/ps101/08-powershell-remoting).
-Before you get started, review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
To connect the machines to Azure Arc-enabled servers, you can use an Azure Activ
The installation methods to install and configure the Connected Machine agent requires that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md
You can enable Azure Arc-enabled servers for one or more of your Windows or Linux virtual machines or physical servers hosted on-premises or other cloud environment that are managed with Azure Automation Update Management. This onboarding process automates the download and installation of the [Connected Machine agent](agent-overview.md). To connect the machines to Azure Arc-enabled servers, an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md) is used instead of your privileged identity to [interactively connect](onboard-portal.md) the machine. This service principal is created automatically as part of the onboarding process for these machines.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Windows Admin Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md
You can enable Azure Arc-enabled servers for one or more Windows machines in you
## Prerequisites
-* Azure Arc-enabled servers - Review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription, your Azure account, and resources meet the requirements.
+* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements.
* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration ](/windows-server/manage/windows-admin-center/azure/azure-integration).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
For example, if the machine is registered with Azure Arc in the East US region,
## Supported environments
-Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details of which hybrid cloud environments hosting VMs are supported, see [Connected Machine agent prerequisites](agent-overview.md#supported-environments).
+Azure Arc-enabled servers supports the management of physical servers and virtual machines hosted *outside* of Azure. For specific details of which hybrid cloud environments hosting VMs are supported, see [Connected Machine agent prerequisites](prerequisites.md#supported-environments).
> [!NOTE] > Azure Arc-enabled servers is not designed or supported to enable management of virtual machines running in Azure.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
- Title: How to plan and deploy Azure Arc-enabled servers
+ Title: Plan and deploy Azure Arc-enabled servers
description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure.
-Last updated : 02/22/2022
+Last updated : 03/14/2022

# Plan and deploy Azure Arc-enabled servers
To learn more about our at-scale deployment recommendations, you can also refer
## Prerequisites
-* Your machines run a [supported operating system](agent-overview.md#supported-operating-systems) for the Connected Machine agent.
-* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server.
-* To install and configure the Azure Connected Machine agent, an account with elevated (that is, an administrator or as root) privileges on the machines.
-* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** role.
-* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role.
+Consider the following basic requirements when planning your deployment:
+
+* Your machines must run a [supported operating system](prerequisites.md#supported-operating-systems) for the Connected Machine agent.
+* Your machines must have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server.
+* To install and configure the Azure Connected Machine agent, you must have an account with elevated privileges (that is, administrator or root) on the machines.
+* To onboard machines, you must have the **Azure Connected Machine Onboarding** Azure built-in role.
+* To read, modify, and delete a machine, you must have the **Azure Connected Machine Resource Administrator** Azure built-in role.
+
+For more details, see the [prerequisites](prerequisites.md) and [network requirements](network-requirements.md) for installing the Connected Machine agent.
## Pilot
-Before deploying to all production machines, start by evaluating this deployment process before adopting it broadly in your environment. For a pilot, identify a representative sampling of machines that aren't critical to your companies ability to conduct business. You'll want to be sure to allow enough time to run the pilot and assess its impact: we recommend a minimum of 30 days.
+Before deploying to all production machines, start by evaluating the deployment process before adopting it broadly in your environment. For a pilot, identify a representative sampling of machines that aren't critical to your company's ability to conduct business. You'll want to be sure to allow enough time to run the pilot and assess its impact: we recommend a minimum of 30 days.
Establish a formal plan describing the scope and details of the pilot. The following is a sample of what a plan should include to help get you started.
Establish a formal plan describing the scope and details of the pilot. The follo
## Phase 1: Build a foundation
-In this phase, system engineers or administrators enable the core features in their organizations Azure subscription to start the foundation before enabling your machines for management by Azure Arc-enabled servers and other Azure services.
+In this phase, system engineers or administrators enable the core features in their organization's Azure subscription to start the foundation before enabling machines for management by Azure Arc-enabled servers and other Azure services.
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc-enabled servers and centralize management and monitoring of these resources. | One hour |
| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Azure Arc-enabled servers and simplify making management decisions. | One day |
In this phase, system engineers or administrators enable the core features in th
| Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to control who has access to manage Azure Arc-enabled servers and ability to view their data from other Azure services and solutions. | One day |
| Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> &#124; where TimeGenerated > ago(30d) <br> &#124; where ResourceType == "machines" and (ComputerEnvironment == "Non-Azure") <br> &#124; summarize by Computer, ResourceProvider, ResourceType, ComputerEnvironment | One hour |
-<sup>1</sup> An important consideration as part of evaluating your Log Analytics workspace design, is integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc.
+<sup>1</sup> When evaluating your Log Analytics workspace design, consider integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc.
## Phase 2: Deploy Azure Arc-enabled servers
-Next, we add to the foundation laid in phase 1 by preparing for and deploying the Azure Connected Machine agent.
+Next, we add to the foundation laid in Phase 1 by preparing for and [deploying the Azure Connected Machine agent](deployment-options.md).
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> | One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour |
Next, we add to the foundation laid in phase 1 by preparing for and deploying th
## Phase 3: Manage and operate
-Phase 3 sees administrators or system engineers enable automation of manual tasks to manage and operate the Connected Machine agent and the machine during their lifecycle.
+Phase 3 is when administrators or system engineers can enable automation of manual tasks to manage and operate the Connected Machine agent and the machines during their lifecycle.
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
|Create a Resource Health alert |If a server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it is offline, the network connection has been blocked, or the agent is not running. Develop a plan for how you'll respond and investigate these incidents and use [Resource Health alerts](../../service-health/resource-health-alert-monitor-guide.md) to get notified when they start.<br><br> Specify the following when configuring the alert:<br> **Resource type** = **Azure Arc-enabled servers**<br> **Current resource status** = **Unavailable**<br> **Previous resource status** = **Available** | One hour |
|Create an Azure Advisor alert | For the best experience and most recent security and bug fixes, we recommend keeping the Azure Connected Machine agent up to date. Out-of-date agents will be identified with an [Azure Advisor alert](../../advisor/advisor-alerts-portal.md).<br><br> Specify the following when configuring the alert:<br> **Recommendation type** = **Upgrade to the latest version of the Azure Connected Machine agent** | One hour |
Phase 3 sees administrators or system engineers enable automation of manual task
## Next steps
-* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
-
+* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md).
+* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
* Learn how to simplify deployment with other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and other supported [Azure VM extensions](manage-vm-extensions.md).
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
While you cannot install Azure Arc-enabled servers on an Azure VM for production
## Prerequisites

* Your account is assigned to the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role.
-* The Azure virtual machine is running an [operating system supported by Azure Arc-enabled servers](agent-overview.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json).
+* The Azure virtual machine is running an [operating system supported by Azure Arc-enabled servers](prerequisites.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-linux%2fazuredeploy.json).
* Your Azure VM can communicate outbound to download the Azure Connected Machine agent package for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent), and Linux from the Microsoft [package repository](https://packages.microsoft.com/). If outbound connectivity to the Internet is restricted following your IT security policy, you will need to download the agent package manually and copy it to a folder on the Azure VM. * An account with elevated (that is, an administrator or as root) privileges on the VM, and RDP or SSH access to the VM. * To register and manage the Azure VM with Azure Arc-enabled servers, you are a member of the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
+
+ Title: Connected Machine agent prerequisites
+description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers.
+Last updated: 03/14/2022
+# Connected Machine agent prerequisites
+
+This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have additional requirements.
+
+## Supported environments
+
+Azure Arc-enabled servers supports the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:
+
+* VMware
+* Azure Stack HCI
+* Other cloud environments
+
+Azure Arc-enabled servers does not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+
+## Supported operating systems
+
+The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
+
+* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
+ * Both Desktop and Server Core experiences are supported
+ * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
+* Azure Stack HCI
+* Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
+* CentOS Linux 7 and 8 (x64)
+* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
+* Red Hat Enterprise Linux (RHEL) 7 and 8 (x64)
+* Amazon Linux 2 (x64)
+* Oracle Linux 7 and 8 (x64)
+
+> [!WARNING]
+> If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
+
+> [!NOTE]
+> While Azure Arc-enabled servers supports Amazon Linux, the following features are not supported by this distribution:
+>
+> * The Dependency agent used by Azure Monitor VM insights
+> * Azure Automation Update Management
+
+## Software requirements
+
+* .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
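+
+You can verify both requirements with a sketch like the following (a .NET release value of 393295 or higher corresponds to .NET Framework 4.6 or later):
+
+```powershell
+# Windows PowerShell version (5.1 reports major 5, minor 1).
+$PSVersionTable.PSVersion
+
+# .NET Framework 4.x release number.
+(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
+```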
+
+## Required permissions
+
+The following Azure built-in roles are required for different aspects of managing connected machines:
+
+* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed.
+* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.
+* To select a resource group from the drop-down list when using the **Generate script** method, you must have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access).
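+
+For example, granting the onboarding role at resource group scope might look like this sketch (the user and resource group names are placeholders):
+
+```azurepowershell
+# Allow a user to onboard machines into the ArcServers resource group.
+New-AzRoleAssignment -SignInName "user@contoso.com" `
+    -RoleDefinitionName "Azure Connected Machine Onboarding" `
+    -ResourceGroupName "ArcServers"
+```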
+
+## Azure subscription and service limits
+
+Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
+
+Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
+
+## Azure resource providers
+
+To use Azure Arc-enabled servers, the following [Azure resource providers](../../azure-resource-manager/management/resource-providers-and-types.md) must be registered in your subscription:
+
+* **Microsoft.HybridCompute**
+* **Microsoft.GuestConfiguration**
+* **Microsoft.HybridConnectivity**
+
+If these resource providers are not already registered, you can register them using the following commands:
+
+Azure PowerShell:
+
+```azurepowershell-interactive
+Login-AzAccount
+Set-AzContext -SubscriptionId [subscription you want to onboard]
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
+Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
+```
+
+Azure CLI:
+
+```azurecli-interactive
+az account set --subscription "{Your Subscription Name}"
+az provider register --namespace 'Microsoft.HybridCompute'
+az provider register --namespace 'Microsoft.GuestConfiguration'
+az provider register --namespace 'Microsoft.HybridConnectivity'
+```
+
+You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
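+
+After registration completes, you can confirm the state of each provider; for example:
+
+```azurepowershell
+# Each namespace should report a RegistrationState of "Registered".
+foreach ($ns in 'Microsoft.HybridCompute', 'Microsoft.GuestConfiguration', 'Microsoft.HybridConnectivity') {
+    Get-AzResourceProvider -ProviderNamespace $ns |
+        Select-Object -First 1 ProviderNamespace, RegistrationState
+}
+```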
+
+## Next steps
+
+* Review the [networking requirements for deploying Azure Arc-enabled servers](network-requirements.md).
+* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
+* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
The following table lists some of the known errors and suggestions on how to tro
|Failed to acquire authorization token from SPN |`Invalid client secret is provided` |Wrong or invalid service principal secret. |Verify the service principal secret. |
| Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' was not found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.|
|Get ARM Resource Response |`The client 'username@domain.com' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.HybridCompute/machines/read' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01' or the scope is invalid. If access was recently granted, please refresh your credentials."}}" Status Code=403` |Wrong credentials and/or permissions |Verify you or the service principal is a member of the **Azure Connected Machine Onboarding** role. |
-|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](./agent-overview.md#register-azure-resource-providers). |
+|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). |
|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. |Verify connectivity to the endpoint and it is not blocked by a firewall or proxy server. |

<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [disconnect the agent](manage-agent.md#disconnect) and then re-register it with the service running `azcmagent connect`.
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
Before you can install an extension, you must enable guest management on the VMw
1. Make sure your target machine:
- - is running a [supported operating system](../servers/agent-overview.md#supported-operating-systems).
+ - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems).
- - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/agent-overview.md#networking-configuration) are not blocked.
+ - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked.
- has VMware tools installed and running.
azure-functions Bring Dependency To Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/bring-dependency-to-functions.md
One of the simplest ways to bring in dependencies is to put the files/artifact t
| - local.settings.json
| - pom.xml
```
-For java specifically, you need to specifically include the artifacts into the build/target folder when copying resources. Here's an example on how to do it in Maven:
+For Java specifically, you need to include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
```xml
...
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
module.exports = async function (context, info) {
### Dot notation
-If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot notation. The dot notation does not work or [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
+If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot (`.`) notation. This notation doesn't work for [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
For example, suppose your JSON looks like this:
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
When you set the `isSessionsEnabled` property or attribute on [the trigger](func
|**maxAutoLockRenewalDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that should be initiated per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
|**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
-|**maxMessages**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
+|**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
|**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
|**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.|
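These settings live in the *host.json* file for your function app. A minimal sketch, assuming version 5.x of the Service Bus extension, showing the default values from the table above:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxAutoLockRenewalDuration": "00:05:00",
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8,
      "maxMessageBatchSize": 1000,
      "enableCrossEntityTransactions": false
    }
  }
}
```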
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The following application settings can be included in the **`Values`** array whe
| Setting | Values | Description |
|--|--|--|
-|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
+|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson) |
|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates that PowerShell 7 be used when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
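Putting a few of these settings together, a minimal *local.settings.json* sketch (the function name `MyTimerFunction` is a hypothetical placeholder):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobs.MyTimerFunction.Disabled": "true"
  }
}
```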
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
env:
  AZURE_FUNCTIONAPP_NAME: your-app-name   # set this to your function app name on Azure
  POM_XML_DIRECTORY: '.'                  # set this to the directory which contains pom.xml file
  POM_FUNCTIONAPP_NAME: your-app-name     # set this to the function app name in your local development environment
- JAVA_VERSION: '1.8.x' # set this to the java version to use
+ JAVA_VERSION: '1.8.x' # set this to the Java version to use
jobs:
  build-and-deploy:
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
- Default and maximum timeouts are now enforced in 4.x Linux consumption function apps. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
+- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. See the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories) for more information on how to configure function app settings. ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+ - Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))

::: zone pivot="programming-language-csharp"
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
+
+ Title: Add a pie chart layer to an Azure Maps Power BI visual
+
+description: In this article, you will learn how to use the pie chart layer in an Azure Maps Power BI visual.
+
+Last updated: 03/15/2022
+
+# Add a pie chart layer
+
+In this article, you will learn how to add a pie chart layer to an Azure Maps Power BI visual.
+
+A pie chart is a visual representation of data in the form of a circular chart or *pie* where each slice represents an element of the dataset that is shown as a percentage of the whole. A list of numerical variables along with categorical (location) variables is required to represent data in the form of a pie chart.
++
+> [!NOTE]
+> The data used in this article comes from the [Power BI Sales and Marketing Sample](/power-bi/create-reports/sample-datasets#download-original-sample-power-bi-files).
+
+## Prerequisites
+
+- [Get started with Azure Maps Power BI visual](./power-bi-visual-get-started.md).
+- Understand [layers in the Azure Maps Power BI visual](./power-bi-visual-understanding-layers.md).
+
+## Add the pie chart layer
+
+The pie chart layer is added automatically based on which fields in the **Visualizations** pane have values; these fields include location, size, and legend.
++
+The following steps will walk you through creating a pie chart layer.
+
+1. Select two location sources from the **Fields** pane, such as city/state, to add to the **Location** field.
+1. Select a numerical field from your table, such as sales, and add it to the **Size** field in the **Visualizations** pane. This field must contain the numerical values used in the pie chart.
+1. Select a data field from your table that can be used as the category that the numerical field applies to, such as *manufacturer*, and add it to the **Legend** field in the **Visualizations** pane. This will appear as the slices of the pie; the size of each slice is a percentage of the whole based on the value in the size field, such as the number of sales broken out by manufacturer.
+1. Next, in the **Format** tab of the **Visualizations** pane, switch the **Bubbles** toggle to **On**.
+
+The pie chart layer should now appear. Next, you can adjust the pie chart settings, such as size and transparency.
+
+## Pie chart layer settings
+
+The pie chart layer is an extension of the bubbles layer, so all settings are made in the **Bubbles** section. If a field is passed into the **Legend** bucket of the **Fields** pane, the pie charts are populated and colored based on their categorization. The outline of the pie chart is white by default but can be changed to a new color. The following are the settings in the **Format** tab of the **Visualizations** pane that are available to a **Pie Chart layer**.
++
+| Setting | Description |
+|--|-|
+| Size | The size of each bubble. |
+| Fill transparency | Transparency of each pie chart. |
+| Outline color | Color that outlines the pie chart. |
+| Outline transparency | Transparency of the outline. |
+| Outline width | Width of the outline in pixels. |
+| Min zoom | Minimum zoom level at which the layer is visible. |
+| Max zoom | Maximum zoom level at which the layer is visible. |
+| Layer position | Specifies the position of the layer relative to other map layers. |
+
+## Next steps
+
+Change how your data is displayed on the map:
+
+> [!div class="nextstepaction"]
+> [Add a bar chart layer](power-bi-visual-add-bar-chart-layer.md)
+
+> [!div class="nextstepaction"]
+> [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing the Azure Monitor ag
|:|:|:|
| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
-- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md#installation-and-configuration) first (at no added cost)
+- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
- The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine.
- The virtual machine must have access to the following HTTPS endpoints:
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
You can, for example:
>
## Troubleshooting
+
+### Delayed telemetry, overloading network, or inefficient transmission
+System.Diagnostics.Tracing has an [Autoflush feature](https://docs.microsoft.com/dotnet/api/system.diagnostics.trace.autoflush). This causes the SDK to flush with every telemetry item, which is undesirable and can cause logging adapter issues like delayed telemetry, network overload, and inefficient transmission.
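+One way to avoid this is to turn autoflush off. A minimal sketch, assuming a classic .NET Framework app that configures tracing in *web.config* or *app.config*:
+
+```xml
+<configuration>
+  <system.diagnostics>
+    <!-- autoflush="false" avoids flushing the listeners after every trace call -->
+    <trace autoflush="false" />
+  </system.diagnostics>
+</configuration>
+```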
+
### How do I do this for Java?
In Java codeless instrumentation (recommended), the logs are collected out of the box; use the [Java 3.0 agent](./java-in-process-agent.md).
-If you are using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md).
+If you're using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md).
### There's no Application Insights option on the project context menu
* Make sure that Developer Analytics Tools is installed on the development machine. At Visual Studio **Tools** > **Extensions and Updates**, look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
If you are using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md
### There's no log adapter option in the configuration tool * Install the logging framework first.
-* If you're using System.Diagnostics.Trace, make sure that you have it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
+* If you're using System.Diagnostics.Trace, make sure that you have it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates**, and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it.

### <a name="emptykey"></a>I get the "Instrumentation key cannot be empty" error message
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
## Release notes
+### 2.8.44
+
+- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1-redfield.
+- Enabled SQL query collection.
+- Enabled support for Azure Active Directory (AAD) authentication.
### 2.8.42

- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1-redfield.
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -
-## Enable agent-based monitoring
+## Enable auto-instrumentation monitoring
# [Windows](#tab/Windows)
To check which version of the extension you're running, go to `https://yoursiten
Starting with version 2.8.9 the pre-installed site extension is used. If you're using an earlier version, you can update in one of two ways:
-* [Upgrade by enabling via the portal](#enable-agent-based-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
+* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only the **Enable** button. Behind the scenes, the old private site extension will be removed.)
* [Upgrade through PowerShell](#enable-through-powershell):
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
- Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.

> [!NOTE]
-> Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent-based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
+> Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the auto-instrumentation instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
> [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Enable agent-based monitoring
+## Enable auto-instrumentation monitoring
> [!NOTE]
> The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-is-not-supported).
To check which version of the extension you're running, go to `https://yoursiten
Starting with version 2.8.9 the pre-installed site extension is used. If you're using an earlier version, you can update in one of two ways:
-* [Upgrade by enabling via the portal](#enable-agent-based-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
+* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only the **Enable** button. Behind the scenes, the old private site extension will be removed.)
* [Upgrade through PowerShell](#enable-through-powershell):
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it is not running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it is not running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
- Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
The easiest way to enable application monitoring for Node.js applications runnin
Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes. > [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
### Auto-instrumentation through Azure portal
For the latest updates and bug fixes, [consult the release notes](web-app-extens
* [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive.
* [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold.
* Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page.
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web a
There are two ways to enable application monitoring for Azure App Services hosted applications:

-- **Agent-based application monitoring** (ApplicationInsightsAgent).
+- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
- This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
- - The following are support for agent-based monitoring:
+ - The following are supported with auto-instrumentation monitoring:
    - [.NET Core](./azure-web-apps-net-core.md)
    - [.NET](./azure-web-apps-net.md)
    - [Java](./azure-web-apps-java.md)
There are two ways to enable application monitoring for Azure App Services hoste
* This approach is much more customizable, but it requires the following: the SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method also means you have to manage the updates to the latest version of the packages yourself.
- * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+ * If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
> [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, in .NET only the manual instrumentation settings will be honored, while in Java only the agent-based instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
> [!NOTE]
> Snapshot debugger and profiler are only available in .NET and .NET Core.

## Next Steps
-- Learn how to enable agent-based application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
+- Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
# Request telemetry: Application Insights data model
-A request telemetry item (in [Application Insights](./app-insights-overview.md)) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by unique `ID` and `url` containing all the execution parameters. You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions may be grouped further by `resultCode`. Start time for the request telemetry defined on the envelope level.
+A request telemetry item (in [Application Insights](./app-insights-overview.md)) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `ID` and `url` containing all the execution parameters. You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions may be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level.
Request telemetry supports the standard extensibility model using custom `properties` and `measurements`.
+
## Name

Name of the request represents the code path taken to process the request. It's a low cardinality value to allow better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
The Application Insights service stores and analyzes the telemetry. To see the a
You can have data exported from the Application Insights service, for example to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary. Application Insights SDKs are available for a range of application types: web services hosted in your own Java EE or ASP.NET servers, or in Azure; web clients - that is, the code running in a web page; desktop apps and services; device apps such as Windows Phone, iOS, and Android. They all send telemetry to the same service.

## What data does it collect?
There are three sources of data:
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Want to keep your telemetry for longer than the standard retention period? Or pr
> Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
>
+
Before you set up continuous export, there are some alternatives you might want to consider:

* The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet.
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Depending on the Application Insights logging package that you use, there will b
To add Application Insights telemetry to ASP.NET Core applications, use the `Microsoft.ApplicationInsights.AspNetCore` NuGet package. You can configure this through [Visual Studio as a connected service](/visualstudio/azure/azure-app-insights-add-connected-service), or manually.
-By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-agent-based-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
+By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-auto-instrumentation-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
1. Ensure that the NuGet package is installed:
namespace WebApplication
In the preceding code, `ApplicationInsightsLoggerProvider` is configured with your `"APPINSIGHTS_INSTRUMENTATIONKEY"` instrumentation key. Filters are applied, setting the log level to <xref:Microsoft.Extensions.Logging.LogLevel.Trace?displayProperty=nameWithType>.
+
#### Example Startup.cs
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Take a copy of the instrumentation key, which identifies the resource.
![Browse all, open your resource, and then in the Essentials drop-down, select, and copy the Instrumentation Key](./media/java-collectd/instrumentation-key-001.png)
+
## Install collectd and the plug-in

On your Linux server machines:
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights is an extensible analytics service for web developers that
![In the new resource overview, click Properties and copy the Instrumentation Key](./media/java-get-started/instrumentation-key-001.png)
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+
## Add the Application Insights SDK for Java to your project

*Choose your project type.*
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Steps
1. Build your application and run.
2. The above should get you up and running with pre-aggregated metrics auto collected to Azure Monitor. For details on how to fine-tune the Application Insights Spring Boot starter, refer to the [readme on GitHub](https://github.com/Azure/azure-sdk-for-jav).
+
## Using Spring 2.x

Add the following dependencies to your pom.xml or build.gradle file:
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
If you're using Logback or Log4J (v1.2 or v2.0) for tracing, you can have your t
> [!TIP] > You only need to set your Application Insights Instrumentation Key once for your application. If you are using a framework like Java Spring, you may have already registered the key elsewhere in your app's configuration. + ## Using the Application Insights Java agent By default, the Application Insights Java agent automatically captures logging performed at `WARN` level and above.
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
Questions or problems with [Azure Application Insights in Java][java]? Here are
* Please also look at the [GitHub issues page](https://github.com/microsoft/ApplicationInsights-Java/issues) for known issues with the SDK.
* Please ensure you use the same version of the Application Insights core, web, agent, and logging appenders to avoid any version conflict issues.
+
#### I used to see data, but it has stopped

* Have you hit your monthly quota of data points? Open Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
* Have you recently upgraded your SDK? Please ensure that only unique SDK jars are present inside the project directory. There should not be two different versions of the SDK present.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
> Please review all the configuration options below carefully, as the json structure has completely changed,
> in addition to the file name itself which went all lowercase.
+
## Connection string and role name

Connection string and role name are the most common settings needed to get started:
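A minimal *applicationinsights.json* sketch showing both settings (the values are placeholders):

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "role": {
    "name": "my-service"
  }
}
```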
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
When a span is started, the attributes present on the span at that time are used to check if any of the sampling overrides match.
+Matches can be either `strict` or `regexp`. Regular expression matches are performed against the entire attribute value,
+so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
If one of the sampling overrides matches, then its sampling percentage is used to decide whether to sample the span or not.
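For example, a sampling override that drops all spans for a hypothetical health-check endpoint might look like the following sketch (the `http.url` attribute key and URL pattern are illustrative, and the `preview` wrapper assumes the sampling-overrides preview schema):

```json
{
  "preview": {
    "sampling": {
      "overrides": [
        {
          "attributes": [
            {
              "key": "http.url",
              "value": ".*/health-check",
              "matchType": "regexp"
            }
          ],
          "percentage": 0
        }
      ]
    }
  }
}
```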
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
To configure this option, under `include` or `exclude` (or both), specify at lea
The include-exclude configuration allows more than one specified condition. All specified conditions must evaluate to true to result in a match.
-* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted.
+ Possible values are `regexp` and `strict`. Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* **Optional fields**:
    * `spanNames` must match at least one of the items.
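As a sketch under the preview processor schema, an `include` block that matches span names by regular expression and then deletes a hypothetical `db.statement` attribute might look like:

```json
{
  "preview": {
    "processors": [
      {
        "type": "attribute",
        "include": {
          "matchType": "regexp",
          "spanNames": [ "GET /api/.*" ]
        },
        "actions": [
          {
            "key": "db.statement",
            "action": "delete"
          }
        ]
      }
    ]
  }
}
```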
To configure this option, under `include` or `exclude` (or both), specify at lea
The include-exclude configuration allows more than one specified condition. All specified conditions must evaluate to true to result in a match.
-* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted.
+ Possible values are `regexp` and `strict`. Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* **Optional fields**:
    * `spanNames` must match at least one of the items.
The include-exclude configuration allows more than one specified condition.
All specified conditions must evaluate to true to result in a match. * **Required field**:
- * `matchType` controls how items in `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+ * `matchType` controls how items in `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+ Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* `attributes` specifies the list of attributes to match. All of these attributes must match exactly to result in a match.

> [!NOTE]
To configure this option, under `exclude`, specify the `matchType` one or more `
* **Required field**:
    * `matchType` controls how items in `metricNames` are matched. Possible values are `regexp` and `strict`.
- * `metricNames` must match at least one of the items.
+ Regular expression matches are performed against the entire metric name,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
+ * `metricNames` must match at least one of the items.
### Sample usage
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this case, the server side is the Application Insights ingestion endpoint or
#### How to add the missing cipher suites:
-If using Java 9 or later, please check if the JVM has `jdk.crypto.cryptoki` module included in the jmods folder. Also if you are building a custom java runtime using `jlink` please make sure to include the same module.
+If using Java 9 or later, please check if the JVM has `jdk.crypto.cryptoki` module included in the jmods folder. Also if you are building a custom Java runtime using `jlink` please make sure to include the same module.
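For the Java 9+ case, one way to check for the module and include it when building a custom runtime; a sketch assuming a Linux shell (the output directory name `my-custom-runtime` is a placeholder):

```bash
# Verify the current runtime includes the module that provides these cipher suites
java --list-modules | grep jdk.crypto.cryptoki

# When assembling a custom runtime with jlink, add the module explicitly
jlink --add-modules java.base,jdk.crypto.cryptoki --output my-custom-runtime
```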
Otherwise, these cipher suites should already be part of modern Java 8+ distributions, so it is recommended to check where you installed your Java distribution from, and investigate why the security
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
that was pointing to the 2.x agent.
The rest of this document describes limitations and changes that you may encounter when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful.
+
## TelemetryInitializers and TelemetryProcessors

The 2.x SDK TelemetryInitializers and TelemetryProcessors will not be run when using the 3.x agent.
or configuring [telemetry processors](./java-standalone-telemetry-processors.md)
This use case is supported in Application Insights Java 3.x using [Instrumentation keys overrides (preview)](./java-standalone-config.md#instrumentation-keys-overrides-preview).
+
## Operation names

In the Application Insights Java 2.x SDK, in some cases, the operation names contained the full path, e.g.
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
ms.devlang: javascript
This plugin automatically tracks click events on web pages and uses data-* attributes on HTML elements to populate event telemetry.
+
## Getting started

Users can set up the Click Analytics Auto-collection plugin via npm.
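As a sketch of the npm-based setup (the instrumentation key is a placeholder):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';

// Create and configure the Click Analytics plugin
const clickPluginInstance = new ClickAnalyticsPlugin();
const clickPluginConfig = {
    autoCapture: true
};

// Pass the plugin and its configuration to the Application Insights instance
const appInsights = new ApplicationInsights({
    config: {
        instrumentationKey: 'YOUR_INSTRUMENTATION_KEY',
        extensions: [clickPluginInstance],
        extensionConfig: {
            [clickPluginInstance.identifier]: clickPluginConfig
        }
    }
});
appInsights.loadAppInsights();
```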
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
If there are exceptions being reported in the SDK script (for example ai.2.min.j
To check for faulty configuration, change the configuration passed into the snippet (if not already) so that it only includes your instrumentation key as a string value.
+
```js
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
cfg: {
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs in conjunction with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+ ## Adding the JavaScript SDK
Application Insights can be used with any web pages - you just add a short piece
> [Connection Strings](./sdk-connection-string.md?tabs=js) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.

1. First you need an Application Insights resource. If you don't already have a resource and instrumentation key, follow the [create a new resource instructions](create-new-resource.md).
-2. Copy the _instrumentation key_ (also known as "iKey") or [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You will add it to the `instrumentationKey` or `connectionString` setting of the Application Insights JavaScript SDK.
+2. Copy the _instrumentation key_ (also known as "iKey") or [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1). You'll add it to the `instrumentationKey` or `connectionString` setting of the Application Insights JavaScript SDK.
3. Add the Application Insights JavaScript SDK to your web page or app via one of the following two options:
    * [npm Setup](#npm-based-setup)
    * [JavaScript Snippet](#snippet-based-setup)
appInsights.trackPageView(); // Manually call trackPageView to establish the cur
### Snippet based setup
-If your app does not use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you are using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
+If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using a Blazor Server app, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
To assist with tracking which version of the snippet your application is using, starting from version 2.5.5 the page view event will include a new tag "ai.internal.snippet" that will contain the identified snippet version.
cfg: { // Application Insights Configuration
#### Reporting Script load failures
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser), this exception provides visibility into failures of this type so that you are aware that your application is not reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK did not load or initialize which can lead to:
+This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under failures &gt; exceptions &gt; browser). This exception provides visibility into failures of this type so that you're aware that your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize, which can lead to:
- Under-reporting of how users are using (or trying to use) your site;
- Missing telemetry on how your end users are using your site;
- Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.

For details on this exception see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
-Reporting of this failure as an exception to the portal does not use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled.
+Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration, and therefore if this failure occurs, it will always be reported by the snippet, even when the window.onerror support is disabled.
-Reporting of SDK load failures is specifically NOT supported on IE 8 (or less). This assists with reducing the minified size of the snippet by assuming that most environments are not exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you will need to either include a fetch poly fill or create you own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it is recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures is not supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. It's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE]
> If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues.

#### Snippet configuration options
-All configuration options have now been move towards the end of the script to help avoid accidentally introducing JavaScript errors that would not just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have now been moved towards the end of the script to help avoid accidentally introducing JavaScript errors that would not only cause the SDK to fail to load, but would also disable the reporting of the failure.
Each configuration option is shown above on a new line. If you don't wish to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page.
The available configuration options are
| Name | Type | Description |
|------|------|-------------|
| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one. |
-| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```, this is required by the SDK initialization code to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
+| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms and any negative value will immediately add a script tag to the &lt;head&gt; region of the page, which will then block the page load event until the script is loaded (or fails). |
| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fall back to XHR; setting this value to true just bypasses the fetch check. Use of this value is only required if your application is being used in an environment where fetch would fail to send the failure events. |
-| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default) no crossOrigin attribute is added. Recommended values are not defined (the default); ""; or "anonymous" (For all valid values see [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation)
+| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default) no crossOrigin attribute is added. Recommended values are: not defined (the default); ""; or "anonymous". (For all valid values see the [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation.) |
| cfg | object **[required]** | The configuration passed to the Application Insights SDK during initialization. |

### Connection String Setup
-For either the NPM or Snippet setup, you can also configure your instance of Application Insights using a Connection String. Simply replace the `instrumentationKey` field with the `connectionString` field.
+For either the NPM or Snippet setup, you can also configure your instance of Application Insights using a Connection String. Replace the `instrumentationKey` field with the `connectionString` field.
```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.trackPageView();
### Sending telemetry to the Azure portal
-By default the Application Insights JavaScript SDK autocollects a number of telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
+By default the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
- **Uncaught exceptions** in your app, including information on
    - Stack trace
By default the Application Insights JavaScript SDK autocollects a number of tele
- **Session information**

### Telemetry initializers
-Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they are executed in order of adding them.
+Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they're executed in order of adding them.
-The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item is not sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
+The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item isn't sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
An example of using telemetry initializers:

```ts
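// The body of this example was truncated in this digest; a minimal sketch follows.
// It tags every telemetry item with a cloud role name and shows where an item could be dropped.
const telemetryInitializer = (envelope) => {
  envelope.tags = envelope.tags || {};
  envelope.tags["ai.cloud.role"] = "your role name"; // illustrative value
  // return false; // uncomment to drop this item instead of sending it
};
appInsights.addTelemetryInitializer(telemetryInitializer);
```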
Most configuration fields are named such that they can be defaulted to false. Al
| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null |
| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
-| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it is immediately sent and a new batch is started | numeric<br/>10000 |
+| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | numeric<br/>10000 |
| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 |
-| disable&#8203;ExceptionTracking | If true, exceptions are not autocollected. | boolean<br/> false |
-| disableTelemetry | If true, telemetry is not collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you do not want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
+| disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false |
+| disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
-| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It is sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
-| disableAjaxTracking | If true, Ajax calls are not autocollected. | boolean<br/> false |
-| disableFetchTracking | If true, Fetch requests are not autocollected.|boolean<br/>true |
+| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
+| disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false |
+| disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |
| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. | boolean<br/> |
| maxAjaxCallsPerView | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
Most configuration fields are named such that they can be defaulted to false. Al
| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains | string[]<br/>undefined |
| correlationHeader&#8203;ExcludePatterns | Disable correlation headers using regular expressions | regex[]<br/>undefined |
| correlationHeader&#8203;Domains | Enable correlation headers for specific domains | string[]<br/>undefined |
-| disableFlush&#8203;OnBeforeUnload | If true, flush method will not be called when onBeforeUnload event triggers | boolean<br/> false |
+| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when onBeforeUnload event triggers | boolean<br/> false |
| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true |
| cookieCfg | Defaults to cookie usage enabled; see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK will not store or read any data from cookies. Note that this disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. This disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisabled is deprecated in favor of disableCookiesUsage; when both are provided, disableCookiesUsage takes precedence.<br>(Since v2.6.0) If `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
| cookieDomain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
| cookiePath | Custom cookie path. This is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false |
-| isStorageUseDisabled | If true, the SDK will not store or read any data from local and session storage. | boolean<br/> false |
+| isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false |
| isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/> false |
| sdkExtension | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). | string<br/> null |
| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
-| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it cannot be used automatically, but can be set manually in the configuration. |string<br/> null |
+| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. |string<br/> null |
| enable&#8203;CorsCorrelation | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
| namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |
| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views. | boolean<br/>false |
Most configuration fields are named such that they can be defaulted to false. Al
| enable&#8203;AjaxPerfTracking | Flag to enable looking up and including additional browser window.performance timings in the reported `ajax` (XHR and fetch) metrics. | boolean<br/> false |
| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This is required as not all browsers populate the window.performance before reporting the end of the XHR request; for fetch requests this is added after it's complete. | numeric<br/> 3 |
| ajaxPerfLookupDelay | The amount of time to wait before re-attempting to find the window.performance timings for an `ajax` request. Time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
-| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections will not be reported. | boolean<br/> false |
+| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
| disable&#8203;InstrumentationKey&#8203;Validation | If true, instrumentation key validation check is bypassed. | boolean<br/>false |
| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
-| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and it's _parent_ property is not null or undefined. Since v2.5.7 | boolean<br />false |
-| idLength | Identifies the default length used to generate new random session and user id values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
+| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires an [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent(), this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
+| idLength | The default length used to generate new random session and user id values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
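For illustration, here is a hedged sketch of passing a few of the fields above through the NPM setup; the instrumentation key is a placeholder and the chosen fields are examples, not a recommendation:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  samplingPercentage: 50,       // send roughly half of all events
  disableFetchTracking: false,  // autocollect fetch requests
  maxBatchInterval: 15000       // batch telemetry for up to 15 s before sending
}});
appInsights.loadAppInsights();
```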
## Cookie Handling
The instance based cookie management also replaces the previous CoreUtils global
### ICookieMgrConfig
-Cookie Configuration for instance based cookie management added in version 2.6.0.
+Cookie Configuration for instance-based cookie management added in version 2.6.0.
| Name | Description | Type and Default |
|------|-------------|------------------|
-| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration will not store or read any data from cookies | boolean<br/> true |
+| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true |
| domain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / |
| getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null |
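As a minimal sketch (assuming SDK v2.6.0+; the instrumentation key and domain are placeholders), these fields are supplied via the root `cookieCfg` setting:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  cookieCfg: {
    enabled: true,         // allow the SDK to read/write cookies
    domain: 'example.com', // hypothetical domain shared across subdomains
    path: '/'
  }
}});
appInsights.loadAppInsights();
```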
Cookie Configuration for instance based cookie management added in version 2.6.0
## Enable time-on-page tracking
-By setting `autoTrackPageVisitTime: true`, the time a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
+By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
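A minimal sketch of enabling this setting (the instrumentation key is a placeholder):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  autoTrackPageVisitTime: true // emit PageVisitTime for the previous page on each new PageView
}});
appInsights.loadAppInsights();
```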
## Enable Correlation
cfg: { // Application Insights Configuration
```
-If any of your third-party servers that the client communicates with cannot accept the `Request-Id` and `Request-Context` headers, and you cannot update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludedDomains` configuration property. This property supports wildcards.
+If any of your third-party servers that the client communicates with can't accept the `Request-Id` and `Request-Context` headers, and you can't update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludedDomains` configuration property. This property supports wildcards.
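A hedged sketch of such an exclude list (the host names here are hypothetical; wildcards are supported):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  enableCorsCorrelation: true,
  correlationHeaderExcludedDomains: [
    'thirdparty.example.com', // hypothetical server that rejects the headers
    '*.queue.core.windows.net'
  ]
}});
appInsights.loadAppInsights();
```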
-The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it is often necessary to extend the server-side list by manually adding `Request-Id` and `Request-Context`.
+The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it's often necessary to extend the server-side list by manually adding `Request-Id` and `Request-Context`.
Access-Control-Allow-Headers: `Request-Id`, `Request-Context`, `<your header>`
By default, this SDK will **not** handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration.
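A minimal sketch (the instrumentation key is a placeholder):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  enableAutoRouteTracking: true // send a new PageView on each SPA route change
}});
appInsights.loadAppInsights();
```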
-Currently, we offer a separate [React plugin](javascript-react-plugin.md), which you can initialize with this SDK. It will also accomplish route change tracking for you, as well as collect other React specific telemetry.
+Currently, we offer a separate [React plugin](javascript-react-plugin.md), which you can initialize with this SDK. It will also accomplish route change tracking for you, and collect other React-specific telemetry.
> [!NOTE]
> Use `enableAutoRouteTracking: true` only if you are **not** using the React plugin. Both are capable of sending new PageViews when the route changes. If both are enabled, duplicate PageViews may be sent.
Currently, we offer a separate [React plugin](javascript-react-plugin.md), which
## Explore browser/client-side data
-Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you are interested in:
+Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in:
![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
Select **Browser** and then choose **Failures** or **Performance**.
### Analytics
-To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you will only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded.
+To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded.
```kusto
// average pageView duration by name
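// The query body was truncated in this digest; a minimal sketch:
pageViews
| where client_Type == "Browser"
| summarize avg(duration) by name, bin(timestamp, 5m)
```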
For a lightweight experience, you can instead install the basic version of Appli
```
npm i --save @microsoft/applicationinsights-web-basic
```
-This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., are not included in this version, so you will need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
+This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., aren't included in this version, so you'll need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
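A rough sketch only, assuming the basic SKU's constructor takes the configuration directly and that the telemetry item shape matches the linked sample (both are assumptions; check the sample for the exact API):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web-basic';

// Assumption: configuration is passed directly rather than wrapped in { config: ... }.
const appInsights = new ApplicationInsights({
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY' // placeholder
});

// Only track() is available; you assemble the telemetry item yourself.
appInsights.track({
  name: 'myCustomEvent', // hypothetical event name
  data: { page: 'checkout' }
});
```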
## Examples
For runnable examples, see [Application Insights JavaScript SDK Samples](https:/
## Upgrading from the old version of Application Insights

Breaking changes in the SDK V2 version:
-- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser is not supported.
+- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported.
- The telemetry envelope has field name and structure changes due to data schema updates.
- Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).
- To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
Test in internal environment to verify monitoring telemetry is working as expect
At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. By using the snippet, minimal components of the library are quickly loaded. In the meantime, the full script is downloaded in the background.
-While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you will not lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
+While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
> Summary:
> - ![npm version](https://badge.fury.io/js/%40microsoft%2Fapplicationinsights-web.svg)
Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Edge ✔<br>IE 8- Compatible |
## ES3/IE8 Compatibility
-As an SDK there are numerous users that cannot control the browsers that their customers use. As such we need to ensure that this SDK continues to "work" and does not break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work" and as noted they may or cannot control which browser that their end users choose to use.
+As an SDK, there are numerous users who can't control the browsers that their customers use. As such, we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work", and, as noted, they may not or cannot control which browser their end users choose to use.
-This does NOT mean that we will only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they will need to be added in a manner that would not break ES3 JavaScript parsing and added as an optional feature.
+This does NOT mean that we'll only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters.
+
### Enable LiveMetrics using code for any .NET application
Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to set up Live Metrics
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
To complete this tutorial, you need:
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
## Sign up with App Center
To begin, create an account and [sign up with App Center](https://appcenter.ms/signup?utm_source=ApplicationInsights&utm_medium=Azure&utm_campaign=docs).
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data, and automaticall
The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information, see [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd).
+
## Distributed tracing for Java applications (public preview)

> [!IMPORTANT]
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal][portal].
2. [Create an Application Insights resource](create-new-resource.md)
+
### <a name="sdk"></a> Set up the Node.js client library
Include the SDK in your app, so it can gather data.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
You may have noted that OpenCensus is converging into [OpenTelemetry](https://op
- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The Opencensus Python SDK only supports Python v2.7 and v3.4+.
- Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource.
+
## Introducing Opencensus Python SDK
[OpenCensus](https://opencensus.io) is a set of open source libraries to allow collection of distributed tracing, metrics and logging telemetry. Through the use of [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you will be able to send this collected telemetry to Application Insights. This article walks you through the process of setting up OpenCensus and Azure Monitor Exporters for Python to send your monitoring data to Azure Monitor.
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Additional properties are available via the cmdlets:
Refer to the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets.
+
## Set the data retention
Below are three methods to programmatically set the data retention on an Application Insights resource.
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
The volume of data you send can be managed using the following techniques:
* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
+
## Manage your maximum daily data volume
You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-cloudservice.md
Application Insights Profiler is installed with the Azure Diagnostics extension.
> After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.

1. Deploy your service with the new Diagnostics configuration, and Application Insights Profiler is configured to run on your service.
+
## Next steps
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-servicefabric.md
To set up your environment, take the following actions:
* Generate traffic to your application (for example, launch an [availability test](monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
* For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-trackrequests.md
To view profiles for your application on the Performance page, Azure Application
For other applications, such as Azure Cloud Services worker roles and Service Fabric stateless APIs, you need to write code to tell Application Insights where your requests begin and end. After you've written this code, requests telemetry is sent to Application Insights. You can view the telemetry on the Performance page, and profiles are collected for those requests.
+
To manually track requests, do the following:

1. Early in the application lifetime, add the following code:
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Insert a line like `samplingPercentage: 10,` before the instrumentation key:
```
appInsights.trackPageView();
</script>
```
For the sampling percentage, choose a percentage that is close to 100/N where N is an integer (for example, 50, 33.33, 25, or 10). Currently sampling doesn't support other values.
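For comparison, a minimal sketch of the same setting with the NPM setup (the instrumentation key is a placeholder):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR_INSTRUMENTATION_KEY', // placeholder
  samplingPercentage: 10 // N = 10: roughly 1 in 10 events is retained
}});
appInsights.loadAppInsights();
```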
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
The key value pairs provide an easy way for users to define a prefix suffix comb
> [!TIP]
> We recommend the use of connection strings over instrumentation keys.
+
## Scenario overview
Customer scenarios where we visualize this having the most impact:
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
When you are developing the next version of a web application, you don't want to
(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/cloudservices.md).)
+
## About resources and instrumentation keys
When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Add the code just before the </head> tag.
![Screenshot that shows where to add the code to your site page.](./media/sharepoint/04-code.png)
+
#### Or on individual pages
To monitor a limited set of pages, add the script separately to each page.
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
This article contains the releases notes for the Microsoft.ApplicationInsights.S
For bug reports and feedback, open an issue on GitHub at https://github.com/microsoft/ApplicationInsights-SnapshotCollector
+
## Release notes

## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
If that doesn't solve the problem, then refer to the following manual troublesho
Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the ApplicationInsights.config file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
+
## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
If you have an ASP.NET application that is hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machine
```json
}
}
```
-
## Next steps

- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
N/A
|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
|Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|

#### Attach Statsbeat

|Metric Name|Unit|Supported dimensions|
azure-monitor Status Monitor V2 Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-api-reference.md
This article describes a cmdlet that's a member of the [Az.ApplicationMonitor Po
> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-new-resource.md#copy-the-instrumentation-key).
> - This cmdlet requires that you review and accept our license and privacy statement.
+
> [!IMPORTANT]
> This cmdlet requires a PowerShell session with Admin permissions and an elevated execution policy. For more information, see [Run PowerShell as administrator with an elevated execution policy](status-monitor-v2-detailed-instructions.md#run-powershell-as-admin-with-an-elevated-execution-policy).
> - This cmdlet requires that you review and accept our license and privacy statement.
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
We've also provided manual download instructions in case you don't have internet
To get started, you need an instrumentation key. For more information, see [Create an Application Insights resource](create-new-resource.md#copy-the-instrumentation-key).
+
## Run PowerShell as Admin with an elevated execution policy

### Run as Admin
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources with separate instrumentation keys. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
+
## What is a Component?
Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
```
}});
</script>
```
- To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
+To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
3. **Mobile app code:** Use the App Center SDK to collect events from your app, then send copies of these events to Application Insights for analysis by [following this guide](../app/mobile-center-quickstart.md).
azure-monitor Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio.md
It's also useful if you have some [custom telemetry](./api-custom-events-metrics
* In the Search window's Settings, there's an option to search local diagnostics even if your app sends telemetry to the portal.
* To stop telemetry being sent to the portal, comment out the line `<instrumentationkey>...` from ApplicationInsights.config. When you're ready to send telemetry to the portal again, uncomment it.

## Next steps
azure-monitor Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/windows-desktop.md
Applications hosted on premises, in Azure, and in other clouds can all take adva
5. [Use the API](./api-custom-events-metrics.md) to send telemetry.
6. Run your app, and see the telemetry in the resource you created in the Azure portal.
+
## <a name="telemetry"></a>Example code
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
A valid Application Insights instrumentation key. This key is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get an instrumentation key, see [Create an Application Insights resource](./create-new-resource.md).
+
## Using Application Insights SDK for Worker Services

1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [MicrosoftHealthcareApisAuditLogs](/azure/azure-monitor/reference/tables/microsofthealthcareapisauditlogs) | |
| [NWConnectionMonitorPathResult](/azure/azure-monitor/reference/tables/nwconnectionmonitorpathresult) | |
| [NWConnectionMonitorTestResult](/azure/azure-monitor/reference/tables/nwconnectionmonitortestresult) | |
-| [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | ||
-| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support ΓÇô only windows perf data is currently supported. | |
+| [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | |
+| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support ΓÇô only windows perf data is currently supported. |
| [PowerBIDatasetsWorkspace](/azure/azure-monitor/reference/tables/powerbidatasetsworkspace) | |
| [HDInsightRangerAuditLogs](/azure/azure-monitor/reference/tables/hdinsightrangerauditlogs) | |
| [PurviewScanStatusLogs](/azure/azure-monitor/reference/tables/purviewscanstatuslogs) | |
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
In this tutorial, you learn to:
To complete this tutorial, you need the following:

- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
-- [Permissions to create Data Collection Rule objects](/essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
## Overview of tutorial
There is currently a known issue affecting dynamic columns. A temporary workarou
- [Read more about ingestion-time transformations](ingestion-time-transformations.md)
- [See which tables support ingestion-time transformations](tables-feature-support.md)
-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
There's no more cost for Azure Arc-enabled servers, but there might be some cost
### Machines that can't use Azure Arc-enabled servers

If you have any hybrid machines that match the following criteria, they won't be able to use Azure Arc-enabled servers:

-- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/agent-overview.md#prerequisites).
+- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/prerequisites.md#supported-operating-systems).
- Your security policy doesn't allow machines to connect directly to Azure. The Log Analytics agent can use the [Log Analytics gateway](../agents/gateway.md) whether or not Azure Arc-enabled servers are installed. The server agents enabled by Azure Arc must connect directly to Azure. You can still monitor these machines with Azure Monitor, but you need to manually install their agents. To manually install the Log Analytics agent and Dependency agent on those hybrid machines, see [Enable VM insights for a hybrid virtual machine](vminsights-enable-hybrid.md).
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 03/03/2022 Last updated : 03/15/2022
# Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes
The following information is passed to the server in the query:
* [Create and manage Active Directory connections](create-active-directory-connections.md) * [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
+* [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
na Previously updated : 01/04/2022 Last updated : 03/15/2022
# Configure ADDS LDAP over TLS for Azure NetApp Files
Disabling LDAP over TLS stops encrypting LDAP queries to Active Directory (LDAP
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 03/11/2022 Last updated : 03/15/2022
# Create and manage Active Directory connections for Azure NetApp Files
-Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), a [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
+Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), a [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
## Before you begin
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
## Next steps
+* [Modify Active Directory connections](modify-active-directory-connections.md)
* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume](create-volumes-dual-protocol.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
+
+ Title: Modify an Active Directory Connection for Azure NetApp Files | Microsoft Docs
+description: This article shows you how to modify Active Directory connections for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 03/15/2022+++
+# Modify Active Directory connections for Azure NetApp Files
+
+Once you have [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. Not all of the connection's configurations can be modified, however.
+
+## Modify Active Directory connections
+
+1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
+
+1. In the **Edit Active Directory** window that appears, modify Active Directory connection configurations as needed. See [Options for Active Directory connections](#options-for-active-directory-connections) for an explanation of what fields can be modified.
+
+## Options for Active Directory connections
+
+|Field Name |What it is |Can it be modified? |Considerations & Impacts |Effect |
+|:-:|:--|:-:|:--|:--|
+| Primary DNS | Primary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution. |
+| Secondary DNS | Secondary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution in case primary DNS fails. |
+| AD DNS Domain Name | The domain name of your Active Directory Domain Services that you want to join. | No | None | N/A |
+| AD Site Name | The site to which the domain controller discovery is limited. | Yes | This should match the site name in Active Directory Sites and Services. See footnote.* | Domain discovery will be limited to the new site name. If not specified, "Default-First-Site-Name" will be used. |
+| SMB Server (Computer Account) Prefix | Naming prefix for the machine account in Active Directory that Azure NetApp Files will use for the creation of new accounts. See footnote.* | Yes | Existing volumes need to be mounted again as the mount is changed for SMB shares and NFS Kerberos volumes.* | Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You'll need to remount existing SMB shares and NFS Kerberos volumes after renaming the SMB server prefix as the mount path will change. |
+| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. `OU=second level`, `OU=first level` | No | If you are using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account. | Machine accounts will be placed under the specified OU. If not specified, the default of `OU=Computers` is used. |
+| AES Encryption | To take advantage of the strongest security with Kerberos-based communication, you can enable AES-256 and AES-128 encryption on the SMB server. | Yes | If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled, matching the capabilities enabled for your Active Directory. For example, if your Active Directory has only AES-128 enabled, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.* | Enable AES encryption for Active Directory Authentication |
+| LDAP Signing | This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controller. | Yes | LDAP signing to Require Signing in group policy* | This provides ways to increase the security for communication between LDAP clients and Active Directory domain controllers. |
+| Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. |
+| LDAP over TLS | If enabled, LDAP over TLS will be configured to support secure LDAP communication to active directory. | Yes | None | If LDAP over TLS is enabled and if the server root CA certificate is already present in the database, then LDAP traffic is secured using the CA certificate. If a new certificate is passed in, that certificate will be installed. |
+| Server root CA Certificate | When LDAP over SSL/TLS is enabled, the LDAP client is required to have base64-encoded Active Directory Certificate Service's self-signed root CA certificate. | Yes | None* | LDAP traffic secured with new certificate only if LDAP over TLS is enabled |
+| Backup policy users | You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. |
+| Administrators | Specify users or groups that will be given administrator privileges on the volume | Yes | None | User account will receive administrator privileges |
+| Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
+| Password | Password of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
+| Kerberos Realm: AD Server Name | The name of the Active Directory machine. This option is only used when creating a Kerberos volume. | Yes | None* | |
+| Kerberos Realm: KDC IP | Specifies the IP address of the Kerberos Distribution Center (KDC) server. The KDC in Azure NetApp Files is an Active Directory server. | Yes | None* | A new KDC IP address will be used. |
+| Region | The region where the Active Directory credentials are associated | No | None | N/A |
+| User DN | User distinguished name (DN), which overrides the base DN for user lookups. A nested user DN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | User search scope gets limited to User DN instead of base DN. |
+| Group DN | Group distinguished name (DN), which overrides the base DN for group lookups. A nested group DN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | Group search scope gets limited to Group DN instead of base DN. |
+| Group Membership Filter | The custom LDAP search filter to be used when looking up group membership from the LDAP server. `groupMembershipFilter` can be specified in the `(gidNumber=*)` format. | Yes | None* | The group membership filter will be used when querying a user's group membership from the LDAP server. |
+| Security Privilege Users | You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the Security privilege users field. When you add the SQL Server installer's account to Security privilege users, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller. For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right.md).* | Allows non-administrator accounts to use SQL Server on top of Azure NetApp Files volumes. |
+
+**\*Modifying these entries has no impact as long as the changes are entered correctly. If you enter data incorrectly, users and applications will lose access.**
+
+## Next steps
+
+* [Configure ADDS LDAP with extended groups for NFS](configure-ldap-extended-groups.md)
+* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [Create and manage Active Directory connections](create-active-directory-connections.md)
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
Given the need for 1,250 clients, you could safely set `sunrpc.max_tcp_slot_tabl
## NFSv4.1
-In NFSv4.1, sessions define the relationship between the client and the server. Weather the mounted NFS file systems sit atop one connection or many (as is the case with `nconnect`), the rules for the session apply. At session setup, the client and server negotiate the maximum requests for the session, settling on the lower of the two supported values. Azure NetApp Files supports 180 outstanding requests, and Linux clients default to 64. The following table shows the session limits:
+In NFSv4.1, sessions define the relationship between the client and the server. Whether the mounted NFS file systems sit atop one connection or many (as is the case with `nconnect`), the rules for the session apply. At session setup, the client and server negotiate the maximum requests for the session, settling on the lower of the two supported values. Azure NetApp Files supports 180 outstanding requests, and Linux clients default to 64. The following table shows the session limits:
| Azure NetApp Files NFSv4.1 server <br> Max commands per session | Linux client <br> Default max commands per session | Negotiated max commands for the session |
|-|-|-|
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 12/08/2021 Last updated : 03/15/2022 + # Bicep CLI commands This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands.
module stgModule 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
} ```
-The local cache is found at:
+The local cache is found in:
-```path
-%USERPROFILE%\.bicep\br\<registry-name>.azurecr.io\<module-path\<tag>
-```
+- On Windows
+
+ ```path
+ %USERPROFILE%\.bicep\br\<registry-name>.azurecr.io\<module-path\<tag>
+ ```
+
+- On Linux
+
+ ```path
+ /home/<username>/.bicep
+ ```
## upgrade
If you haven't installed Bicep CLI, you see an error indicating Bicep CLI wasn't
To learn about deploying a Bicep file, see:
-* [Azure CLI](deploy-cli.md)
-* [Cloud Shell](deploy-cloud-shell.md)
-* [PowerShell](deploy-powershell.md)
+- [Azure CLI](deploy-cli.md)
+- [Cloud Shell](deploy-cloud-shell.md)
+- [PowerShell](deploy-powershell.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 01/21/2022 Last updated : 03/14/2022 # What is Bicep? Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
-Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
+Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
## Benefits of Bicep
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 12/01/2021 Last updated : 03/14/2022 # Azure subscription and service limits, quotas, and constraints
For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for
For more information, see [Functions Hosting plans comparison](../../azure-functions/functions-scale.md).
-## Azure Healthcare APIs
+## Azure Health Data Services
-### Healthcare APIs service limits
+### Azure Health Data Services limits
[!INCLUDE [functions-limits](../../../includes/azure-healthcare-api-limits.md)]
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | |
> | workspaces | global | 1-50 | Lowercase letters, hyphens, and numbers.<br><br>Start and end with letter or number.<br><br>Can't contain `-ondemand` |
> | workspaces / bigDataPools | workspace | 1-15 | Letters and numbers.<br><br>Start with letter. End with letter or number.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
-> | workspaces / sqlPools | workspace | 1-60 | Can't contain `<>*%&:\/?@-` or control characters.<br><br>Can't end with `.` or space.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
+> | workspaces / sqlPools | workspace | 1-15 | Can contain only letters, numbers, or underscore.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
## Microsoft.TimeSeriesInsights
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 01/28/2022 Last updated : 03/15/2022
You apply tags to your Azure resources, resource groups, and subscriptions to lo
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+> [!WARNING]
+> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, tag taxonomies, deployment histories, exported templates, and monitoring logs.
+ > [!IMPORTANT] > Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports. >
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Configure and run the Azure Function app
-1. Make sure you have Azure Function Core Tools, java (version 11 in the sample) and maven installed.
+1. Make sure you have Azure Functions Core Tools, Java (version 11 in the sample), and Maven installed.
```bash
mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=11
```
azure-sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/azure-hybrid-benefit.md
SQL Database and SQL Managed Instance customers have the following rights associ
||| |SQL Server Enterprise Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = Four vCores in Hyperscale SKU</li><br><li>One core on-premises = Four vCores in General Purpose SKU</li><br><li>One core on-premises = One vCore in Business Critical SKU</li>| |SQL Server Standard Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = One vCore in Hyperscale SKU</li><br><li>One core on-premises = One vCore in General Purpose SKU</li><br><li>Four cores on-premises = One vCore in Business Critical SKU</li>|
-|||
## Next steps
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
For the C# program to successfully run, you need to assign proper values to stat
| Initial_DatabaseName | "myDatabase" | **SQL servers** > **SQL databases** | | ClientApplicationID | "a94f9c62-97fe-4d19-b06d-111111111111" | **Azure Active Directory** > **App registrations** > **Search by name** > **Application ID** | | RedirectUri | new Uri("https://mywebserver.com/") | **Azure Active Directory** > **App registrations** > **Search by name** > *[Your-App-registration]* > **Settings** > **RedirectURIs**<br /><br />For this article, any valid value is fine for RedirectUri, because it isn't used here. |
-| &nbsp; | &nbsp; | &nbsp; |
## Verify with SQL Server Management Studio
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-overview.md
As discussed previously, active geo-replication can also be managed programmatic
| [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) |Gets the last replication time, last replication lag, and other information about the replication link for a given database. | | [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) |Shows the status for all database operations including changes to replication links. | | [sys.sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) |Causes the application to wait until all committed transactions are hardened to the transaction log of a geo-secondary. |
-| | |
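For example, you can check the state and lag of each replication link by querying `sys.dm_geo_replication_link_status` in the primary database. A minimal sketch, using columns from the linked view documentation:

```sql
-- Run in the primary database; returns one row per replication link.
SELECT partner_server, partner_database, replication_state_desc, replication_lag_sec
FROM sys.dm_geo_replication_link_status;
```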
+ ### <a name="powershell-manage-failover-of-single-and-pooled-databases"></a> PowerShell: Manage geo-failover of single and pooled databases
As discussed previously, active geo-replication can also be managed programmatic
| [Set-AzSqlDatabaseSecondary](/powershell/module/az.sql/set-azsqldatabasesecondary) |Switches a secondary database to be primary to initiate failover. | | [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) |Terminates data replication between a SQL Database and the specified secondary database. | | [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) |Gets the geo-replication links for a database. |
-| | |
> [!TIP] > For sample scripts, see [Configure and failover a single database using active geo-replication](scripts/setup-geodr-and-failover-database-powershell.md) and [Configure and failover a pooled database using active geo-replication](scripts/setup-geodr-and-failover-elastic-pool-powershell.md).
As discussed previously, active geo-replication can also be managed programmatic
| [Get Replication Link](/rest/api/sql/replicationlinks/get) |Gets a specific replication link for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. **This option is not supported for SQL Managed Instance.**| | [Replication Links - List By Database](/rest/api/sql/replicationlinks/listbydatabase) | Gets all replication links for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. | | [Delete Replication Link](/rest/api/sql/replicationlinks/delete) | Deletes a database replication link. Cannot be done during failover. |
-| | |
+ ## Next steps
azure-sql Authentication Azure Ad Logins Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-logins-tutorial.md
+
+ Title: Create and utilize Azure Active Directory server logins
+description: This article guides you through creating and utilizing Azure Active Directory logins in the virtual master database of Azure SQL
++++++ Last updated : 03/14/2022++
+# Tutorial: Create and utilize Azure Active Directory server logins
++
+> [!NOTE]
+> Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure SQL Managed Instance can already utilize Azure AD logins.
+
+This article guides you through creating and utilizing [Azure Active Directory (Azure AD) principals (logins)](authentication-azure-ad-logins.md) in the virtual master database of Azure SQL.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Create an Azure AD login in the virtual master database with the new syntax extension for Azure SQL Database
+> - Create a user mapped to an Azure AD login in the virtual master database
+> - Grant server roles to an Azure AD user
+> - Disable an Azure AD login
+
+## Prerequisites
+
+- A SQL Database or SQL Managed Instance with a database. See [Quickstart: Create an Azure SQL Database single database](single-database-create-quickstart.md) if you haven't already created an Azure SQL Database, or [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
+- Azure AD authentication set up for SQL Database or Managed Instance. For more information, see [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
+- This article instructs you on creating an Azure AD login and user within the virtual master database. Only an Azure AD admin can create a user within the virtual master database, so we recommend you use the Azure AD admin account when going through this tutorial. An Azure AD principal with the `loginmanager` role can create a login, but not a user within the virtual master database.
+
+## Create Azure AD login
+
+1. Create an Azure SQL Database login for an Azure AD account. In our example, we'll use `bob@contoso.com`, which exists in our Azure AD domain called `contoso`. A login can also be created from an Azure AD group or [service principal (application)](authentication-aad-service-principal.md), for example, `mygroup`, an Azure AD group whose members are Azure AD accounts. For more information, see [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true).
+
+ > [!NOTE]
+ > The first Azure AD login must be created by the Azure Active Directory admin. A SQL login cannot create Azure AD logins.
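+
+   For instance, the same syntax creates a login from the Azure AD group `mygroup` mentioned above. A minimal sketch, assuming the group exists in your directory:
+
+   ```sql
+   -- Creates a login for an Azure AD group instead of an individual user.
+   CREATE LOGIN [mygroup] FROM EXTERNAL PROVIDER
+   GO
+   ```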
+
+1. Using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), log into your SQL Database with the Azure AD admin account set up for the server.
+1. Run the following query:
+
+ ```sql
+ Use master
+ CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
+ GO
+ ```
+
+1. Check the created login in `sys.server_principals`. Execute the following query:
+
+ ```sql
+ SELECT name, type_desc, type, is_disabled
+ FROM sys.server_principals
+ WHERE type_desc like 'external%'
+ ```
+
+   You should see output similar to the following:
+
+ ```output
+ Name type_desc type is_disabled
+ bob@contoso.com EXTERNAL_LOGIN E 0
+ ```
+
+1. The login `bob@contoso.com` has been created in the virtual master database.
+
+## Create user from an Azure AD login
+
+1. Now that we've created an Azure AD login, we can create a database-level Azure AD user that is mapped to the Azure AD login in the virtual master database. We'll continue with our example, `bob@contoso.com`, to create a user in the virtual master database, because we want to demonstrate adding the user to special roles. Only an Azure AD admin or SQL server admin can create users in the virtual master database.
+
+1. We're using the virtual master database, but you can switch to a database of your choice if you want to create users in other databases. Run the following query.
+
+ ```sql
+ Use master
+ CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com]
+ ```
+
+ > [!TIP]
+ > Although it is not required to use Azure AD user aliases (for example, `bob@contoso.com`), it is a recommended best practice to use the same alias for Azure AD users and Azure AD logins.
+
+1. Check the created user in `sys.database_principals`. Execute the following query:
+
+ ```sql
+ SELECT name, type_desc, type
+ FROM sys.database_principals
+ WHERE type_desc like 'external%'
+ ```
+
+   You should see output similar to the following:
+
+ ```output
+ Name type_desc type
+ bob@contoso.com EXTERNAL_USER E
+ ```
+
+> [!NOTE]
+> The existing syntax to create an Azure AD user without an Azure AD login is still supported, and requires the creation of a contained user inside SQL Database (without login).
+>
+> For example, `CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER`.
+
+## Grant server-level roles to Azure AD logins
+
+You can add logins to the [built-in server-level roles](security-server-roles.md#built-in-server-level-roles), such as the **##MS_DefinitionReader##**, **##MS_ServerStateReader##**, or **##MS_ServerStateManager##** role.
+
+> [!NOTE]
+> The server-level roles mentioned here are not supported for Azure AD groups.
+
+```sql
+ALTER SERVER ROLE ##MS_DefinitionReader## ADD MEMBER [AzureAD_object];
+```
+
+```sql
+ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [AzureAD_object];
+```
+
+```sql
+ALTER SERVER ROLE ##MS_ServerStateManager## ADD MEMBER [AzureAD_object];
+```
+
+Permissions aren't effective until the user reconnects. You should also clear the authentication caches by running the following DBCC commands:
+
+```sql
+DBCC FLUSHAUTHCACHE
+DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+```
+
+To check which Azure AD logins are part of server-level roles, run the following query:
+
+```sql
+SELECT roles.principal_id AS RolePID, roles.name AS RolePName,
+ server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
+ FROM sys.server_role_members AS server_role_members
+ INNER JOIN sys.server_principals AS roles
+ ON server_role_members.role_principal_id = roles.principal_id
+ INNER JOIN sys.server_principals AS members
+ ON server_role_members.member_principal_id = members.principal_id;
+```
+
+## Grant special roles for Azure AD users
+
+[Special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) can be assigned to users in the virtual master database.
+
+In order to grant one of the special database roles to a user, the user must exist in the virtual master database.
+
+To add a user to a role, you can run the following query:
+
+```sql
+ALTER ROLE [dbmanager] ADD MEMBER [AzureAD_object]
+```
+
+To remove a user from a role, run the following query:
+
+```sql
+ALTER ROLE [dbmanager] DROP MEMBER [AzureAD_object]
+```
+
+`AzureAD_object` can be an Azure AD user, group, or service principal in Azure AD.
+
+In our example, we created the user `bob@contoso.com`. Let's give the user the **dbmanager** and **loginmanager** roles.
+
+1. Run the following query:
+
+ ```sql
+    ALTER ROLE [dbmanager] ADD MEMBER [bob@contoso.com]
+ ALTER ROLE [loginmanager] ADD MEMBER [bob@contoso.com]
+ ```
+
+1. Check the database role assignment by running the following query:
+
+ ```sql
+ SELECT DP1.name AS DatabaseRoleName,
+ isnull (DP2.name, 'No members') AS DatabaseUserName
+ FROM sys.database_role_members AS DRM
+ RIGHT OUTER JOIN sys.database_principals AS DP1
+ ON DRM.role_principal_id = DP1.principal_id
+ LEFT OUTER JOIN sys.database_principals AS DP2
+ ON DRM.member_principal_id = DP2.principal_id
+    WHERE DP1.type = 'R' AND DP2.name LIKE 'bob%'
+ ```
+
+   You should see output similar to the following:
+
+ ```output
+ DatabaseRoleName DatabaseUserName
+ dbmanager bob@contoso.com
+ loginmanager bob@contoso.com
+ ```
+
+## Optional - Disable a login
+
+The [ALTER LOGIN (Transact-SQL)](/sql/t-sql/statements/alter-login-transact-sql?view=azuresqldb-current&preserve-view=true) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL Database.
+
+```sql
+ALTER LOGIN [bob@contoso.com] DISABLE
+```
+
+For the `DISABLE` or `ENABLE` changes to take immediate effect, the authentication cache and the **TokenAndPermUserStore** cache must be cleared using the following T-SQL commands:
+
+```sql
+DBCC FLUSHAUTHCACHE
+DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+```
+
+Check that the login has been disabled by executing the following query:
+
+```sql
+SELECT name, type_desc, type
+FROM sys.server_principals
+WHERE is_disabled = 1
+```
+
+A use case for this is to allow read-only access on [geo-replicas](active-geo-replication-overview.md) while denying connections to the primary server.
+
+## See also
+
+For more information and examples, see:
+
+- [Azure Active Directory server principals](authentication-azure-ad-logins.md)
+- [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true)
+- [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql)
azure-sql Authentication Azure Ad Logins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-logins.md
+
+ Title: Azure Active Directory server principals
+description: Using Azure Active Directory server principals (logins) in Azure SQL
++++++ Last updated : 03/14/2022++
+# Azure Active Directory server principals
++
+> [!NOTE]
+> Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure SQL Managed Instance can already utilize Azure AD logins.
+
+You can now create and utilize Azure AD server principals, which are logins in the virtual master database of a SQL Database. There are several benefits of using Azure AD server principals for SQL Database:
+
+- Support [Azure SQL Database server roles for permission management](security-server-roles.md).
+- Support multiple Azure AD users with [special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse), such as the `loginmanager` and `dbmanager` roles.
+- Functional parity between SQL logins and Azure AD logins.
+- Extended functionality support, such as [Azure AD-only authentication](authentication-azure-ad-only-authentication.md). Azure AD-only authentication allows SQL authentication to be disabled, which includes the SQL server admin, SQL logins, and users.
+- Allows Azure AD principals to support geo-replicas. Azure AD principals can connect to the geo-replica of a user database with *read-only* permission, while being denied connection to the primary server.
+- Ability to use Azure AD service principal logins with special roles to fully automate user and database creation, as well as maintenance, by using Azure AD applications.
+- Closer functionality between Managed Instance and SQL Database, as Managed Instance already supports Azure AD logins in the master database.
+
+For more information on Azure AD authentication in Azure SQL, see [Use Azure Active Directory authentication](authentication-aad-overview.md)
+
+## Permissions
+
+The following permissions are required to utilize or create Azure AD logins in the virtual master database.
+
+- Azure AD admin permission or membership in the `loginmanager` server role. The first Azure AD login can only be created by the Azure AD admin.
+- Must be a member of Azure AD within the same directory used for Azure SQL Database.
+
+By default, the standard permission granted to a newly created Azure AD login in the virtual `master` database is **VIEW ANY DATABASE**.
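+
+After connecting, a login can verify its effective server-level permissions with the built-in `fn_my_permissions` function. A minimal sketch:
+
+```sql
+-- Returns the server-level permissions of the current login;
+-- a newly created Azure AD login should include VIEW ANY DATABASE.
+SELECT permission_name
+FROM sys.fn_my_permissions(NULL, 'SERVER');
+```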
+
+## Azure AD logins syntax
+
+New syntax for Azure SQL Database to use Azure AD server principals has been introduced with this feature release.
+
+### Create login syntax
+
+```syntaxsql
+CREATE LOGIN login_name { FROM EXTERNAL PROVIDER | WITH <option_list> [,..] }  
+
+<option_list> ::=     
+    PASSWORD = {'password'}  
+    | SID = sid
+```
+
+The *login_name* specifies the Azure AD principal, which is an Azure AD user, group, or application.
+
+For more information, see [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true).
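+
+For example, a login for the Azure AD user `bob@contoso.com` (the account used in the companion tutorial) is created with:
+
+```sql
+-- Run in the virtual master database; the first Azure AD login must be created by the Azure AD admin.
+CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
+```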
+
+### Create user syntax
+
+The following T-SQL syntax is already available in SQL Database, and can be used to create database-level Azure AD principals mapped to Azure AD logins in the virtual master database.
+
+To create an Azure AD user from an Azure AD login, use the following syntax. Only the Azure AD admin can execute this command in the virtual master database.
+
+```syntaxsql
+CREATE USER user_name FROM LOGIN login_name
+```
+
+For more information, see [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql).
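+
+Continuing the same example, a user mapped to that login is created in the virtual master database with:
+
+```sql
+-- Run in the virtual master database as the Azure AD admin.
+CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com]
+```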
+
+### Disable or enable a login using ALTER LOGIN syntax
+
+The [ALTER LOGIN (Transact-SQL)](/sql/t-sql/statements/alter-login-transact-sql?view=azuresqldb-current&preserve-view=true) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL Database.
+
+```syntaxsql
+ALTER LOGIN login_name DISABLE
+```
+
+After the login is disabled, the Azure AD principal `login_name` won't be able to log in to any user database on the SQL Database logical server where a user `user_name`, mapped to the login `login_name`, was created.
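+
+To re-enable a disabled login, use the corresponding `ENABLE` form. For example:
+
+```sql
+ALTER LOGIN [bob@contoso.com] ENABLE
+```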
+
+> [!NOTE]
+> - `ALTER LOGIN login_name DISABLE` is not supported for contained users.
+> - `ALTER LOGIN login_name DISABLE` is not supported for Azure AD groups.
+> - An individual disabled login cannot belong to a user who is part of a login group created in the master database (for example, an Azure AD admin group).
+> - For the `DISABLE` or `ENABLE` changes to take immediate effect, the authentication cache and the **TokenAndPermUserStore** cache must be cleared using the T-SQL commands.
+>
+> ```sql
+> DBCC FLUSHAUTHCACHE
+> DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+> ```
+
+## Roles for Azure AD principals
+
+[Special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) can be assigned to *users* in the virtual master database for Azure AD principals, including **dbmanager** and **loginmanager**.
+
+[Azure SQL Database server roles](security-server-roles.md) can be assigned to *logins* in the virtual master database.
+
+For a tutorial on how to grant these roles, see [Tutorial: Create and utilize Azure Active Directory server logins](authentication-azure-ad-logins-tutorial.md).
++
+## Limitations and remarks
+
+- The SQL server admin can't create Azure AD logins or users in any databases.
+- Changing a database ownership to an Azure AD group as database owner isn't supported.
+ - `ALTER AUTHORIZATION ON database::<mydb> TO [my_aad_group]` fails with an error message:
+ ```output
+ Msg 33181, Level 16, State 1, Line 4
+ The new owner cannot be Azure Active Directory group.
+ ```
+ - Changing a database ownership to an individual user is supported.
+- A SQL admin or SQL user can't execute the following Azure AD operations:
+ - `CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER`
+ - `CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER`
+ - `EXECUTE AS USER [bob@contoso.com]`
+ - `ALTER AUTHORIZATION ON securable::name TO [bob@contoso.com]`
+- Impersonation of Azure AD server-level principals (logins) isn't supported:
+ - [EXECUTE AS Clause (Transact-SQL)](/sql/t-sql/statements/execute-as-clause-transact-sql)
+ - [EXECUTE AS (Transact-SQL)](/sql/t-sql/statements/execute-as-transact-sql)
+ - Impersonation of Azure AD database-level principals (users) at a user database-level is still supported.
+- Azure AD logins overlapping with Azure AD administrator aren't supported. Azure AD admin takes precedence over any login. If an Azure AD account already has access to the server as an Azure AD admin, either directly or as a member of the admin group, the login created for this user won't have any effect. The login creation isn't blocked through T-SQL. After the account authenticates to the server, the login will have the effective permissions of an Azure AD admin, and not of a newly created login.
+- Changing permissions on specific Azure AD login object isn't supported:
+ - `GRANT <PERMISSION> ON LOGIN :: <Azure AD account> TO <Any other login> `
+- When permissions are altered for an Azure AD login with existing open connections to an Azure SQL Database, permissions aren't effective until the user reconnects. Also [flush the authentication cache and the TokenAndPermUserStore cache](#disable-or-enable-a-login-using-alter-login-syntax). This applies to server role membership change using the [ALTER SERVER ROLE](/sql/t-sql/statements/alter-server-role-transact-sql) statement.
+- Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported.
+- [Azure SQL Database server roles](security-server-roles.md) aren't supported for Azure AD groups.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create and utilize Azure Active Directory server logins](authentication-azure-ad-logins-tutorial.md)
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql
The following section provides you with examples and scripts on how to create a logical server or managed instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled during server creation. For more information on the feature, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md).
-In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
+In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password can't be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
> [!NOTE] > To change the existing properties after server or managed instance creation, other existing APIs should be used. For more information, see [Managing Azure AD-only authentication using APIs](authentication-azure-ad-only-authentication.md#managing-azure-ad-only-authentication-using-apis) and [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
Replace the following values in the example:
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication ```
-Here is an example of specifying the server admin name (instead of letting the server admin name being automatically created) at the time of logical server creation. As mentioned earlier, this login is not usable when Azure AD-only authentication is enabled.
+Here's an example of specifying the server admin name (instead of letting the server admin name be created automatically) at the time of logical server creation. As mentioned earlier, this login isn't usable when Azure AD-only authentication is enabled.
```powershell
$cred = Get-Credential
```
You can also use the following template. Use a [Custom deployment in the Azure p
1. You can leave the rest of the settings default. For more information on the **Networking**, **Security**, or other tabs and settings, follow the guide in the article [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-1. Once you are done with configuring your settings, select **Review + create** to proceed. Select **Create** to start provisioning the managed instance.
+1. Once you're done with configuring your settings, select **Review + create** to proceed. Select **Create** to start provisioning the managed instance.
# [The Azure CLI](#tab/azure-cli)
azure-sql Auto Failover Group Configure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure-sql-db.md
The following table lists specific permission scopes for Azure SQL Database:
| **Create failover group**| Azure RBAC write access | Primary server </br> Secondary server </br> All databases in failover group | | **Update failover group** | Azure RBAC write access | Failover group </br> All databases on the current primary server| | **Fail over failover group** | Azure RBAC write access | Failover group on new server |
-| | |
+ ## Remarks
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
This table summarizes the capabilities and features of [point in time restore (P
| **Restore via Azure portal**|Yes|Yes|Yes| | **Restore via PowerShell** |Yes|Yes|Yes| | **Restore via Azure CLI** |Yes|Yes|Yes|
-| | | | |
+ \* For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](auto-failover-group-overview.md).
azure-sql Az Cli Script Samples Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/az-cli-script-samples-content-guide.md
The following table includes links to Azure CLI script examples to manage single
| [Restore a database](scripts/restore-database-cli.md)| Restores a database in SQL Database to a specific point in time. | | [Copy a database to a new server](scripts/copy-database-to-new-server-cli.md) | Creates a copy of an existing database in SQL Database in a new server. | | [Import a database from a BACPAC file](scripts/import-from-bacpac-cli.md)| Imports a database to SQL Database from a BACPAC file. |
-|||
+ Learn more about the [single-database Azure CLI API](single-database-manage.md#azure-cli).
The following table includes links to Azure CLI script examples for Azure SQL Ma
| [Create SQL Managed Instance](../managed-instance/scripts/create-configure-managed-instance-cli.md)| Creates a SQL Managed Instance. | | [Configure Transparent Data Encryption (TDE)](../managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md)| Configures Transparent Data Encryption (TDE) in SQL Managed Instance by using Azure Key Vault with various key scenarios. | | [Restore geo-backup](../managed-instance/scripts/restore-geo-backup-cli.md) | Performs a geo-restore between two instances of SQL Managed Instance to a specific point in time. |
-|||
+ For additional SQL Managed Instance examples, see the [create](/archive/blogs/sqlserverstorageengine/create-azure-sql-managed-instance-using-azure-cli), [update](/archive/blogs/sqlserverstorageengine/modify-azure-sql-database-managed-instance-using-azure-cli), [move a database](/archive/blogs/sqlserverstorageengine/cross-instance-point-in-time-restore-in-azure-sql-database-managed-instance), and [working with](https://medium.com/azure-sqldb-managed-instance/working-with-sql-managed-instance-using-azure-cli-611795fe0b44) scripts.
azure-sql Azure Defender For Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/azure-defender-for-sql.md
The flexibility of Azure allows for a number of programmatic methods for enablin
Use any of the following tools to enable Microsoft Defender for your subscription:
-| Method | Instructions |
-|--|-|
-| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
-| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
-| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
+| Method | Instructions |
+|--|-|
+| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
+| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
+| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
-| | |
+ ### Enable Microsoft Defender for Azure SQL Database at the resource level
azure-sql Configure Max Degree Of Parallelism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/configure-max-degree-of-parallelism.md
| = 1 | The database engine uses a single serial thread to execute queries. Parallel threads are not used. | | > 1 | The database engine sets the number of additional [schedulers](/sql/relational-databases/thread-and-task-architecture-guide#sql-server-task-scheduling) to be used by parallel threads to the MAXDOP value, or the total number of logical processors, whichever is smaller. | | = 0 | The database engine sets the number of additional [schedulers](/sql/relational-databases/thread-and-task-architecture-guide#sql-server-task-scheduling) to be used by parallel threads to the total number of logical processors or 64, whichever is smaller. |
-| | |
> [!Note]
> Each query executes with at least one scheduler, and one worker thread on that scheduler.
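For example, the database-scoped MAXDOP value can be changed with a single statement:

```sql
-- Set MAXDOP to 4 for the current database.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```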
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-content-reference-guide.md
The following document includes links to Azure examples showing how to connect a
|[PHP](connect-query-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use Transact-SQL statements to query data.| |[Python](connect-query-python.md)|This quickstart demonstrates how to use Python to connect to a database and use Transact-SQL statements to query data. | |[Ruby](connect-query-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use Transact-SQL statements to query data.|
-|||
## Get server connection information
The following table lists examples of object-relational mapping (ORM) frameworks
| Node.js | Windows, Linux, macOS | [Sequelize ORM](https://sequelize.org/) | | Python | Windows, Linux, macOS |[Django](https://www.djangoproject.com/) | | Ruby | Windows, Linux, macOS | [Ruby on Rails](https://rubyonrails.org/) |
-||||
## Next steps
azure-sql Connect Query Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-nodejs.md
To complete this quickstart, you need:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+ - [Node.js](https://nodejs.org)-related software
azure-sql Connect Query Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-php.md
To complete this quickstart, you need:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+
azure-sql Connect Query Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-ruby.md
To complete this quickstart, you need the following prerequisites:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
> [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
azure-sql Connect Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-ssms.md
Completing this quickstart requires the following items:
|||[Connectivity from on-site](../managed-instance/point-to-site-p2s-configure.md) | [Connect to SQL Server](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+ > [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a managed instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
In SSMS, connect to your server.
| **Authentication** | SQL Server Authentication | This tutorial uses SQL Authentication. | | **Login** | Server admin account user ID | The user ID from the server admin account used to create the server. | | **Password** | Server admin account password | The password from the server admin account used to create the server. |
- ||||
![connect to server](./media/connect-query-ssms/connect.png)
azure-sql Connect Query Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-vscode.md
Last updated 05/29/2020
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
> [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connectivity-architecture.md
Periodically, we will retire Gateways using old hardware and migrate the traffic
| West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 | | West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | West US 3 | 20.150.168.0, 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
-| | | |
## Next steps
azure-sql Designing Cloud Solutions For Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/designing-cloud-solutions-for-disaster-recovery.md
Your specific cloud disaster recovery strategy can combine or extend these desig
| Active-active deployment for application load balancing |Read-write access < 5 sec |Failure detection time + DNS TTL | | Active-passive deployment for data preservation |Read-only access < 5 sec | Read-only access = 0 | ||Read-write access = zero | Read-write access = Failure detection time + grace period with data loss |
-|||
+ ## Next steps
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that are currently
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md)|Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting.| | [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.| | [Zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. **The feature is currently in preview for the General Purpose and Hyperscale service tiers.** |
-|||
+ ## General availability (GA)
The following table lists the features of Azure SQL Database that have transitio
| [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. | | [Azure AD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).| | [Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
-||||
+ ## Documentation changes
Learn about significant changes to the Azure SQL Database documentation.
| **GA for maintenance window** | The [maintenance window](maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Database and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) are in public preview for databases configured to use a non-default [maintenance window](maintenance-window.md).| | **Hyperscale zone redundant configuration preview** | It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see [Hyperscale zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview). | | **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more.
-|||
### February 2022

| Changes | Details |
| | |
| **Free Azure SQL Database** | Try Azure SQL Database for free using the Azure free account. To learn more, review [Try SQL Database for free](free-sql-db-free-account-how-to-deploy.md).|
-|||
+ ### 2021
Learn about significant changes to the Azure SQL Database documentation.
| **SQL Database ledger** | SQL Database ledger is in preview, and introduces the ability to cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been tampered with. To learn more, see [Ledger](ledger-overview.md). | | **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database, currently in preview. To learn more, see [maintenance window](maintenance-window.md).| | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
## Contribute to content
azure-sql Elastic Pool Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-pool-resource-management.md
To send an alert when pool resource utilization (CPU, data IO, log IO, workers,
|`avg_storage_percent`|Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `storage_percent`, and can be viewed in Azure portal.|Below 80%. Can approach 100% for pools with no data growth.| |`avg_allocated_storage_percent`|Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `allocated_data_storage_percent`, and can be viewed in Azure portal.|Below 90%. Can approach 100% for pools with no data growth.| |`tempdb_log_used_percent`|Transaction log space utilization in the `tempdb` database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, `tempdb` is a shared resource for all databases in the same pool. A long running or orphaned transaction in `tempdb` started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from [sys.dm_db_log_space_usage](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-space-usage-transact-sql) and [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) views. This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See [Examples](#examples) for a sample query to return the current value of this metric.|Below 50%. Occasional spikes up to 80% are acceptable.|
-|||
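A rough sketch of such a query (not necessarily the article's exact sample, but built from the same `tempdb` views the table cites):

```sql
-- Sketch: transaction log space used in tempdb, as a percentage of its maximum log size.
SELECT (lsu.used_log_space_in_bytes / df.log_max_size_bytes) * 100 AS tempdb_log_used_percent
FROM tempdb.sys.dm_db_log_space_usage AS lsu
CROSS JOIN (
    -- max_size is reported in 8-KB pages; convert to bytes.
    SELECT SUM(CAST(max_size AS bigint)) * 8 * 1024. AS log_max_size_bytes
    FROM tempdb.sys.database_files
    WHERE type_desc = N'LOG'
) AS df;
```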
In addition to these metrics, Azure SQL Database provides a view that returns actual resource governance limits, as well as additional views that return resource utilization statistics at the resource pool level, and at the workload group level.
In addition to these metrics, Azure SQL Database provides a view that returns ac
|[sys.dm_resource_governor_workload_groups](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-transact-sql)|Returns cumulative workload group statistics and the current configuration of the workload group. This view can be joined with sys.dm_resource_governor_resource_pools on the `pool_id` column to get resource pool information.|
|[sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database)|Returns resource pool utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.|
|[sys.dm_resource_governor_workload_groups_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-history-ex-azure-sql-database)|Returns workload group utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.|
-|||
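As a minimal sketch of how these views compose, the workload group view can be joined to the resource pool view on `pool_id`, as the first table row notes:

```sql
-- Sketch: pair each workload group with its resource pool and basic request counts.
SELECT rp.name AS resource_pool_name,
       wg.name AS workload_group_name,
       wg.total_request_count,
       wg.active_request_count
FROM sys.dm_resource_governor_workload_groups AS wg
INNER JOIN sys.dm_resource_governor_resource_pools AS rp
    ON wg.pool_id = rp.pool_id;
```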
> [!TIP]
> To query these and other dynamic management views using a principal other than server administrator, add this principal to the `##MS_ServerStateReader##` [server role](security-server-roles.md).
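For example (a sketch; `monitoring_login` is a hypothetical login name), run in the `master` database:

```sql
-- Sketch: grant a non-admin login read access to server state DMVs.
ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER monitoring_login;
```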
azure-sql Free Sql Db Free Account How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/free-sql-db-free-account-how-to-deploy.md
The following table describes the values on the track usage page:
|**Meter** | Identifies the unit of measure for the service being consumed. For example, the meter for Azure SQL Database is *SQL Database, Single Standard, S0 DTUs*, which tracks the number of S0 databases used per day, and has a monthly limit of 1. |
| **Usage/limit** | The usage of the meter for the current month, and the limit for the meter. |
| **Status**| The current status of your usage of the service defined by the meter. The possible values for status are: </br> **Not in use**: You haven't used the meter or the usage for the meter hasn't reached the billing system. </br> **Exceeded on \<Date\>**: You've exceeded the limit for the meter on \<Date\>. </br> **Unlikely to Exceed**: You're unlikely to exceed the limit for the meter. </br>**Exceeds on \<Date\>**: You're likely to exceed the limit for the meter on \<Date\>. |
-| | |
+ >[!IMPORTANT]
> - With an Azure free account, you also get $200 in credit to use in 30 days. During this time, any usage of the service beyond the free monthly amount is deducted from this credit.
azure-sql Intelligent Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/intelligent-insights-overview.md
Identified database performance degradations are recorded in the SQLInsights log
| Impacted queries and error codes | Query hash or error code. These can be used to easily correlate to affected queries. Metrics that consist of either query duration increase, waiting time, timeout counts, or error codes are provided. |
| Detections | Detection identified at the database during the time of an event. There are 15 detection patterns. For more information, see [Troubleshoot database performance issues with Intelligent Insights](intelligent-insights-troubleshoot-performance.md). |
| Root cause analysis | Root cause analysis of the issue identified in a human-readable format. Some insights might contain a performance improvement recommendation where possible. |
-|||
+ Intelligent Insights excels at discovering and troubleshooting database performance issues. To use it for troubleshooting, see [Troubleshoot performance issues with Intelligent Insights](intelligent-insights-troubleshoot-performance.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Choosing a maintenance window other than the default is currently available in t
| West US | Yes | Yes | |
| West US 2 | Yes | Yes | Yes |
| West US 3 | Yes | | |
-| | | | |
+ ## Gateway maintenance
azure-sql Manage Data After Migrating To Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/manage-data-after-migrating-to-database.md
You don't create backups on Azure SQL Database and that is because you don't
|Basic|7|
|Standard|35|
|Premium|35|
-|||
+ In addition, the [Long-Term Retention (LTR)](long-term-retention-overview.md) feature allows you to hold onto your backup files for a much longer period, up to 10 years, and restore data from these backups at any point within that period. Furthermore, the database backups are kept in geo-replicated storage to ensure resilience from regional catastrophe. You can also restore these backups in any Azure region at any point of time within the retention period. See [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
Azure AD supports [Azure AD Multi-Factor Authentication](authentication-mfa-ssms
|Are logged in to Windows using your Azure AD credentials from a federated domain|Use [Azure AD integrated authentication](authentication-aad-configure.md).|
|Are logged in to Windows using credentials from a domain not federated with Azure|Use [Azure AD integrated authentication](authentication-aad-configure.md).|
|Have middle-tier services which need to connect to SQL Database or Azure Synapse Analytics|Use [Azure AD integrated authentication](authentication-aad-configure.md).|
-|||
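Whichever Azure AD method applies, connecting typically requires a contained database user for the Azure AD principal; a minimal sketch (with a hypothetical principal name):

```sql
-- Sketch: run in the user database as an Azure AD admin.
-- 'user@contoso.com' is a hypothetical Azure AD principal.
CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
```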
+ ### How do I limit or control connectivity access to my database
For protecting your sensitive data in-flight and at rest, SQL Database provides
|**Allowed T-SQL operations**|Equality comparison|All T-SQL surface area is available|
|**App changes required to use the feature**|Minimal|Very Minimal|
|**Encryption granularity**|Column level|Database level|
-||||
### How can I limit access to sensitive data in my database
SQL Database offers various service tiers Basic, Standard, and Premium. Each ser
|**Basic**|Applications with a handful of users and a database that doesn't have high concurrency, scale, and performance requirements. |
|**Standard**|Applications with considerable concurrency, scale, and performance requirements coupled with low to medium IO demands. |
|**Premium**|Applications with lots of concurrent users, high CPU/memory, and high IO demands. High concurrency, high throughput, and latency sensitive apps can leverage the Premium level. |
-|||
+ To make sure you're on the right compute size, monitor your query and database resource consumption through one of the above-mentioned ways in "How do I monitor the performance and resource utilization in SQL Database". If you find that your queries/databases are consistently running hot on CPU/memory, consider scaling up to a higher compute size. Similarly, if even during your peak hours you don't seem to use the resources as much, consider scaling down from the current compute size.
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/migrate-dtu-to-vcore.md
The following table provides guidance for specific migration scenarios:
|General purpose|Premium|Upgrade|Must migrate secondary first|
|Business critical|General purpose|Downgrade|Must migrate primary first|
|General purpose|Business critical|Upgrade|Must migrate secondary first|
-||||
+ ## Migrate failover groups
azure-sql Powershell Script Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/powershell-script-content-guide.md
The following table includes links to sample Azure PowerShell scripts for Azure
| [Sync data between databases](scripts/sql-data-sync-sync-data-between-sql-databases.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script configures Data Sync to sync between multiple databases in Azure SQL Database. |
| [Sync data between SQL Database and SQL Server on-premises](scripts/sql-data-sync-sync-data-between-azure-onprem.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database. |
| [Update the SQL Data Sync sync schema](scripts/update-sync-schema-in-sync-group.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script adds or removes items from the Data Sync sync schema. |
-|||
+ Learn more about the [Single-database Azure PowerShell API](single-database-manage.md#powershell).
The following table includes links to sample Azure PowerShell scripts for Azure
| [Manage transparent data encryption in a managed instance using your own key from Azure Key Vault](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)| This PowerShell script configures transparent data encryption in a Bring Your Own Key scenario for Azure SQL Managed Instance, using a key from Azure Key Vault.|
|**Configure a failover group**||
| [Configure a failover group for a managed instance](../managed-instance/scripts/add-to-failover-group-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script creates two managed instances, adds them to a failover group, and then tests failover from the primary managed instance to the secondary managed instance. |
-|||
+ Learn more about [PowerShell cmdlets for Azure SQL Managed Instance](../managed-instance/api-references-create-manage-instance.md#powershell-create-and-configure-managed-instances).
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/purchasing-models.md
The following table and chart compares and contrasts the vCore-based and the DTU
||||
|DTU-based|This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see [What are DTUs and eDTUs?](purchasing-models.md#dtu-purchasing-model).|Customers who want simple, preconfigured resource options|
|vCore-based|This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs.|Customers who value flexibility, control, and transparency|
-||||
+ :::image type="content" source="./media/purchasing-models/pricing-model.png" alt-text="Pricing model comparison" lightbox="./media/purchasing-models/pricing-model.png":::
azure-sql Replication To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/replication-to-sql-database.md
There are different [types of replication](/sql/relational-databases/replication
| [**Peer-to-peer**](/sql/relational-databases/replication/transactional/peer-to-peer-transactional-replication) | No | No|
| [**Bidirectional**](/sql/relational-databases/replication/transactional/bidirectional-transactional-replication) | No | Yes|
| [**Updatable subscriptions**](/sql/relational-databases/replication/transactional/updatable-subscriptions-for-transactional-replication) | No | No|
-| &nbsp; | &nbsp; | &nbsp; |
## Remarks
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 |
| Max DTU per database choices | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Max storage per database (GB) | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
-||||||||
+ <sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 10, 20, 50 | 0, 10, 20, 50, 100 | 0, 10, 20, 50, 100, 200 | 0, 10, 20, 50, 100, 200, 300 | 0, 10, 20, 50, 100, 200, 300, 400 | 0, 10, 20, 50, 100, 200, 300, 400, 800 |
| Max DTU per database choices | 10, 20, 50 | 10, 20, 50, 100 | 10, 20, 50, 100, 200 | 10, 20, 50, 100, 200, 300 | 10, 20, 50, 100, 200, 300, 400 | 10, 20, 50, 100, 200, 300, 400, 800 |
| Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
-||||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 |
| Max DTU per database choices | 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 |
| Max storage per database (GB) | 1024 | 1536 | 1792 | 2304 | 2816 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min eDTUs per database | 0, 25, 50, 75, 125 | 0, 25, 50, 75, 125, 250 | 0, 25, 50, 75, 125, 250, 500 | 0, 25, 50, 75, 125, 250, 500, 1000 | 0, 25, 50, 75, 125, 250, 500, 1000|
| Max eDTUs per database | 25, 50, 75, 125 | 25, 50, 75, 125, 250 | 25, 50, 75, 125, 250, 500 | 25, 50, 75, 125, 250, 500, 1000 | 25, 50, 75, 125, 250, 500, 1000|
| Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1536 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 |
| Max DTU per database choices | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 |
| Max storage per database (GB) | 2048 | 2560 | 3072 | 3584 | 4096 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
The following table describes per database properties for pooled databases.
| Max DTUs per database |The maximum number of DTUs that any database in the pool may use, if available based on utilization by other databases in the pool. Max DTUs per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max DTUs per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
| Min DTUs per database |The minimum number of DTUs reserved for any database in the pool. Consider setting a min DTUs per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min DTUs per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average DTUs utilization per database.|
| Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
-|||
+ > [!IMPORTANT]
> Because resources in an elastic pool are finite, setting min DTUs per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min DTUs guarantee are not available to databases active at that point in time.
The following table lists tempdb sizes for single databases in Azure SQL Databas
|Standard Elastic Pools (1200 eDTU)|32|10|320|
|Standard Elastic Pools (1600-3000 eDTU)|32|12|384|
|Premium Elastic Pools (all DTU configurations)|13.9|12|166.7|
-||||
+ ## Next steps
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-dtu-single-databases.md
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) |N/A |
| Max concurrent workers | 30 |
| Max concurrent sessions | 300 |
-|||
+ > [!IMPORTANT]
> The Basic service tier provides less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or greater is recommended.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |
| Max concurrent workers | 60 | 90 | 120 | 200 |
| Max concurrent sessions |600 | 900 | 1200 | 2400 |
-||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |N/A |
| Max concurrent workers | 400 | 800 | 1600 | 3200 |6000 |
| Max concurrent sessions |4800 | 9600 | 19200 | 30000 |30000 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | 1 | 2 | 4 | 8 | 14 | 32 |
| Max concurrent workers | 200 | 400 | 800 | 1600 | 2800 | 6400 |
| Max concurrent sessions | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following table lists tempdb sizes for single databases in Azure SQL Databas
|P6|13.9|12|166.7|
|P11|13.9|12|166.7|
|P15|13.9|12|166.7|
-||||
+ ## Next steps
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-logical-server.md
vCore resource limits are listed in the following articles, please be sure to up
| DTU / eDTU quota per logical server | 54,000 |
| vCore quota per logical server | 540 |
| Max elastic pools per logical server | Limited by number of DTUs or vCores. For example, if each pool is 1000 DTUs, then a server can support 54 pools.|
-|||
> [!IMPORTANT]
> As the number of databases approaches the limit per logical server, the following can occur:
Log rate governor traffic shaping is surfaced via the following wait types (expo
| HADR_THROTTLE_LOG_RATE_SEND_RECV_QUEUE_SIZE | Feedback control, availability group physical replication in Premium/Business Critical not keeping up |
| HADR_THROTTLE_LOG_RATE_LOG_SIZE | Feedback control, limiting rates to avoid an out of log space condition |
| HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO | Geo-replication feedback control, limiting log rate to avoid high data latency and unavailability of geo-secondaries|
-|||
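As a sketch, one way to see whether active requests are currently being shaped by the log rate governor (using the wait type prefix shared by the rows above):

```sql
-- Sketch: find active requests waiting on log rate governor wait types.
SELECT session_id, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE wait_type LIKE N'HADR_THROTTLE_LOG_RATE%';
```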
+ When encountering a log rate limit that is hampering desired scalability, consider the following options:
WHERE database_id = DB_ID();
|`slo_name`|Service objective name, including hardware generation|
|`user_data_directory_space_quota_mb`|**Maximum local storage**, in MB|
|`user_data_directory_space_usage_mb`|Current local storage consumption by data files, transaction log files, and tempdb files, in MB. Updated every five minutes.|
-|||
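A sketch of a query over these columns, assuming they are exposed by the `sys.dm_user_db_resource_governance` view as the snippet above suggests:

```sql
-- Sketch: current local storage quota and usage for this database.
SELECT slo_name,
       user_data_directory_space_quota_mb,
       user_data_directory_space_usage_mb
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();
```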
+ This query should be executed in the user database, not in the master database. For elastic pools, the query can be executed in any database in the pool. Reported values apply to the entire pool.
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
The following table describes per database properties for pooled databases.
| Max vCores per database |The maximum number of vCores that any database in the pool may use, if available based on utilization by other databases in the pool. Max vCores per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max vCores per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
| Min vCores per database |The minimum number of vCores reserved for any database in the pool. Consider setting a min vCores per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min vCores per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average vCores utilization per database.|
| Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
-|||
+ > [!IMPORTANT]
> Because resources in an elastic pool are finite, setting min vCores per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min vCores guarantee are not available to databases active at that point in time.
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-single-databases.md
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A|
|Read Scale-out|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
azure-sql Saas Dbpertenant Get Started Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-dbpertenant-get-started-deploy.md
The Wingtip application uses [*Azure Traffic Manager*](../../traffic-manager/tr
| .*&lt;user&gt;* | *af1* in the example. |
| .trafficmanager.net/ | Traffic Manager, base URL. |
| fabrikamjazzclub | Identifies the tenant named Fabrikam Jazz Club. |
- | &nbsp; | &nbsp; |
+ - The tenant name is parsed from the URL by the events app.
- The tenant name is used to create a key.
azure-sql Saas Dbpertenant Restore Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-dbpertenant-restore-single-tenant.md
In this tutorial, you learn two data recovery patterns:
|:--|:--|
| Restore into a parallel database | This pattern can be used for tasks such as review, auditing, and compliance to allow a tenant to inspect their data from an earlier point. The tenant's current database remains online and unchanged. |
| Restore in place | This pattern is typically used to recover a tenant to an earlier point, after a tenant accidentally deletes or corrupts data. The original database is taken offline and replaced with the restored database. |
-|||
+ To complete this tutorial, make sure the following prerequisites are completed:
azure-sql Saas Tenancy App Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-tenancy-app-design-patterns.md
The following table summarizes the differences between the main tenancy models.
| Performance monitoring and management | Per-tenant only | Aggregate + per-tenant | Aggregate, although per-tenant only for singles. |
| Development complexity | Low | Low | Medium; due to sharding. |
| Operational complexity | Low-High. Individually simple, complex at scale. | Low-Medium. Patterns address complexity at scale. | Low-High. Individual tenant management is complex. |
-| &nbsp; ||||
+ ## Next steps
azure-sql Auditing Threat Detection Powershell Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/auditing-threat-detection-powershell-configure.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabaseAuditing](/powershell/module/az.sql/set-azsqldatabaseaudit) | Sets the auditing policy for a database. |
| Set-AzSqlDatabaseThreatDetectionPolicy | Sets an Advanced Threat Protection policy on a database. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Copy Database To New Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/copy-database-to-new-server-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database or elastic pool. |
| [New-AzSqlDatabaseCopy](/powershell/module/az.sql/new-azsqldatabasecopy) | Creates a copy of a database that uses the snapshot at the current time. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Create And Configure Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/create-and-configure-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule) | Creates a server-level firewall rule for a server. |
| [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database in a server. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Monitor And Scale Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Updates database properties or moves the database into, out of, or between elastic pools. |
| [Add-AzMetricAlertRule](/powershell/module/az.monitor/add-azmetricalertrule) | Sets an alert rule to automatically monitor metrics in the future. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Monitor And Scale Pool Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-pool-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlElasticPool](/powershell/module/az.sql/set-azsqlelasticpool) | Updates elastic pool properties. |
| [Add-AzMetricAlertRule](/powershell/module/az.monitor/add-azmetricalertrule) | Sets an alert rule to automatically monitor metrics in the future. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Move Database Between Elastic Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/move-database-between-elastic-pools-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database in a server. |
| [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Updates database properties or moves a database into, out of, or between elastic pools. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Setup Geodr And Failover Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-and-failover-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) | Gets the geo-replication links between an Azure SQL Database and a resource group or logical SQL server. |
| [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) | Terminates data replication between a database and the specified secondary database. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Setup Geodr And Failover Elastic Pool Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-and-failover-elastic-pool-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabaseSecondary](/powershell/module/az.sql/set-azsqldatabasesecondary)| Switches a secondary database to be primary in order to initiate failover.|
| [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) | Gets the geo-replication links between an Azure SQL Database and a resource group or logical SQL server. |
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Sql Data Sync Sync Data Between Azure Onprem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-azure-onprem.md
This script uses the following commands. Each command in the table links to comm
| [Update-AzSqlSyncGroup](/powershell/module/az.sql/Update-azSqlSyncGroup) | Updates the Sync Group. |
| [Start-AzSqlSyncGroupSync](/powershell/module/az.sql/Start-azSqlSyncGroupSync) | Triggers a sync. |
| [Get-AzSqlSyncGroupLog](/powershell/module/az.sql/Get-azSqlSyncGroupLog) | Checks the Sync Log. |
-|||
+ ## Next steps
azure-sql Sql Data Sync Sync Data Between Sql Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-sql-databases.md
This script uses the following commands. Each command in the table links to comm
| [Update-AzSqlSyncGroup](/powershell/module/az.sql/Update-azSqlSyncGroup) | Updates the sync group. |
| [Start-AzSqlSyncGroupSync](/powershell/module/az.sql/Start-azSqlSyncGroupSync) | Triggers a sync. |
| [Get-AzSqlSyncGroupLog](/powershell/module/az.sql/Get-azSqlSyncGroupLog) | Checks the Sync Log. |
-|||
+ ## Next steps
azure-sql Security Server Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/security-server-roles.md
Previously updated : 09/02/2021 Last updated : 03/14/2022
For example, the server-level role **##MS_ServerStateReader##** holds the permis
> [!NOTE]
> Any permission can be denied within user databases, in effect, overriding the server-wide grant via role membership. However, in the system database *master*, permissions cannot be granted or denied.
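A sketch of that override (`monitoring_user` is a hypothetical database user):

```sql
-- Sketch: run in a user database to hide its state from a principal,
-- even if the corresponding login holds state permissions via a server role.
DENY VIEW DATABASE STATE TO monitoring_user;
```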
-Azure SQL Database currently provides three fixed server roles. The permissions that are granted to the fixed server roles cannot be changed and these roles can't have other fixed roles as members. You can add server-level SQL logins as members to server-level roles.
+Azure SQL Database currently provides three fixed server roles. The permissions that are granted to the fixed server roles cannot be changed and these roles can't have other fixed roles as members. You can add server-level logins as members to server-level roles.
> [!IMPORTANT]
> Each member of a fixed server role can add other logins to that same role.
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id ;
GO
-```
+```
+ ### C. Complete example: Adding a login to a server-level role, retrieving metadata for role membership and permissions, and running a test query

#### Part 1: Preparing role membership and user account
SELECT * FROM sys.dm_exec_query_stats
```
+### D. Check server-level roles for Azure AD logins
+
+Run this command in the virtual master database to see all Azure AD logins that are part of server-level roles in SQL Database. For more information on Azure AD server logins, see [Azure Active Directory server principals](authentication-azure-ad-logins.md).
+
+```sql
+SELECT roles.principal_id AS RolePID,roles.name AS RolePName,
+ server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
+ FROM sys.server_role_members AS server_role_members
+ INNER JOIN sys.server_principals AS roles
+ ON server_role_members.role_principal_id = roles.principal_id
+ INNER JOIN sys.server_principals AS members
+ ON server_role_members.member_principal_id = members.principal_id;
+```
+
+### E. Check the virtual master database roles for specific logins
+
+Run this command in the virtual master database to check which roles `bob` has; change the value to match your principal.
+
+```sql
+SELECT DR1.name AS DbRoleName, isnull (DR2.name, 'No members') AS DbUserName
+ FROM sys.database_role_members AS DbRMem RIGHT OUTER JOIN sys.database_principals AS DR1
+ ON DbRMem.role_principal_id = DR1.principal_id LEFT OUTER JOIN sys.database_principals AS DR2
+ ON DbRMem.member_principal_id = DR2.principal_id
+ WHERE DR1.type = 'R' and DR2.name like 'bob%'
+```
+ ## Limitations of server-level roles

- Role assignments may take up to 5 minutes to become effective. Also for existing sessions, changes to server role assignments don't take effect until the connection is closed and reopened. This is due to the distributed architecture between the *master* database and other databases on the same logical server.
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-business-critical.md
The following table shows resource limits for both Azure SQL Database and Azure
| [**Read-only replicas**](read-scale-out.md) |1 built-in high availability replica is readable <br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) |1 built-in high availability replica is readable <br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
| **Pricing/Billing** |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |
| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions |
-| | |
+ ## Next steps
azure-sql Service Tier General Purpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-general-purpose.md
The following table shows resource limits for both Azure SQL Database and Azure
| [**Read-only replicas**](read-scale-out.md) | 0 built-in </br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
| **Pricing/Billing** | [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. |
| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-| | |
+ ## Next steps
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
For greater details, review resource limits for [logical server](resource-limits
|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|zone-redundant high availability (HA) (preview)|
|**Pricing/billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. |
|**Discount models**| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-| | |
+ > [!NOTE]
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Data Sync isn't the preferred solution for the following scenarios:
| Read Scale | [Use read-only replicas to load balance read-only query workloads](read-scale-out.md) |
| ETL (OLTP to OLAP) | [Azure Data Factory](https://azure.microsoft.com/services/data-factory/) or [SQL Server Integration Services](/sql/integration-services/sql-server-integration-services) |
| Migration from SQL Server to Azure SQL Database. However, SQL Data Sync can be used after the migration is completed, to ensure that the source and target are kept in sync. | [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) |
-|||
+ ## How it works
azure-sql Sql Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/sql-vulnerability-assessment.md
You can use Azure PowerShell cmdlets to programmatically manage your vulnerabili
| [Update-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-azSqlDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a database. |
| [Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed database. |
| [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed instance. |
-| &nbsp; | &nbsp; |
+ For a script example, see [Azure SQL vulnerability assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).
You can use Azure CLI commands to programmatically manage your vulnerability ass
| [az security va sql results show](/cli/azure/security/va/sql/results#az_security_va_sql_results_show) | View Sql Vulnerability Assessment scan results. |
| [az security va sql scans list](/cli/azure/security/va/sql/scans#az_security_va_sql_scans_list) | List all Sql Vulnerability Assessment scan summaries. |
| [az security va sql scans show](/cli/azure/security/va/sql/scans#az_security_va_sql_scans_show) | View Sql Vulnerability Assessment scan summaries. |
-| &nbsp; | &nbsp; |
azure-sql Transparent Data Encryption Tde Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-tde-overview.md
Use the following cmdlets for Azure SQL Database and Azure Synapse:
| [Set-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/set-azsqlservertransparentdataencryptionprotector) |Sets the transparent data encryption protector for a server. |
| [Get-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/get-azsqlservertransparentdataencryptionprotector) |Gets the transparent data encryption protector |
| [Remove-AzSqlServerKeyVaultKey](/powershell/module/az.sql/remove-azsqlserverkeyvaultkey) |Removes a Key Vault key from a server. |
-| | |
+ > [!IMPORTANT]
> For Azure SQL Managed Instance, use the T-SQL [ALTER DATABASE](/sql/t-sql/statements/alter-database-azure-sql-database) command to turn TDE on and off on a database level, and check [sample PowerShell script](transparent-data-encryption-byok-configure.md) to manage TDE on an instance level.
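A sketch of that T-SQL (`MyDatabase` is a hypothetical database name):

```sql
-- Sketch: enable transparent data encryption for one database.
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
```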
Connect to the database by using a login that is an administrator or member of t
| [ALTER DATABASE (Azure SQL Database)](/sql/t-sql/statements/alter-database-azure-sql-database) | SET ENCRYPTION ON/OFF encrypts or decrypts a database |
| [sys.dm_database_encryption_keys](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-encryption-keys-transact-sql) |Returns information about the encryption state of a database and its associated database encryption keys |
| [sys.dm_pdw_nodes_database_encryption_keys](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-nodes-database-encryption-keys-transact-sql) |Returns information about the encryption state of each Azure Synapse node and its associated database encryption keys |
-| | |
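For example, a sketch using the first view from the table:

```sql
-- Sketch: report the encryption state of each database that has an encryption key.
-- encryption_state: 2 = encryption in progress, 3 = encrypted.
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```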
+ You can't switch the TDE protector to a key from Key Vault by using Transact-SQL. Use PowerShell or the Azure portal.
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/glossary-terms.md
Last updated 02/02/2022
|Compute size (service objective) ||Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc.
||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware generation for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).|
||DTU-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
-||||
+ ## Azure SQL Managed Instance
Last updated 02/02/2022
|Compute|Provisioned compute| SQL Managed Instance provides a specific amount of [compute resources](managed-instance/service-tiers-managed-instance-vcore.md#compute) that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour. |
|Hardware generation|Available hardware configurations| SQL Managed Instance [hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. |
|Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware generation for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
-||||
+ ## SQL Server on Azure VMs

|Context|Term|More information|
Last updated 02/02/2022
| SQL IaaS Agent extension | | The [SQL IaaS Agent extension](virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) (SqlIaasExtension) runs on SQL Server VMs to automate management and administration tasks. There's no extra cost associated with the extension. |
| | Automated patching | [Automated Patching](virtual-machines/windows/automated-patching.md) establishes a maintenance window for a SQL Server VM when security updates will be automatically applied by the SQL IaaS Agent extension. Note that there may be other mechanisms for applying Automatic Updates. If you configure automated patching using the SQL IaaS Agent extension you should ensure that there are no other conflicting update schedules. |
| | Automated backup | [Automated Backup v2](virtual-machines/windows/automated-backup.md) automatically configures Managed Backup to Microsoft Azure for all existing and new databases on a SQL Server VM running SQL Server 2016 or later Standard, Enterprise, or Developer editions. |
-||||
azure-sql Auto Failover Group Configure Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
Create the primary virtual network gateway using the Azure portal.
| **Virtual network**| Select the virtual network for your secondary managed instance. |
| **Public IP address**| Select **Create new**. |
| **Public IP address name**| Enter a name for your IP address. |
- | &nbsp; | &nbsp; |
+ 1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
The following table shows the values necessary for the gateway for the secondary
| **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-secondary`. |
| **Public IP address**| Select **Create new**. |
| **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ ![Secondary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-secondary-gateway.png)
The following table lists specific permission scopes for Azure SQL Managed Insta
|**Create failover group**| Azure RBAC write access | Primary managed instance </br> Secondary managed instance|
| **Update failover group** | Azure RBAC write access | Failover group </br> All databases within the managed instance|
| **Fail over failover group** | Azure RBAC write access | Failover group on new primary managed instance |
-| | |
+ ## Next steps
azure-sql Auto Failover Group Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-sql-mi.md
Previously updated : 03/01/2022 Last updated : 03/15/2022

# Auto-failover groups overview & best practices (Azure SQL Managed Instance)
Due to the high latency of wide area networks, geo-replication uses an asynchron
> [!NOTE]
> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
+## Failover group status
+The auto-failover group reports a status that describes the current state of data replication:
+
+- Seeding - [Initial seeding](auto-failover-group-sql-mi.md#initial-seeding) takes place after creation of the failover group, until all user databases are initialized on the secondary instance. The failover process cannot be initiated while the auto-failover group is in the Seeding status, because user databases are not yet copied to the secondary instance.
+- Synchronizing - the usual status of an auto-failover group. It means that data changes on the primary instance are being replicated asynchronously to the secondary instance. This status doesn't guarantee that the data is fully synchronized at every moment; there may be data changes from the primary still to be replicated to the secondary due to the asynchronous nature of the replication process between instances in the auto-failover group. Both automatic and manual failovers can be initiated while the auto-failover group is in the Synchronizing status.
+- Failover in progress - this status indicates that either automatically or manually initiated failover process is in progress. No changes to the failover group or additional failovers can be initiated while the auto-failover group is in this status.
+ ## Permissions <!--
azure-sql Azure App Sync Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/azure-app-sync-network-configuration.md
- Title: Sync network configuration for Azure App Service -
-description: This article discusses how to sync your network configuration for Azure App Service hosting plan with your Azure SQL Managed Instance.
-------- Previously updated : 12/13/2018-
-# Sync networking configuration for Azure App Service hosting plan with Azure SQL Managed Instance
-
-It might happen that although you [integrated your app with an Azure Virtual Network](../../app-service/overview-vnet-integration.md), you can't establish a connection to SQL Managed Instance. Refreshing, or synchronizing, the networking configuration for your service plan can resolve this issue.
-
-## Sync network configuration
-
-To do that, follow these steps:
-
-1. Go to your web apps App Service plan.
-
- ![Screenshot of App Service plan](./media/azure-app-sync-network-configuration/app-service-plan.png)
-
-2. Select **Networking** and then select **Click here to Manage**.
-
- ![Screenshot of manage service plan](./media/azure-app-sync-network-configuration/manage-plan.png)
-
-3. Select your **VNet** and click **Sync Network**.
-
- ![Screenshot of sync network](./media/azure-app-sync-network-configuration/sync.png)
-
-4. Wait until the sync is done.
-
- ![Screenshot of sync done](./media/azure-app-sync-network-configuration/sync-done.png)
-
-You are now ready to try to re-establish your connection to your SQL Managed Instance.
-
-## Next steps
--- For information about configuring your VNet for SQL Managed Instance, see [SQL Managed Instance VNet architecture](connectivity-architecture-overview.md) and [How to configure existing VNet](vnet-existing-add-subnet.md).
azure-sql Connect Application Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/connect-application-instance.md
Once you have the basic infrastructure set up, you need to modify some settings
You can also connect an application that's hosted by Azure App Service. In order to access it from Azure App Service via virtual network, you first need to make a connection between the application and the SQL Managed Instance virtual network. See [Integrate your app with an Azure virtual network](../../app-service/overview-vnet-integration.md). For data access to your managed instance from outside a virtual network see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
-For troubleshooting Azure App Service access via virtual network, see [Troubleshooting virtual networks and applications](../../app-service/overview-vnet-integration.md#troubleshooting). If a connection cannot be established, try [syncing the networking configuration](azure-app-sync-network-configuration.md).
+For troubleshooting Azure App Service access via virtual network, see [Troubleshooting virtual networks and applications](../../app-service/overview-vnet-integration.md#troubleshooting).
A special case of connecting Azure App Service to SQL Managed Instance is when you integrate Azure App Service to a network peered to a SQL Managed Instance virtual network. That case requires the following configuration to be set up:
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
These routes are necessary to ensure that management traffic is routed directly
|mi-storage-REGION-internet|Storage.REGION|Internet| |mi-storage-REGION_PAIR-internet|Storage.REGION_PAIR|Internet| |mi-azureactivedirectory-internet|AzureActiveDirectory|Internet|
-||||
+ \* MI SUBNET refers to the IP address range for the subnet in the form x.x.x.x/y. You can find this information in the Azure portal, in subnet properties.
azure-sql Doc Changes Updates Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-known-issues.md
This article lists the currently known issues with [Azure SQL Managed Instance](
|Point-in-time database restore from Business Critical tier to General Purpose tier will not succeed if source database contains in-memory OLTP objects.||Resolved|Oct 2019| |Database mail feature with external (non-Azure) mail servers using secure connection||Resolved|Oct 2019| |Contained databases not supported in SQL Managed Instance||Resolved|Aug 2019|
-|||||
+ ## Resolved
azure-sql Failover Group Add Instance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
To create a virtual network, follow these steps:
| **Region** | The location where you will deploy your secondary managed instance. | | **Subnet** | The name for your subnet. `default` is provided for you by default. | | **Address range**| The address range for your subnet. This must be different than the subnet address range used by the virtual network of your primary managed instance, such as `10.128.0.0/24`. |
- | &nbsp; | &nbsp; |
+ ![Secondary virtual network values](./media/failover-group-add-instance-tutorial/secondary-virtual-network.png)
Create the secondary managed instance using the Azure portal.
| **Region**| The location for your secondary managed instance. | | **SQL Managed Instance admin login** | The login you want to use for your new secondary managed instance, such as `azureuser`. | | **Password** | A complex password that will be used by the admin login for the new secondary managed instance. |
- | &nbsp; | &nbsp; |
+ 1. Under the **Networking** tab, for the **Virtual Network**, select the virtual network you created for the secondary managed instance from the drop-down.
Create the gateway for the virtual network of your primary managed instance usin
| **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-primary`. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address, such as `primary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ 1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
Using the Azure portal, repeat the steps in the previous section to create the v
| **Virtual network**| Select the virtual network for the secondary managed instance, such as `vnet-sql-mi-secondary`. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ ![Secondary gateway settings](./media/failover-group-add-instance-tutorial/settings-for-secondary-gateway.png)
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
In this article you can find a content reference to various guides, scripts, and
Secure your subnet against erroneous or malicious data exfiltration into unauthorized Azure Storage accounts. - [Configure custom DNS](custom-dns-configure.md): Configure custom DNS to grant external resource access to custom domains from SQL Managed Instance via a linked server of db mail profiles. -- [Sync network configuration](azure-app-sync-network-configuration.md):
- Refresh the networking configuration plan if you can't establish a connection after [integrating your app with an Azure virtual network](../../app-service/overview-vnet-integration.md).
- [Find the management endpoint IP address](management-endpoint-find-ip-address.md): Determine the public endpoint that SQL Managed Instance is using for management purposes. - [Verify built-in firewall protection](management-endpoint-verify-built-in-firewall.md):
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
+
+ Title: Fail over database with link feature with T-SQL and PowerShell scripts
+
+description: This guide teaches you how to use the SQL Managed Instance link with scripts to fail over database from SQL Server to Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/15/2022++
+# Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
++
+This article teaches you to use T-SQL and PowerShell scripts for the [Managed Instance link feature](link-feature.md) to fail over (migrate) your database from SQL Server to Azure SQL Managed Instance.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+> [!NOTE]
+> Configuration on the Azure side is done with PowerShell that calls the SQL Managed Instance REST API. Support for Azure PowerShell and the Azure CLI will be released in the upcoming weeks, at which point this article will be updated with the simplified PowerShell scripts.
+
+> [!TIP]
+> SQL Managed Instance link database failover can be set up with the [SSMS wizard](managed-instance-link-use-ssms-to-failover-database.md).
+
+Database failover from the SQL Server instance to SQL Managed Instance breaks the link between the two databases. Failover stops replication and leaves both databases in an independent state, ready for individual read-write workloads.
+
+To start migrating the database to SQL Managed Instance, first stop the application workload on SQL Server during your maintenance hours. This is required so that SQL Managed Instance can catch up with database replication and you can migrate to Azure without any data loss.
+
+While the database is part of an Always On Availability Group, it isn't possible to set it to read-only mode. You'll need to ensure that your application(s) aren't committing transactions to SQL Server; a quick check is sketched below.
+
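+One quick sanity check is to look for sessions that still hold an open transaction while their current database context is the database being migrated. The following is a minimal sketch, assuming `<DatabaseName>` is your database name; an empty result suggests the workload has stopped.
+
+```sql
+-- Sessions with an open transaction whose current database context
+-- is the database being migrated.
+SELECT
+    s.session_id, s.login_name, s.host_name, s.program_name
+FROM
+    sys.dm_tran_session_transactions t
+    JOIN sys.dm_exec_sessions s ON s.session_id = t.session_id
+WHERE
+    s.database_id = DB_ID('<DatabaseName>')
+```
+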
+## Switch the replication mode from asynchronous to synchronous
+
+The replication between SQL Server and SQL Managed Instance is asynchronous by default. Before you perform database migration to Azure, the link needs to be switched to synchronous mode. Synchronous replication across distances might slow down transactions on the primary SQL Server.
+Switching from async to sync mode requires a replication mode change on both SQL Managed Instance and SQL Server.
+
+## Switch replication mode on Managed Instance
+
+Use the following PowerShell script to call the REST API that changes the replication mode from asynchronous to synchronous on SQL Managed Instance. We suggest you execute the REST API call using Azure Cloud Shell in the Azure portal.
+
+Replace `<SubscriptionID>` with your subscription ID and replace `<ManagedInstanceName>` with the name of your managed instance. Replace `<DAGName>` with the name of the Distributed Availability Group link whose replication mode you'd like to change.
+
+```powershell
+# ====================================================================================
+# POWERSHELL SCRIPT TO SWITCH REPLICATION MODE SYNC-ASYNC ON MANAGED INSTANCE
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ====================================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter the Distributed Availability Group name
+$DAGName = "<DAGName>"
+
+# ====================================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
+# ====================================================================================
+# Log in and select subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+
+# Build URI for the API call
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+
+# Build API request body
+#
+
+$bodyFull = @"
+{
+ "properties":{
+ "ReplicationMode":"sync"
+ }
+}"@
+
+echo $bodyFull
+
+# Get auth token and build the header
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+
+# Invoke API call
+#
+echo "Invoking API call switch Async-Sync replication mode on Managed Instance"
+Invoke-WebRequest -Method PATCH -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+```
+
+## Switch replication mode on SQL Server
+
+Use the following T-SQL script to change the replication mode of the Distributed Availability Group on SQL Server from async to sync. Replace `<DAGName>` with the name of the Distributed Availability Group, and replace `<AGName>` with the name of the Availability Group created on SQL Server. In addition, replace `<ManagedInstanceName>` with the name of your SQL Managed Instance.
+After this step, data changes are replicated synchronously between SQL Server and SQL Managed Instance.
+
+```sql
+-- Sets the Distributed Availability Group to synchronous commit.
+-- ManagedInstanceName example 'sqlmi1'
+USE master
+GO
+ALTER AVAILABILITY GROUP [<DAGName>]
+MODIFY
+AVAILABILITY GROUP ON
+ '<AGName>' WITH
+ (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
+ '<ManagedInstanceName>' WITH
+ (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
+```
+
+To validate the change of the link replication mode, query the following DMV. The results should indicate the SYNCHRONOUS_COMMIT state.
+
+```sql
+-- Verifies the state of the distributed availability group
+SELECT
+ ag.name, ag.is_distributed, ar.replica_server_name,
+ ar.availability_mode_desc, ars.connected_state_desc, ars.role_desc,
+ ars.operational_state_desc, ars.synchronization_health_desc
+FROM
+ sys.availability_groups ag
+ join sys.availability_replicas ar
+ on ag.group_id=ar.group_id
+ left join sys.dm_hadr_availability_replica_states ars
+ on ars.replica_id=ar.replica_id
+WHERE
+ ag.is_distributed=1
+```
+
+With both SQL Managed Instance and SQL Server switched to sync mode, the replication between the two entities is now synchronous. If you need to reverse this state, follow the same steps and set the async state for both SQL Server and SQL Managed Instance; the SQL Server side of such a change is sketched below.
+
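+For illustration, the following minimal sketch reverts the SQL Server side of the link to asynchronous commit; it mirrors the script above with the availability mode swapped and uses the same placeholder names.
+
+```sql
+-- Sets the Distributed Availability Group back to asynchronous commit.
+USE master
+GO
+ALTER AVAILABILITY GROUP [<DAGName>]
+MODIFY
+AVAILABILITY GROUP ON
+    '<AGName>' WITH
+    (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT),
+    '<ManagedInstanceName>' WITH
+    (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
+```
+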
+## Check LSN values on both SQL Server and Managed Instance
+
+To complete the migration, you need to ensure that replication has finished. For this, verify that the LSNs (Log Sequence Numbers) indicating the last log records written on SQL Server and SQL Managed Instance are the same. Initially, it's expected that the SQL Server LSN will be higher than the LSN on SQL Managed Instance, because SQL Managed Instance might lag somewhat behind the primary SQL Server due to network latency. After some time, the LSNs on SQL Managed Instance and SQL Server should match and stop changing, since the workload on SQL Server has been stopped.
+
+Use the following T-SQL query on SQL Server to read the LSN of the last recorded transaction log record. Replace `<DatabaseName>` with your database name and look for the last hardened LSN, as shown below.
+
+```sql
+-- Obtain last hardened LSN for a database on SQL Server.
+SELECT
+ ag.name AS [Replication group],
+ db.name AS [Database name],
+ drs.database_id AS [Database ID],
+ drs.group_id,
+ drs.replica_id,
+ drs.synchronization_state_desc AS [Sync state],
+ drs.end_of_log_lsn AS [End of log LSN],
+ drs.last_hardened_lsn AS [Last hardened LSN]
+FROM
+ sys.dm_hadr_database_replica_states drs
+ inner join sys.databases db on db.database_id = drs.database_id
+ inner join sys.availability_groups ag on drs.group_id = ag.group_id
+WHERE
+ ag.is_distributed = 1 and db.name = '<DatabaseName>'
+```
+
+Use the following T-SQL query on SQL Managed Instance to read the last hardened LSN for your database. Replace `<DatabaseName>` with your database name.
+
+The query below works on General Purpose SQL Managed Instance. For Business Critical Managed Instance, you'll need to uncomment `and drs.is_primary_replica = 1` at the end of the script. On Business Critical, this filter makes sure that only primary replica details are read.
+
+```sql
+-- Obtain LSN for a database on SQL Managed Instance.
+SELECT
+ db.name AS [Database name],
+ drs.database_id AS [Database ID],
+ drs.group_id,
+ drs.replica_id,
+ drs.synchronization_state_desc AS [Sync state],
+ drs.end_of_log_lsn AS [End of log LSN],
+ drs.last_hardened_lsn AS [Last hardened LSN]
+FROM
+ sys.dm_hadr_database_replica_states drs
+ inner join sys.databases db on db.database_id = drs.database_id
+WHERE
+ db.name = '<DatabaseName>'
+ -- for BC add the following as well
+ -- AND drs.is_primary_replica = 1
+```
+
+Verify once again that your workload is stopped on SQL Server. Check that LSNs on both SQL Server and SQL Managed Instance match, and that they remain matched and unchanged for some time. Stable LSNs on both ends indicate that the tail log has been replicated to SQL Managed Instance and the workload is effectively stopped. Proceed to the next step to initiate database failover and migration to Azure.
+
+## Initiate database failover and migration to Azure
+
+SQL Managed Instance link database failover and migration to Azure is accomplished by invoking a REST API call. The call closes the link and completes the replication on SQL Managed Instance. The replicated database becomes read-write on SQL Managed Instance.
+
+Use the following API call to initiate database failover to Azure. Replace `<SubscriptionID>` with your actual Azure subscription ID, and replace `<ManagedInstanceName>` with the name of your SQL Managed Instance (the script looks up the resource group automatically). In addition, replace `<DAGName>` with the name of the Distributed Availability Group made on SQL Server.
+
+```PowerShell
+# ====================================================================================
+# POWERSHELL SCRIPT TO FAILOVER AND MIGRATE DATABASE WITH SQL MANAGED INSTANCE LINK
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ====================================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter the Distributed Availability Group link name
+$DAGName = "<DAGName>"
+
+# ====================================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE.
+# ====================================================================================
+# Log in and select subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+
+# Build URI for the API call
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+
+# Get auth token and build the header
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+
+# Invoke API call
+#
+Invoke-WebRequest -Method DELETE -Headers $headers -Uri $uriFull -ContentType "application/json"
+```
+
+## Cleanup Availability Group and Distributed Availability Group on SQL Server
+
+After breaking the link and migrating the database to Azure SQL Managed Instance, consider cleaning up the Availability Group and Distributed Availability Group on SQL Server if they aren't used otherwise.
+Replace `<DAGName>` with the name of the Distributed Availability Group on SQL Server and replace `<AGName>` with the Availability Group name on SQL Server.
+
+``` sql
+DROP AVAILABILITY GROUP <DAGName>
+GO
+DROP AVAILABILITY GROUP <AGName>
+GO
+```
+
+With this step, the migration of the database from SQL Server to Managed Instance has been completed.
+
+## Next steps
+
+For more information on the link feature, see the following resources:
+
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link with scripts to replicate database](./managed-instance-link-use-scripts-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
+
+ Title: Replicate database with link feature with T-SQL and PowerShell scripts
+
+description: This guide teaches you how to use the SQL Managed Instance link with scripts to replicate database from SQL Server to Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/15/2022++
+# Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
++
+This article teaches you to use T-SQL and PowerShell scripts to set up the [Managed Instance link feature](link-feature.md) and replicate your database from SQL Server to Azure SQL Managed Instance.
+
+Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+> [!NOTE]
+> Configuration on the Azure side is done with PowerShell that calls the SQL Managed Instance REST API. Support for Azure PowerShell and the Azure CLI will be released in the upcoming weeks, at which point this article will be updated with the simplified PowerShell scripts.
+
+> [!TIP]
+> SQL Managed Instance link database replication can be set up with the [SSMS wizard](managed-instance-link-use-ssms-to-replicate-database.md).
+
+## Prerequisites
+
+To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- A properly [prepared environment](managed-instance-link-preparation.md).
+
+## Terminology and naming conventions
+
+When executing the scripts in this user guide, it's important not to confuse, for example, the SQL Server or Managed Instance name with their fully qualified domain names.
+The following table explains what the different names represent and how to obtain their values.
+
+| Terminology | Description | How to find out |
+| :-| :- | :- |
+| SQL Server name | Also referred to as a short SQL Server name. For example: **"sqlserver1"**. This isn't a fully qualified domain name. | Execute `SELECT @@SERVERNAME` from T-SQL. |
+| SQL Server FQDN | Fully qualified domain name of your SQL Server. For example: **"sqlserver1.domain.com"**. | From your on-premises network (DNS) configuration, or the server name if using an Azure VM. |
+| Managed Instance name | Also referred to as a short Managed Instance name. For example: **"managedinstance1"**. | See the name of your Managed Instance in the Azure portal. |
+| SQL Managed Instance FQDN | Fully qualified domain name of your SQL Managed Instance. For example: **"managedinstance1.6d710bcf372b.database.windows.net"**. | See the host name on the SQL Managed Instance overview page in the Azure portal. |
+| Resolvable domain name | DNS name that can be resolved to an IP address. For example, executing **"nslookup sqlserver1.domain.com"** should return an IP address, such as 10.0.1.100. | Use nslookup from the command prompt. |
+
+## Trust between SQL Server and SQL Managed Instance
+
+The first step in creating the SQL Managed Instance link is establishing trust between the two entities and securing the endpoints used for communication and encryption of data across the network. Distributed Availability Groups technology in SQL Server doesn't have its own database mirroring endpoint; it uses the existing Availability Group database mirroring endpoint instead. This is why security and trust between the two entities need to be configured for the Availability Group database mirroring endpoint.
+
+Certificate-based trust is the only supported way to secure database mirroring endpoints on SQL Server and SQL Managed Instance. If you have existing Availability Groups that use Windows authentication, certificate-based trust needs to be added to the existing mirroring endpoint as a secondary authentication option. This can be done by using the ALTER ENDPOINT statement, as shown later in this article.
+
+> [!IMPORTANT]
+> Certificates are generated with an expiry date and time, and they need to be rotated before they expire.
+
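+As a quick check, the following minimal sketch lists certificates and their expiry dates using the standard `sys.certificates` catalog view, so you can plan rotation ahead of time.
+
+```sql
+-- List certificates and their expiry dates so they can be rotated in time.
+USE MASTER
+GO
+SELECT name, subject, start_date, expiry_date
+FROM sys.certificates
+ORDER BY expiry_date;
+```
+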
+Here's the overview of the process to secure database mirroring endpoints for both SQL Server and SQL Managed Instance:
+- Generate certificate on SQL Server and obtain its public key.
+- Obtain public key of SQL Managed Instance certificate.
+- Exchange the public keys between the SQL Server and SQL Managed Instance.
+
+The following sections describe the steps to complete these actions.
+
+## Create certificate on SQL Server and import its public key to Managed Instance
+
+First, create a master key on SQL Server and generate an authentication certificate.
+
+```sql
+-- Create MASTER KEY encryption password
+-- Keep the password confidential and in a secure place.
+USE MASTER
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
+GO
+
+-- Create the SQL Server certificate for SQL Managed Instance link
+USE MASTER
+GO
+
+DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
+DECLARE @sqlserver_certificate_subject NVARCHAR(MAX) = N'Certificate for ' + @sqlserver_certificate_name
+DECLARE @create_sqlserver_certificate_command NVARCHAR(MAX) = N'CREATE CERTIFICATE [' + @sqlserver_certificate_name + '] WITH SUBJECT = ''' + @sqlserver_certificate_subject + ''', EXPIRY_DATE = ''03/30/2025'''
+EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
+GO
+```
+
+Then, use the following T-SQL query to verify the certificate has been created.
+
+```sql
+USE MASTER
+GO
+SELECT * FROM sys.certificates
+```
+
+In the query results, you'll find the certificate and see that it has been encrypted with the master key.
+
+Now you can get the public key of the generated certificate.
+
+```sql
+-- Show the public key of the generated SQL Server certificate
+USE MASTER
+GO
+DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
+DECLARE @PUBLICKEYENC VARBINARY(MAX) = CERTENCODED(CERT_ID(@sqlserver_certificate_name));
+SELECT @PUBLICKEYENC AS PublicKeyEncoded;
+```
+
+Save the value of PublicKeyEncoded from the output, as it will be needed for the next step.
+
+The next step should be executed in PowerShell with the Az.Sql module (version 3.5.1 or higher) installed, or by using Azure Cloud Shell online to run the commands, as it's always updated with the latest module versions.
+
+Execute the following PowerShell script in Azure Cloud Shell (fill out the necessary user information, copy it, paste it into Azure Cloud Shell, and execute).
+Replace `<YourSubscriptionID>` with your Azure subscription ID. Replace `<YourManagedInstanceName>` with the short name of your managed instance. Replace `<PublicKeyEncoded>` with the public portion of the SQL Server certificate in binary format generated in the previous step. That value is a long string starting with 0x that you obtained from SQL Server.
+
+
+```powershell
+# ===============================================================================
+# POWERSHELL SCRIPT TO IMPORT SQL SERVER CERTIFICATE TO MANAGED INSTANCE
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ===============================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<YourSubscriptionID>"
+
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<YourManagedInstanceName>"
+
+# Insert the cert public key blob you got from the SQL Server
+$PublicKeyEncoded = "<PublicKeyEncoded>"
++
+# ===============================================================================
+# INVOKING THE API CALL -- REST OF THE SCRIPT IS NOT USER CONFIGURABLE
+# ===============================================================================
+# Log in and select Subscription if needed.
+#
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
++
+# Build URI for the API call.
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/hybridCertificate?api-version=2020-11-01-preview"
+echo $uriFull
+
+# Build API request body.
+#
+$bodyFull = @"
+{
+ "properties":{ "PublicBlob":"$PublicKeyEncoded" }
+}"@
+
+echo $bodyFull
++
+# Get auth token and build the HTTP request header.
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
++
+# Invoke API call
+#
+Invoke-WebRequest -Method POST -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+```
+
+The result of this operation will be a time stamp of the successful upload of the SQL Server certificate public key to Managed Instance.
+
+## Get the Managed Instance public certificate public key and import it to SQL Server
+
+The certificate for securing the endpoint for the SQL Managed Instance link is automatically generated. This section describes how to get the SQL Managed Instance certificate public key, and how to import it to SQL Server.
+
+Use SSMS to connect to the SQL Managed Instance and execute the stored procedure [sp_get_endpoint_certificate](/sql/relational-databases/system-stored-procedures/sp-get-endpoint-certificate-transact-sql) to get the certificate public key.
+
+```sql
+-- Execute stored procedure on SQL Managed Instance to get public key of the instance certificate.
+EXEC sp_get_endpoint_certificate @endpoint_type = 4
+```
+
+Copy the entire public key from Managed Instance (starting with "0x") shown in the previous step and use it in the query below by replacing `<InstanceCertificate>` with the key value. No quotation marks are needed.
+
+> [!IMPORTANT]
+> The name of the certificate must be the SQL Managed Instance FQDN.
+
+```sql
+USE MASTER
+CREATE CERTIFICATE [<SQLManagedInstanceFQDN>]
+FROM BINARY = <InstanceCertificate>
+```
+
+Finally, verify all created certificates by viewing the following DMV.
+
+```sql
+SELECT * FROM sys.certificates
+```
+
+## Mirroring endpoint on SQL Server
+
+If you don't have an existing Availability Group or mirroring endpoint, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have an existing Availability Group or mirroring endpoint, go straight to the next section, "Altering existing database mirroring endpoint".
+To verify that you don't have an existing database mirroring endpoint created, use the following script.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT * FROM sys.database_mirroring_endpoints WHERE type_desc = 'DATABASE_MIRRORING'
+```
+
+If the above query doesn't show an existing database mirroring endpoint, execute the following script to create a new database mirroring endpoint on port 5022 and secure it with a certificate. Replace `<SQL_SERVER_CERTIFICATE>` with the name of the SQL Server certificate created earlier.
+
+```sql
+-- Create connection endpoint listener on SQL Server
+USE MASTER
+CREATE ENDPOINT database_mirroring_endpoint
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = CERTIFICATE <SQL_SERVER_CERTIFICATE>,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+GO
+```
+
+Validate that the mirroring endpoint was created by executing the following on SQL Server.
++
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc,
+ connection_auth_desc, is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+The new mirroring endpoint is created with CERTIFICATE authentication and AES encryption enabled.
+
+### Altering existing database mirroring endpoint
+
+> [!NOTE]
+> Skip this step if you've just created a new mirroring endpoint. Use this step only if using existing Availability Groups with existing database mirroring endpoint.
++
+If existing Availability Groups are used for the SQL Managed Instance link, or if there's an existing database mirroring endpoint, first validate that it satisfies the following mandatory conditions for the SQL Managed Instance link:
+- Type must be "DATABASE_MIRRORING".
+- Connection authentication must be "CERTIFICATE".
+- Encryption must be enabled.
+- Encryption algorithm must be "AES".
+
+Execute the following query to view details for an existing database mirroring endpoint.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc, connection_auth_desc,
+ is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+If the output shows that the existing DATABASE_MIRRORING endpoint connection_auth_desc isn't "CERTIFICATE", or encryption_algorithm_desc isn't "AES", the **endpoint needs to be altered to meet the requirements**.
+
+On SQL Server, one database mirroring endpoint is used for both Availability Groups and Distributed Availability Groups. If your connection_auth_desc is NTLM (Windows authentication) or KERBEROS, and you need Windows authentication for an existing Availability Group, it's possible to alter the endpoint to use multiple authentication methods by switching the auth option to NEGOTIATE CERTIFICATE. This allows the existing AG to use Windows authentication, while using certificate authentication for SQL Managed Instance. See details of the possible options on the documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+
+Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to use both algorithms. See details of the possible options on the documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+
+The script below is provided as an example of how to alter your existing database mirroring endpoint. Depending on your specific configuration, you might need to customize it further for your scenario. Replace `<YourExistingEndpointName>` with your existing endpoint name. Replace `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on the SQL Server.
+
+```sql
+-- Alter the existing database mirroring endpoint to use CERTIFICATE for authentication and AES for encryption
+USE MASTER
+ALTER ENDPOINT <YourExistingEndpointName>
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE <CERTIFICATE-NAME>,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+GO
+```
+
+After running the ALTER ENDPOINT query and setting the dual authentication mode to Windows and certificate, run this query again to show the database mirroring endpoint details.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc, connection_auth_desc,
+ is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+With this, you've successfully modified your database mirroring endpoint for the SQL Managed Instance link.
+
+## Availability Group on SQL Server
+
+If you don't have an existing AG, the next step is to create an AG on SQL Server. If you do have an existing AG, go straight to the next section, "Use existing Availability Group (AG) on SQL Server". A new AG needs to be created with the following parameters for the Managed Instance link:
+- Specify SQL Server name
+- Specify database name
+- Failover mode MANUAL
+- Seeding mode AUTOMATIC
+
+Use the following script to create a new AG on SQL Server. Replace `<SQLServerName>` with the name of your SQL Server. Find out your SQL Server name by executing the following T-SQL:
+
+```sql
+SELECT @@SERVERNAME AS SQLServerName
+```
+
+Replace `<AGName>` with the name of your availability group. For multiple databases, you'll need to create multiple Availability Groups. The Managed Instance link requires one database per AG. In this respect, consider naming each AG so that its name reflects the corresponding database - for example, `AG_<db_name>`. Replace `<DatabaseName>` with the name of the database you wish to replicate. Replace `<SQLServerIP>` with the SQL Server's IP address. Alternatively, a resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual network.
+
+```sql
+-- Create primary AG on SQL Server
+USE MASTER
+CREATE AVAILABILITY GROUP [<AGName>]
+WITH (CLUSTER_TYPE = NONE)
+ FOR database [<DatabaseName>]
+ REPLICA ON
+ '<SQLServerName>' WITH
+ (
+ ENDPOINT_URL = 'TCP://<SQLServerIP>:5022',
+ AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
+ FAILOVER_MODE = MANUAL,
+ SEEDING_MODE = AUTOMATIC
+ );
+GO
+```
+
+> [!NOTE]
+> One database per single Availability Group is the current product limitation for replication to SQL Managed Instance using the link feature.
+> If you get Error 1475, you'll need to create a full backup without the COPY_ONLY option, which will start a new backup chain, as sketched after this note.
+> As a best practice, it's highly recommended that the collation on SQL Server and SQL Managed Instance is the same, because depending on collation settings, AG and DAG names might or might not be case sensitive. If there's a mismatch, there could be issues connecting SQL Server to Managed Instance successfully.
+
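+For illustration, a minimal sketch of a full backup that starts a new backup chain follows; the backup path is a placeholder to adjust for your environment.
+
+```sql
+-- Take a full backup WITHOUT the COPY_ONLY option to start a new backup chain.
+BACKUP DATABASE [<DatabaseName>]
+TO DISK = N'C:\Backup\<DatabaseName>_full.bak'
+WITH FORMAT, INIT;
+GO
+```
+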
+### Verify AG and distributed AG
+
+Use the following script to list all available Availability Groups and Distributed Availability Groups on the SQL Server. The Availability Group state needs to be connected, and the Distributed Availability Group state disconnected at this point. The Distributed Availability Group state will move to `connected` only when it has been joined with SQL Managed Instance, as explained in one of the next steps.
+
+```sql
+-- This will show that Availability Group and Distributed Availability Group have been created on SQL Server.
+SELECT
+ name, is_distributed, cluster_type_desc,
+ sequence_number, is_contained
+FROM
+ sys.availability_groups
+```
+
+Alternatively, in SSMS Object Explorer, expand the "Always On High Availability" node, then the "Availability Groups" folder to show available Availability Groups and Distributed Availability Groups.
+
+## Creating SQL Managed Instance link
+
+The final step of the setup process is to create the SQL Managed Instance link. To accomplish this, a REST API call is made. Invoking the API directly will be replaced with PowerShell and CLI clients, which will be delivered in one of the next releases.
+
+Invoking a direct API call to Azure can be accomplished with various API clients. However, for simplicity of the process, execute the PowerShell script below from Azure Cloud Shell.
+
+Log in to the Azure portal and execute the below PowerShell script in Azure Cloud Shell. Make the following replacements with the actual values in the script: Replace `<SubscriptionID>` with your Azure subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<AGName>` with the name of the Availability Group created on SQL Server. Replace `<DAGName>` with the name of the Distributed Availability Group created on SQL Server. Replace `<DatabaseName>` with the database replicated in the Availability Group on SQL Server. Replace `<SQLServerAddress>` with the address of the SQL Server. This can be a DNS name, a public IP, or even a private IP address, as long as the address provided can be resolved from the backend node hosting the SQL Managed Instance.
+
+```powershell
+# =============================================================================
+# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# =============================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter Availability Group name that was created on the SQL Server
+$AGName = "<AGName>"
+# Enter Distributed Availability Group name that was created on SQL Server
+$DAGName = "<DAGName>"
+# Enter database name that was placed in the Availability Group for replication
+$DatabaseName = "<DatabaseName>"
+# Enter SQL Server address
+$SQLServerAddress = "<SQLServerAddress>"
+
+# =============================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
+# =============================================================================
+# Log in to subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+# --
+# Build URI for the API call
+# --
+echo "Building API URI"
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+# --
+# Build API request body
+# --
+echo "Buildign API request body"
+$bodyFull = @"
+{
+ "properties":{
+ "TargetDatabase":"$DatabaseName",
+ "SourceEndpoint":"TCP://$SQLServerAddress`:5022",
+ "PrimaryAvailabilityGroupName":"$AGName",
+ "SecondaryAvailabilityGroupName":"$ManagedInstanceName",
+ }
+}
+"@
+echo $bodyFull
+# --
+# Get auth token and build the header
+# --
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+# --
+# Invoke API call
+# --
+echo "Invoking API call to have Managed Instance join DAG on SQL Server"
+$response = Invoke-WebRequest -Method PUT -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+echo $response
+```
+
+The result of this operation will be the time stamp of the successful execution of the request for Managed Instance link creation.
+
+## Verifying created SQL Managed Instance link
+
+To verify that the connection has been made between SQL Managed Instance and SQL Server, execute the following query on SQL Server. Keep in mind that the connection will not be established instantaneously upon executing the API call. It can take up to a minute for the DMV to start showing a successful connection. Keep refreshing the DMV until the connection is shown as CONNECTED for the SQL Managed Instance replica.
+
+```sql
+SELECT
+ r.replica_server_name AS [Replica],
+ r.endpoint_url AS [Endpoint],
+ rs.connected_state_desc AS [Connected state],
+ rs.last_connect_error_description AS [Last connection error],
+ rs.last_connect_error_number AS [Last connection error No],
+ rs.last_connect_error_timestamp AS [Last error timestamp]
+FROM
+ sys.dm_hadr_availability_replica_states rs
+ JOIN sys.availability_replicas r
+ ON rs.replica_id = r.replica_id
+```
+
+In addition, once the connection is established, the Databases view for the Managed Instance in SSMS will initially show the replicated database in a “Restoring…” state. This is because the initial seeding is in progress, moving the full backup of the database, which is followed by the catch-up replication. Once the seeding process is done, the database will no longer be in the “Restoring…” state. For small databases, seeding might finish quickly, so you might not see the initial “Restoring…” state in SSMS. You can observe seeding progress from the SQL Server side as sketched below.
+
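+For a rough view of seeding progress, the following minimal sketch queries the physical seeding DMV on SQL Server (the seeding source) while seeding is active; the exact columns available may vary by SQL Server version.
+
+```sql
+-- Monitor active seeding sessions on SQL Server.
+SELECT
+    local_database_name,
+    role_desc,
+    transferred_size_bytes,
+    database_size_bytes,
+    transfer_rate_bytes_per_second
+FROM
+    sys.dm_hadr_physical_seeding_stats
+```
+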
+> [!IMPORTANT]
+> The link will not work unless network connectivity exists between SQL Server and Managed Instance. To troubleshoot network connectivity, follow the steps described in [test bidirectional network connectivity](managed-instance-link-preparation.md#test-bidirectional-network-connectivity).
+
+> [!IMPORTANT]
+> Make regular backups of the log file on SQL Server. If the used log space reaches 100%, the replication to SQL Managed Instance stops until the space use is reduced. It is highly recommended that you automate log backups by setting up a daily job; a minimal sketch follows this note. For more details, see [Backup log files on SQL Server](link-feature-best-practices.md#take-log-backups-regularly).
+
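+For illustration, a log backup you could schedule as a job might look like the following minimal sketch; the backup path is a placeholder to adjust for your environment.
+
+```sql
+-- Back up the transaction log to keep log space use in check.
+BACKUP LOG [<DatabaseName>]
+TO DISK = N'C:\Backup\<DatabaseName>_log.trn';
+GO
+```
+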
+## Next steps
+
+For more information on the link feature, see the following:
+
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link with scripts to migrate database](./managed-instance-link-use-scripts-to-failover-database.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Management Operations Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/management-operations-monitor.md
The following table compares management operation monitoring options:
| Resource group deployments | Infinite<sup>1</sup> | No<sup>2</sup> | Visible | Visible | Not visible | Visible | Not visible | | Activity log | 90 days | No | Visible | Visible | Visible | Visible | Not visible | | Managed instance operations API | 24 hours | [Yes](management-operations-cancel.md) | Visible | Visible | Visible | Visible | Visible |
-| | | | | | | | |
+ <sup>1</sup> The deployment history for a resource group is limited to 800 deployments.
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/management-operations-overview.md
The following tables summarize operations and typical overall durations, based o
|First instance in an empty subnet|Virtual cluster creation|90% of operations finish in 4 hours.| |First instance of another hardware generation or maintenance window in a non-empty subnet (for example, first Premium series instance in a subnet with Standard series instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.| |Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance)|Virtual cluster resizing|90% of operations finish in 2.5 hours.|
-| | |
+ <sup>1</sup> Virtual cluster is built per hardware generation and maintenance window configuration.
The following tables summarize operations and typical overall durations, based o
|Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance hardware generation or maintenance window change (General Purpose)|- Virtual cluster creation or resizing<sup>1</sup>|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) .| |Instance hardware generation or maintenance window change (Business Critical)|- Virtual cluster creation or resizing<sup>1</sup><br>- Always On availability group seeding|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).|
-| | |
+ <sup>1</sup> Managed instance must be placed in a virtual cluster with the corresponding hardware generation and maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
The following tables summarize operations and typical overall durations, based o
|||| |Non-last instance deletion|Log tail backup for all databases|90% of operations finish in up to 1 minute.<sup>1</sup>| |Last instance deletion |- Log tail backup for all databases <br> - Virtual cluster deletion|90% of operations finish in up to 1.5 hours.<sup>2</sup>|
-| | |
+ <sup>1</sup> In case of multiple virtual clusters in the subnet, if the last instance in the virtual cluster is deleted, this operation will immediately trigger **asynchronous** deletion of the virtual cluster.
azure-sql Replication Transactional Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/replication-transactional-overview.md
The key components in transactional replication are the **Publisher**, **Distrib
| **Distributor** | No | Yes| | **Pull subscriber** | No | Yes| | **Push Subscriber**| Yes | Yes|
-| &nbsp; | &nbsp; | &nbsp; |
+ The **Publisher** publishes changes made on some tables (articles) by sending the updates to the Distributor. The publisher can be an Azure SQL Managed Instance or a SQL Server instance.
There are different [types of replication](/sql/relational-databases/replication
| [**Peer-to-peer**](/sql/relational-databases/replication/transactional/peer-to-peer-transactional-replication) | No | No| | [**Bidirectional**](/sql/relational-databases/replication/transactional/bidirectional-transactional-replication) | No | Yes| | [**Updatable subscriptions**](/sql/relational-databases/replication/transactional/updatable-subscriptions-for-transactional-replication) | No | No|
-| &nbsp; | &nbsp; | &nbsp; |
+ ### Supportability Matrix
There are different [types of replication](/sql/relational-databases/replication
| SQL Server 2014 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>| SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | | SQL Server 2012 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>SQL Server 2012 <br/> | SQL Server 2016 <br/> SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | | SQL Server 2008 R2 <br/> SQL Server 2008 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 <br/> |
-| &nbsp; | &nbsp; | &nbsp; |
+ ## When to use
azure-sql Create Configure Managed Instance Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-powershell.md
This script uses some of the following commands. For more information about used
| [Set-AzRouteTable](/powershell/module/az.network/Set-AzRouteTable) | Sets the goal state for a route table. | | [New-AzSqlInstance](/powershell/module/az.sql/New-AzSqlInstance) | Creates a managed instance. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group, including all nested resources. |
-|||
+ ## Next steps
azure-sql Service Tiers Managed Instance Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/service-tiers-managed-instance-vcore.md
For more details, review [resource limits](resource-limits.md).
|**Read-only replicas**| 0 built-in <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) | 1 built-in, included in price <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) | |**Pricing/billing**| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged. |**Discount models**| [Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-|||
+ > [!NOTE] > For more information on the Service Level Agreement (SLA), see [SLA for Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
The key features of SQL Managed Instance are shown in the following table:
| Built-in Integration Service (SSIS) | No - SSIS is a part of [Azure Data Factory PaaS](../../data-factory/tutorial-deploy-ssis-packages-azure.md) | | Built-in Analysis Service (SSAS) | No - SSAS is separate [PaaS](../../analysis-services/analysis-services-overview.md) | | Built-in Reporting Service (SSRS) | No - use [Power BI paginated reports](/power-bi/paginated-reports/paginated-reports-report-builder-power-bi) instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host [SSRS catalog databases](/sql/reporting-services/install-windows/ssrs-report-server-create-a-report-server-database#database-server-version-requirements) for a reporting server installed on Azure Virtual Machine, using SQL Server authentication. |
-|||
+ ## vCore-based purchasing model
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
To implement the incoming trust-based authentication flow, first ensure that the
|Azure tenant. | | |Azure subscription under the same Azure AD tenant you plan to use for authentication.| | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ ## Create and configure the Azure AD Kerberos Trusted Domain Object
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
There is no AD to Azure AD set up required for enabling software running on Azur
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. | |Azure AD tenant. | | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ ## Configure group policy
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
The following prerequisites are required to implement the modern interactive aut
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. | |Azure AD tenant. | | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ See [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md) for steps to enable this authentication flow.
The following prerequisites are required to implement the incoming trust-based a
|Azure tenant. | | |Azure subscription under the same Azure AD tenant you plan to use for authentication.| | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ See [How to set up Windows Authentication for Azure Active Directory with the incoming trust based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md) for instructions on enabling this authentication flow.
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
We recommend the following migration tools:
| [Azure Migrate](../../../migrate/how-to-create-azure-sql-assessment.md) | This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates. | |[Data Migration Assistant](/sql/dma/dma-migrateonpremsqltosqldb)|This desktop tool from Microsoft provides seamless assessments of SQL Server and single-database migrations to Azure SQL Database (both schema and data). </br></br>The tool can be installed on a server on-premises or on your local machine that has connectivity to your source databases. The migration process is a logical data movement between objects in the source and target databases.| |[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-azure-sql.md)|This Azure service can migrate SQL Server databases to Azure SQL Database through the Azure portal or automatically through PowerShell. Database Migration Service requires you to select a preferred Azure virtual network during provisioning to ensure connectivity to your source SQL Server databases. You can migrate single databases or at scale. |
-| | |
+ The following table lists alternative migration tools:
The following table lists alternative migration tools:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)|The [bulk copy program (bcp) tool](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL database. </br></br> For high-speed bulk copy operations to move data to Azure SQL Database, you can use the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) to maximize transfer speed by taking advantage of parallel copy tasks.| |[Azure Data Factory](../../../data-factory/connector-azure-sql-database.md)|The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to Azure SQL Database by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to Azure SQL Database.| |[SQL Data Sync](../../database/sql-data-sync-data-sql-server-sql-database.md)|SQL Data Sync is a service built on Azure SQL Database that lets you synchronize selected data bidirectionally across multiple databases, both on-premises and in the cloud.</br>Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL Database or SQL Server.|
-| | |
+ ## Compare migration options
The following table compares the migration options that we recommend:
|||| |[Data Migration Assistant](/sql/dma/dma-migrateonpremsqltosqldb) | - Migrate single databases (both schema and data). </br> - Can accommodate downtime during the data migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migration activity performs data movement between database objects (from source to target), so we recommend that you run it during off-peak times. </br> - Data Migration Assistant reports the status of migration per database object, including the number of rows migrated. </br> - For large migrations (number of databases or size of database), use Azure Database Migration Service.| |[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-azure-sql.md)| - Migrate single databases or at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-powershell.md). </br> - Time to complete migration depends on database size and the number of objects in the database. </br> - Requires the source database to be set as read-only. |
-| | | |
+ The following table compares the alternative migration options:
The following table compares the alternative migration options:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations. </br> - Can accommodate downtime. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target. </br> - The file formats and data types used in the export or import need to be consistent with table schemas. | |[Azure Data Factory](../../../data-factory/connector-azure-sql-database.md)| - Migrate and/or transform data from source SQL Server databases. </br> - Merging data from multiple sources of data to Azure SQL Database is typically for business intelligence (BI) workloads. | - Requires creating data movement pipelines in Data Factory to move data from source to destination. </br> - [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. | |[SQL Data Sync](../../database/sql-data-sync-data-sql-server-sql-database.md)| - Synchronize data between source and target databases.</br> - Suitable to run continuous sync between Azure SQL Database and on-premises SQL Server in a bidirectional flow. | - Azure SQL Database must be the hub database for sync with an on-premises SQL Server database as a member database.</br> - Compared to transactional replication, SQL Data Sync supports bidirectional data sync between on-premises and Azure SQL Database. </br> - Can have a higher performance impact, depending on the workload.|
-| | | |
+ ## Feature interoperability
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
We recommend the following migration tools:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
-| | |
+ The following table lists alternative migration tools:
The following table compares the migration options that we recommend:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). </br> - Time to complete migration depends on database size and is affected by backup and restore time. </br> - Sufficient downtime might be required. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases. </br> - Quick and easy migration without a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. </br> - Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
-| | | |
+ The following table compares the alternative migration options:
The following table compares the alternative migration options:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations. </br> - Can accommodate downtime. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target. </br> - The file formats and data types used in the export or import need to be consistent with table schemas. | |[Import Export Wizard/BACPAC](../../database/database-import.md)| - Migrate individual line-of-business application databases. </br>- Suited for smaller databases. </br> Does not require a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | </br> - Requires downtime because data needs to be exported at the source and imported at the destination. </br> - The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors. </br> - Time taken to export a database with a large number of objects can be significantly higher. | |[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| - Migrate and/or transform data from source SQL Server databases.</br> - Merging data from multiple sources of data to Azure SQL Managed Instance is typically for business intelligence (BI) workloads. </br> - Requires creating data movement pipelines in Data Factory to move data from source to destination. </br> - [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. |
-| | | |
+ ## Feature interoperability
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
The following table details all available methods to migrate your SQL Server dat
| **[Database Migration Assistant (DMA)](/sql/dma/dma-overview)** | SQL Server 2005| SQL Server 2008 SP4| [Azure VM storage limit](../../../index.yml) | The [DMA](/sql/dma/dma-overview) assesses SQL Server on-premises and then seamlessly upgrades to later versions of SQL Server or migrates to SQL Server on Azure VMs, Azure SQL Database or Azure SQL Managed Instance. <br /><br /> Should not be used on Filestream-enabled user databases.<br /><br /> DMA also includes capability to migrate [SQL and Windows logins](/sql/dma/dma-migrateserverlogins) and assess [SSIS Packages](/sql/dma/dma-assess-ssis). <br /><br /> **Automation & scripting**: [Command line interface](/sql/dma/dma-commandline) | | **[Detach and attach](../../virtual-machines/windows/migrate-to-vm-from-sql-server.md#detach-and-attach-from-a-url)** | SQL Server 2008 SP4 | SQL Server 2014 | [Azure VM storage limit](../../../index.yml) | Use this method when you plan to [store these files using the Azure Blob storage service](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure) and attach them to an instance of SQL Server on an Azure VM, particularly useful with very large databases or when the time to backup and restore is too long. <br /><br /> **Automation & scripting**: [T-SQL](/sql/relational-databases/databases/detach-a-database#TsqlProcedure) and [AzCopy to Blob storage](../../../storage/common/storage-use-azcopy-v10.md)| |**[Log shipping](sql-server-to-sql-on-azure-vm-individual-databases-guide.md#migrate)** | SQL Server 2008 SP4 (Windows Only) | SQL Server 2008 SP4 (Windows Only) | [Azure VM storage limit](../../../index.yml) | Log shipping replicates transactional log files from on-premises on to an instance of SQL Server on an Azure VM. <br /><br /> This provides minimal downtime during failover and has less configuration overhead than setting up an Always On availability group. <br /><br /> **Automation & scripting**: [T-SQL](/sql/database-engine/log-shipping/log-shipping-tables-and-stored-procedures) |
-| | | | | |
+ &nbsp; &nbsp;
azure-sql Availability Group Clusterless Workgroup Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-clusterless-workgroup-configure.md
For reference, the following parameters are used in this article, but can be mod
| **Listener** | AGListener (10.0.0.7) | | **DNS suffix** | ag.wgcluster.example.com | | **Work group name** | AGWorkgroup |
-| &nbsp; | &nbsp; |
+ ## Set a DNS suffix
azure-sql Availability Group Manually Configure Prerequisites Tutorial Multi Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial-multi-subnet.md
To assign additional secondary IPs to the VMs, follow these steps:
| **Name** |windows-cluster-ip | availability-group-listener | | **Allocation** | Static | Static | | **IP address** | 10.38.2.10 | 10.38.2.11 |
- | | | |
Now you are ready to join the **corp.contoso.com** domain.
azure-sql Availability Group Quickstart Template Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-quickstart-template-configure.md
This article describes how to use the Azure quickstart templates to partially au
| | | | [sql-vm-ag-setup](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-ag-setup) | Creates the Windows failover cluster and joins the SQL Server VMs to it. | | [sql-vm-aglistener-setup](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-aglistener-setup) | Creates the availability group listener and configures the internal load balancer. This template can be used only if the Windows failover cluster was created with the **101-sql-vm-ag-setup** template. |
- | &nbsp; | &nbsp; |
+ Other parts of the availability group configuration must be done manually, such as creating the availability group and creating the internal load balancer. This article provides the sequence of automated and manual steps.
Adding SQL Server VMs to the *SqlVirtualMachineGroups* resource group bootstraps
| **Cloud Witness Name** | A new Azure storage account that will be created and used for the cloud witness. You can modify this name. | | **\_artifacts Location** | This field is set by default and should not be modified. | | **\_artifacts Location SaS Token** | This field is intentionally left blank. |
- | &nbsp; | &nbsp; |
+ 1. If you agree to the terms and conditions, select the **I Agree to the terms and conditions stated above** check box. Then select **Purchase** to finish deployment of the quickstart template. 1. To monitor your deployment, either select the deployment from the **Notifications** bell icon in the top navigation banner or go to **Resource Group** in the Azure portal. Select **Deployments** under **Settings**, and choose the **Microsoft.Template** deployment.
You just need to create the internal load balancer. In step 4, the **101-sql-vm-
| **Subscription** |If you have multiple subscriptions, this field might appear. Select the subscription that you want to associate with this resource. It's normally the same subscription as all the resources for the availability group. | | **Resource group** |Select the resource group that the SQL Server instances are in. | | **Location** |Select the Azure location that the SQL Server instances are in. |
- | &nbsp; | &nbsp; |
+ 6. Select **Create**.
To configure the internal load balancer and create the availability group listen
| **Existing Subnet** | The name of the internal subnet of your SQL Server VMs (for example: *default*). You can determine this value by going to **Resource Group**, selecting your virtual network, selecting **Subnets** in the **Settings** pane, and copying the value under **Name**. | | **Existing Internal Load Balancer** | The name of the internal load balancer that you created in step 3. | | **Probe Port** | The probe port that you want the internal load balancer to use. The template uses 59999 by default, but you can change this value. |
- | &nbsp; | &nbsp; |
+ 1. If you agree to the terms and conditions, select the **I Agree to the terms and conditions stated above** check box. Select **Purchase** to finish deployment of the quickstart template. 1. To monitor your deployment, either select the deployment from the **Notifications** bell icon in the top navigation banner or go to **Resource Group** in the Azure portal. Select **Deployments** under **Settings**, and choose the **Microsoft.Template** deployment.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes-whats-new.md
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| Changes | Details | | | | | **Security best practices** | The [SQL Server VM security best practices](security-considerations-best-practices.md) have been rewritten and refreshed! |
-| &nbsp; | &nbsp; |
## January 2022
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| Changes | Details | | | | | **Migrate with distributed AG** | It's now possible to migrate your database(s) from a [standalone instance](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-standalone-instance.md) of SQL Server or an [entire availability group](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-ag.md) over to SQL Server on Azure VMs using a distributed availability group! See the [prerequisites](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-prerequisites.md) to get started. |
-| &nbsp; | &nbsp; |
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **HADR content refresh** | We've refreshed and enhanced our high availability and disaster recovery (HADR) content! There's now an [Overview of the Windows Server Failover Cluster](hadr-windows-server-failover-cluster-overview.md), as well as a consolidated [how-to configure quorum](hadr-cluster-quorum-configure-how-to.md) for SQL Server VMs. Additionally, we've enhanced the [cluster best practices](hadr-cluster-best-practices.md) with more comprehensive setting recommendations adopted to the cloud.| | **Migrate high availability to VM** | Azure Migrate brings support to lift and shift your entire high availability solution to SQL Server on Azure VMs! Bring your [availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) or your [failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to SQL Server VMs using Azure Migrate today! | **Performance best practices refresh** | We've rewritten, refreshed, and updated the performance best practices documentation, splitting one article into a series that contain: [a checklist](performance-guidelines-best-practices-checklist.md), [VM size guidance](performance-guidelines-best-practices-vm-size.md), [Storage guidance](performance-guidelines-best-practices-storage.md), and [collecting baseline instructions](performance-guidelines-best-practices-collect-baseline.md). |
-| &nbsp; | &nbsp; |
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Configure ag in portal** | It is now possible to [configure your availability group via the Azure portal](availability-group-azure-portal-configure.md). This feature is currently in preview and being deployed so if your desired region is unavailable, check back soon. | | **Automatic extension registration** | You can now enable the [Automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature to automatically register all SQL Server VMs already deployed to your subscription with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md). This applies to all existing VMs, and will also automatically register all SQL Server VMs added in the future. | | **DNN for AG** | You can now configure a [distributed network name (DNN) listener)](availability-group-distributed-network-name-dnn-listener-configure.md) for SQL Server 2019 CU8 and later to replace the traditional [VNN listener](availability-group-overview.md#connectivity), negating the need for an Azure Load Balancer. |
-| &nbsp; | &nbsp; |
+ ## 2019
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Named instance supportability** | You can now use the [SQL Server IaaS extension](sql-server-iaas-agent-extension-automate-management.md#installation) with a named instance, if the default instance has been uninstalled properly. | | **Portal enhancement** | The Azure portal experience for deploying a SQL Server VM has been revamped to improve usability. For more information, see the brief [quickstart](sql-vm-create-portal-quickstart.md) and more thorough [how-to guide](create-sql-vm-portal.md) to deploy a SQL Server VM.| | **Portal improvement** | It's now possible to change the licensing model for a SQL Server VM from pay-as-you-go to bring-your-own-license by using the [Azure portal](licensing-model-azure-hybrid-benefit-ahb-change.md#change-license-model).|
-| **Simplification of availability group deployment to a SQL Server VM through the Azure CLI** | It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the [Azure CLI](/cli/azure/sql/vm?view=azure-cli-2018-03-01-hybrid&preserve-view=true) to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see [Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM](./availability-group-az-commandline-configure.md). |
-| &nbsp; | &nbsp; |
+| **Simplification of availability group deployment to a SQL Server VM through the Azure CLI** | It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the [Azure CLI](/cli/azure/sql/vm?view=azure-cli-2018-03-01-hybrid&preserve-view=true) to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see [Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM](./availability-group-az-commandline-configure.md). |
## 2018
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Automatic registration to the SQL IaaS Agent extension** | SQL Server VMs deployed after this month are automatically registered with the new SQL IaaS Agent extension. SQL Server VMs deployed before this month still need to be manually registered. For more information, see [Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).| |**New SQL IaaS Agent extension** | A new resource provider (Microsoft.SqlVirtualMachine) provides better management of your SQL Server VMs. For more information on registering your VMs, see [Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md). | |**Switch licensing model** | You can now switch between the pay-per-usage and bring-your-own-license models for your SQL Server VM by using the Azure CLI or PowerShell. For more information, see [How to change the licensing model for a SQL Server virtual machine in Azure](licensing-model-azure-hybrid-benefit-ahb-change.md). |
-| &nbsp; | &nbsp; |
+ ## Additional resources
azure-sql Failover Cluster Instance Prepare Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-prepare-vm.md
To assign additional secondary IPs to the VMs, follow these steps:
| **Name** |windows-cluster-ip | FCI-network-name | | **Allocation** | Static | Static | | **IP address** | 10.38.2.10 | 10.38.2.11 |
- | | | |
+
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
The following table provides a summary of the recommended caching policies based
|**Transaction log disk**|Set the caching policy to `None` for disks hosting the transaction log. There is no performance benefit to enabling caching for the Transaction log disk, and in fact having either `Read-only` or `Read/Write` caching enabled on the log drive can degrade performance of the writes against the drive and decrease the amount of cache available for reads on the data drive. | |**Operating OS disk** | The default caching policy is `Read/write` for the OS drive. <br/> It is not recommended to change the caching level of the OS drive. | | **tempdb**| If tempdb cannot be placed on the ephemeral drive `D:\` due to capacity reasons, either resize the virtual machine to get a larger ephemeral drive or place tempdb on a separate data drive with `Read-only` caching configured. <br/> The virtual machine cache and ephemeral drive both use the local SSD, so keep this in mind when sizing as tempdb I/O will count against the cached IOPS and throughput virtual machine limits when hosted on the ephemeral drive.|
-| | |
+ > [!IMPORTANT] > Changing the cache setting of an Azure disk detaches and reattaches the target disk. When changing the cache setting for a disk that hosts SQL Server data, log, or application files, be sure to stop the SQL Server service along with any other related services to avoid data corruption.
azure-sql Sql Agent Extension Manually Register Vms Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk.md
The report is generated as a `.txt` file named `RegisterSqlVMScriptReport<Timest
| Number of VMs failed to register due to error | Count of virtual machines that failed to register due to some error. The details of the error can be found in the log file. | | Number of VMs skipped as the VM or the guest agent on VM is not running | Count and list of virtual machines that could not be registered because either the virtual machine or the guest agent on the virtual machine wasn't running. These can be retried once the virtual machine or guest agent has been started. Details can be found in the log file. | | Number of VMs skipped as they are not running SQL Server on Windows | Count of virtual machines that were skipped as they are not running SQL Server or are not a Windows virtual machine. The virtual machines are listed in the format `SubscriptionID, Resource Group, Virtual Machine`. |
-| &nbsp; | &nbsp; |
+ ### Log
backup Backup Azure Policy Supported Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-policy-supported-skus.md
Title: Supported VM SKUs for Azure Policy description: 'An article describing the supported VM SKUs (by Publisher, Image Offer and Image SKU) which are supported for the built-in Azure Policies provided by Backup' Previously updated : 11/08/2019 Last updated : 03/15/2022+++ # Supported VM SKUs for Azure Policy
MicrosoftWindowsServer | WindowsServer | Windows Server 2019 Datacenter (zh-cn)
MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-smalldisk MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-with-Containers-smalldisk MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1803-with-Containers-smalldisk
+MicrosoftWindowsServer | WindowsServer | Windows Server 2019 Datacenter Gen 2 (2019-datacenter-gensecond)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter - Gen 2 (2022-datacenter-g2)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter (2022-datacenter)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter: Azure Edition - Gen 2 (2022-datacenter-azure-edition)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter: Azure Edition - Gen 2 (2022-datacenter-azure-edition-smalldisk)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter: Azure Edition Core - Gen 2 (2022-datacenter-azure-edition-core)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter: Azure Edition Core - Gen 2 (2022-datacenter-azure-edition-core-smalldisk)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter - Gen 2 (2022-datacenter-smalldisk-g2)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter - Gen 1 (2022-datacenter-smalldisk)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter Server Core - Gen 2 (2022-datacenter-core-g2)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter Server Core - Gen 1 (2022-datacenter-core)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter Server Core - Gen 2 (2022-datacenter-core-smalldisk-g2)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter Server Core - Gen 1 (2022-datacenter-core-smalldisk)
MicrosoftWindowsServerHPCPack | WindowsServerHPCPack | All Image SKUs MicrosoftSQLServer | SQL2016SP1-WS2016 | All Image SKUs MicrosoftSQLServer | SQL2016-WS2016 | All Image SKUs
Canonical | UbuntuServer | 16.04-LTS
Canonical | UbuntuServer | 16.04.0-LTS Canonical | UbuntuServer | 18.04-DAILY-LTS Canonical | UbuntuServer | 18.04-LTS
+Canonical | UbuntuServer | 20.04-LTS
Oracle | Oracle-Linux | 6.8, 6.9, 6.10, 7.3, 7.4, 7.5, 7.6 OpenLogic | CentOS | 6.X, 7.X OpenLogic | CentOS-LVM | 6.X, 7.X
backup Backup Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-encryption.md
Azure Backup includes encryption on two levels:
- **Infrastructure-level encryption**: In addition to encrypting your data in the Recovery Services vault using customer-managed keys, you can also choose to have an additional layer of encryption configured on the storage infrastructure. This infrastructure encryption is managed by the platform. Together with encryption at rest using customer-managed keys, it allows two-layer encryption of your backup data. Infrastructure encryption can only be configured if you first choose to use your own keys for encryption at rest. Infrastructure encryption uses platform-managed keys for encrypting data. - **Encryption specific to the workload being backed up** - **Azure virtual machine backup**: Azure Backup supports backup of VMs with disks encrypted using [platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys), as well as [customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) owned and managed by you. In addition, you can also back up your Azure Virtual machines that have their OS or data disks encrypted using [Azure Disk Encryption](backup-azure-vms-encryption.md#encryption-support-using-ade). ADE uses BitLocker for Windows VMs, and DM-Crypt for Linux VMs, to perform in-guest encryption.
+ - **TDE-enabled database backup is supported**. To restore a TDE-encrypted database to another SQL Server instance, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). Backup compression for TDE-enabled databases is available for SQL Server 2016 and newer versions, but at a lower transfer size, as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593).
## Next steps
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
The following required roles for your resources.
## <a name="connect"></a>Connect to a VM
-You can use the [Connection steps](#steps) in the section below to easily connect to your VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus). You can also use any of the [VM connection articles](#articles) to connect to a VM.
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
+ ### <a name="steps"></a>Connection steps [!INCLUDE [Connection steps](../../includes/bastion-vm-connect.md)]
-#### <a name="articles"></a>Connect to VM articles
+### <a name="audio"></a>To enable audio output
## <a name="ip"></a>Remove VM public IP address
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
description: Learn how to deploy Azure Bastion using CLI
Previously updated : 03/02/2022 Last updated : 03/14/2022 # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
This section helps you deploy Azure Bastion using Azure CLI.
## <a name="connect"></a>Connect to a VM
-You can use any of the following articles to connect to a VM that's located in the virtual network to which you deployed Bastion. You can also use the [Connection steps](#steps) in the section below. Some connection types require the [Standard SKU](configuration-settings.md#skus).
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
### <a name="steps"></a>Connection steps [!INCLUDE [Connection steps](../../includes/bastion-vm-connect.md)]
+### <a name="audio"></a>To enable audio output
++ ## <a name="ip"></a>Remove VM public IP address Azure Bastion doesn't use the public IP address to connect to the client VM. If you don't need the public IP address for your VM, you can disassociate the public IP address. See [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Azure Bastion is a PaaS service that's maintained for you, not a bastion host th
* 3389 for Windows VMs * 22 for Linux VMs
-
- > [!NOTE]
- > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
- >
+
+> [!NOTE]
+> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+>
### <a name="values"></a>Example values
When the Bastion deployment is complete, the screen changes to the **Connect** p
:::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot of RDP connection." lightbox="./media/quickstart-host-portal/connected.png":::
+### <a name="audio"></a>To enable audio output
++ ## <a name="remove"></a>Remove VM public IP address [!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following step if you want to connect to a virtual machine scale set.
+In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following steps if you want to copy and paste to your VM.
> [!div class="nextstepaction"]
-> [Connect to a virtual machine scale set using Azure Bastion](bastion-connect-vm-scale-set.md)
+> [Copy and paste to a Windows VM](bastion-vm-copy-paste.md)
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Previously updated : 02/28/2022 Last updated : 03/14/2022
This is the public IP address of the Bastion host resource on which RDP/SSH will
1. At the bottom of the page, select **Create**. 1. You'll see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
-## Connect to a VM
+## <a name="connect"></a>Connect to a VM
+
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
++
+### <a name="steps"></a>Connection steps
[!INCLUDE [Connect to a VM](../../includes/bastion-vm-connect.md)]
-### To enable audio output
+### <a name="audio"></a>To enable audio output
[!INCLUDE [Enable VM audio output](../../includes/bastion-vm-audio.md)]
-## Remove VM public IP address
+## <a name="ip"></a>Remove VM public IP address
[!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
Title: Troubleshooting the Anomaly Detector Multivariate API
+ Title: Troubleshoot the Anomaly Detector multivariate API
-description: Learn how to remediate common error codes when using the Anomaly Detector API
+description: Learn how to remediate common error codes when you use the Azure Anomaly Detector multivariate API.
keywords: anomaly detection, machine learning, algorithms
-# Troubleshooting the multivariate API
+# Troubleshoot the multivariate API
-This article provides guidance on how to troubleshoot and remediate common error messages when using the multivariate API.
+This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure Cognitive Services Anomaly Detector multivariate API.
## Multivariate error codes
-### Common Errors
+The following tables list multivariate error codes.
-| Error Code | HTTP Error Code | Error Message | Comment |
+### Common errors
+
+| Error code | HTTP error code | Error message | Comment |
| -- | | - | |
-| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers | Please add your APIM subscription ID in the header. Example header: `{"apim-subscription-id": <Your Subscription ID>}` |
-| `FileNotExist` | 400 | File \<source> does not exist. | Please check the validity of your blob shared access signature (SAS). Make sure that it has not expired. |
-| `InvalidBlobURL` | 400 | | Your blob shared access signature (SAS) is not a valid SAS. |
-| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service is not allowed to write the data to the blob encrypted by a Customer Managed Key (CMK). Either remove CMK or grant access to our service again. Please refer to [this page](../../encryption/cognitive-services-encryption-keys-portal.md) for more details. |
+| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers. | Add your APIM subscription ID in the header. An example header is `{"apim-subscription-id": <Your Subscription ID>}`. |
+| `FileNotExist` | 400 | File \<source> does not exist. | Check the validity of your blob shared access signature. Make sure that it hasn't expired. |
+| `InvalidBlobURL` | 400 | | Your blob shared access signature isn't a valid shared access signature. |
+| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service isn't allowed to write the data to the blob encrypted by a customer-managed key. Either remove the customer-managed key or grant access to our service again. For more information, see [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../encryption/cognitive-services-encryption-keys-portal.md). |
| `StorageReadError` | 403 | | Same as `StorageWriteError`. |
-| `UnexpectedError` | 500 | | Please contact us with detailed error information. You could take the support options from [this document](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com) |
-
+| `UnexpectedError` | 500 | | Contact us with detailed error information. You could take the support options from [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com). |
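
A minimal sketch of sending a request with the `apim-subscription-id` header described in the table above, assuming Python with the `requests` package; the endpoint URL, API path, and subscription ID are placeholders rather than values confirmed by this article:

```python
# Sketch: attach the apim-subscription-id header so requests don't fail
# with SubscriptionNotInHeaders. Endpoint and ID below are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {
    "apim-subscription-id": "<Your Subscription ID>",  # from the table above
    "Content-Type": "application/json",
}

# Assumed multivariate path for the v1.1 preview; verify against your resource.
response = requests.get(
    f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models",
    headers=HEADERS,
)
print(response.status_code, response.json())
```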
-### Train a Multivariate Anomaly Detection Model
+### Train a multivariate anomaly detection model
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| | | | |
-| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Please delete unused models before training a new model |
-| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train 5 models concurrently. Please train a new model after previous models have completed their training process. |
-| `InvalidJsonFormat` | 400 | Invalid json format. | Training request is not a valid JSON. |
-| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Please check the value of `'alignMode'` which should be either `'Inner'` or `'Outer'` (case sensitive). |
-| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'` and it cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Please check the value of `'fillNAMethod'`. You may refer to [this section](./best-practices-multivariate.md#optional-parameters-for-training-api) for more details. |
-| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. You may refer to [this section](./best-practices-multivariate.md#optional-parameters-for-training-api) for more details. |
-| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request has not specified a value for the `'source'` field. Example: `{"source": <Your Blob SAS>}`. |
-| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request has not specified a value for the `'startTime'` field. Example: `{"startTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidTimestampFormat` | 400 | Invalid Timestamp format. `<timestamp>` is not a valid format. | The format of timestamp in the request body is not correct. You may try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
-| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request has not specified a value for the `'startTime'` field. Example: `{"endTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | `'slidingWindow'` must be an integer between 28 and 2880 (inclusive). |
-
-### Get Multivariate Model with Model ID
-
-| Error Code | HTTP Error Code | Error Message | Comment |
+| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Delete unused models before you train a new model. |
+| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train five models concurrently. Train a new model after previous models have completed their training process. |
+| `InvalidJsonFormat` | 400 | Invalid JSON format. | Training request isn't a valid JSON. |
+| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'`. | Check the value of `'alignMode'`, which should be either `'Inner'` or `'Outer'` (case sensitive). |
+| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'`. It cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Check the value of `'fillNAMethod'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request hasn't specified a value for the `'source'` field. An example is `{"source": <Your Blob SAS>}`. |
+| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"startTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidTimestampFormat` | 400 | Invalid timestamp format. The `<timestamp>` format is not a valid format. | The format of timestamp in the request body isn't correct. Try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
+| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request hasn't specified a value for the `'endTime'` field. An example is `{"endTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | The `'slidingWindow'` field must be an integer between 28 and 2880 (inclusive). |
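
As the `InvalidTimestampFormat` row above suggests, you can validate timestamps locally with pandas before you submit a training request. A short sketch, assuming Python with pandas installed:

```python
import pandas as pd

# Verify each timestamp parses before submitting the training request;
# pandas raises a ValueError subclass for unparseable values.
timestamps = ["2021-01-01T00:00:00Z", "2021/01/01 00:05", "not-a-timestamp"]
for ts in timestamps:
    try:
        print(ts, "->", pd.to_datetime(ts))
    except ValueError as err:
        print(ts, "is invalid:", err)
```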
+
+### Get a multivariate model with a model ID
+
+| Error code | HTTP error code | Error message | Comment |
| | | - | |
-| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID does not exist. Please check the model ID in the request URL. |
+| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID doesn't exist. Check the model ID in the request URL. |
-### List Multivariate Models
+### List multivariate models
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| | | - | |
-|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top … | Please check whether the values for the two parameters are numerical. $skip and $top are used to list the models with pagination. Because the API only returns 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
+|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top. | Check whether the values for the two parameters are numerical. The values $skip and $top are used to list the models with pagination. Because the API only returns the 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
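
A sketch of listing models with the `$skip` and `$top` parameters from the table above, again assuming Python with `requests`; the endpoint, API path, and subscription ID are placeholders:

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"apim-subscription-id": "<Your Subscription ID>"}

# $skip and $top must be numeric, or the call fails with
# InvalidRequestParameterError. This requests the second page of 10 models.
params = {"$skip": 10, "$top": 10}
response = requests.get(
    f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models",  # assumed path
    headers=HEADERS,
    params=params,
)
print(response.status_code, response.json())
```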
-### Anomaly Detection with a Trained Model
+### Anomaly detection with a trained model
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| -- | | | |
-| `ModelNotExist` | 404 | The model does not exist. | The model used for inference does not exist. Please check the model ID in the request URL. |
-| `ModelFailed` | 400 | Model failed to be trained. | The model is not successfully trained. Please get detailed information by getting the model with model ID. |
-| `ModelNotReady` | 400 | The model is not ready yet. | The model is not ready yet. Please wait for a while until the training process completes. |
-| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit (2GB currently). Please use less data for inference. |
+| `ModelNotExist` | 404 | The model does not exist. | The model used for inference doesn't exist. Check the model ID in the request URL. |
+| `ModelFailed` | 400 | Model failed to be trained. | The model wasn't trained successfully. Get detailed information by retrieving the model with the model ID. A polling sketch follows this table. |
+| `ModelNotReady` | 400 | The model is not ready yet. | The model isn't ready yet. Wait for a while until the training process completes. |
+| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit, which is currently 2 GB. Use less data for inference. |
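
To avoid `ModelNotReady`, you can poll the model until training completes before you run inference. A sketch, assuming Python with `requests`; the endpoint, API path, model ID, and the `modelInfo.status` values are assumptions to verify against your resource:

```python
import time
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"apim-subscription-id": "<Your Subscription ID>"}
MODEL_ID = "<Your Model ID>"  # placeholder

# Poll until the model leaves its training state; READY/FAILED are assumed
# status values corresponding to ModelNotReady/ModelFailed above.
while True:
    model = requests.get(
        f"{ENDPOINT}/anomalydetector/v1.1-preview/multivariate/models/{MODEL_ID}",
        headers=HEADERS,
    ).json()
    status = model.get("modelInfo", {}).get("status")
    if status == "READY":
        break  # safe to run inference now
    if status == "FAILED":
        raise RuntimeError(f"Training failed: {model}")
    time.sleep(10)  # wait before checking again
```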
-### Get Detection Results
+### Get detection results
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| - | | -- | |
-| `ResultNotExist` | 404 | The result does not exist. | The result per request does not exist. Either inference has not completed or result has expired (7 days). |
+| `ResultNotExist` | 404 | The result does not exist. | The result per request doesn't exist. Either inference hasn't completed or the result has expired. The expiration time is seven days. |
-### Data Processing Errors
+### Data processing errors
-The following error codes do not have associated HTTP Error codes.
+The following error codes don't have associated HTTP error codes.
-| Error Code | Error Message | Comment |
+| Error code | Error message | Comment |
| | | |
-| `NoVariablesFound` | No variables found. Please check that your files are organized as per instruction. | No csv files could be found from the data source. This is typically caused by wrong organization of files. Please refer to the sample data for the desired structure. |
+| `NoVariablesFound` | No variables found. Check that your files are organized as per instruction. | No CSV files could be found from the data source. This error is typically caused by incorrect organization of files. See the sample data for the desired structure. |
| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. | | `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
-| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable was not in the training data but appeared in the inference data. |
-| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single csv file \<filename> exceeds the limit. Please train with less data. |
-| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. You may refer to \<error messages> for more details or verify with `pd.read_csv(filename)` in a local environment. |
-| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each csv file must have two columns with names **timestamp** and **value** (case sensitive). |
-| `VariableParseError` | Variable \<variable> parse \<error message> error. | Cannot process the \<variable> due to runtime errors. Please refer to the \<error message> for more details or contact us with the \<error message>. |
-| `MergeDataFailed` | Failed to merge data. Please check data format. | Data merge failed. This is possibly due to wrong data format, organization of files, etc. Please refer to the sample data for the current file structure. |
-| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Please verify the data. |
-| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Please verify the data. |
-| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Please reduce the size of input data. |
-| `NoData` | There is no effective data | There is no data to train/inference after processing. Please check the start time and end time. |
-| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. (Currently no limit on processed data.) |
-| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window which is \<sliding window size>. | The minimum number of data points for inference is the size of sliding window. Try to provide more data for inference. |
+| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable wasn't in the training data but appeared in the inference data. |
+| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single CSV file \<filename> exceeds the limit. Train with less data. |
+| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. For more information, see the \<error messages> or verify with `pd.read_csv(filename)` in a local environment. |
+| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each CSV file must have two columns with the names **timestamp** and **value** (case sensitive). |
+| `VariableParseError` | Variable \<variable> parse \<error message> error. | Can't process the \<variable> because of runtime errors. For more information, see the \<error message> or contact us with the \<error message>. |
+| `MergeDataFailed` | Failed to merge data. Check data format. | Data merge failed. This error is possibly because of the wrong data format or the incorrect organization of files. See the sample data for the current file structure. |
+| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Verify the data. |
+| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Verify the data. |
+| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Reduce the size of input data. |
+| `NoData` | There is no effective data. | There's no data to train/inference after processing. Check the start time and end time. |
+| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. Currently, there's no limit on processed data. |
+| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window, which is \<sliding window size>. | The minimum number of data points for inference is the size of the sliding window. Try to provide more data for inference. |
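For reference, a minimal sketch of the per-variable CSV layout that the `FileColumnsNotExist` and `NoVariablesFound` rows above describe; the file name and sample values here are illustrative, and the column names are case sensitive:

```bash
# Create a sample variable file with the required timestamp and value columns.
cat > sensor_1.csv <<'EOF'
timestamp,value
2021-01-01T00:00:00Z,1.0
2021-01-01T00:01:00Z,1.5
2021-01-01T00:02:00Z,0.9
EOF

# Verify the file parses, mirroring the local check suggested for ReadingFileError
# (assumes pandas is installed).
python -c "import pandas as pd; print(pd.read_csv('sensor_1.csv').head())"
```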
cognitive-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/customize-pronunciation.md
Title: Create structured text data
+ Title: Create phonetic pronunciation data
description: Use phonemes to customize pronunciation of words in Speech-to-Text.
Last updated 03/01/2022
-# Create structured text data
+# Create phonetic pronunciation data
Custom speech allows you to provide different pronunciations for specific words by using the Universal Phone Set. The Universal Phone Set (UPS) is a machine-readable phone set that is based on the International Phonetic Alphabet (IPA). The IPA is used by linguists worldwide and is accepted as a standard.
See the sections in this article for the Universal Phone Set for each locale.
- [Upload your data](how-to-custom-speech-upload-data.md)
- [Inspect your data](how-to-custom-speech-inspect-data.md)
-- [Train your model](how-to-custom-speech-train-model.md)
+- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally available |
| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 1.12.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.0.0 | Generally available |
## Prerequisites
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+Starting in container version 2.0.0, select customers can run neural-text-to-speech containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+
# [Speech-to-text](#tab/stt)

To run the standard speech-to-text container, execute the following `docker run` command:
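For context, a typical invocation follows the pattern below; this is a sketch with placeholder values, assuming the standard speech-to-text image from the Microsoft Container Registry:

```bash
# Run the standard speech-to-text container; {ENDPOINT_URI} and {API_KEY}
# are placeholders for your Speech resource's billing endpoint and key.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```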
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available |
| [Speech Service API][sp-containers-tts] | **Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-text-to-speech)) | Converts text to natural-sounding speech. | Generally available |
| [Speech Service API][sp-containers-ctts] | **Custom Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-text-to-speech)) | Converts text to natural-sounding speech using a custom model. | Gated preview |
-| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. |
+| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> The container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview |

### Vision containers
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release notes for `v1.12.0`:
+Release notes for `v2.0.0`:
**Features**
-* Support `am-et-amehaneural` and `am-et-mekdesneural` and `so-so-muuseneural` and `so-so-ubaxneural`.
+* Support for using containers in [disconnected environments](disconnected-containers.md).
+* Support `ar-bh-lailaneural` and `ar-eg-salmaneural` and `ar-eg-shakirneural` and `ar-sa-hamedneural` and `ar-sa-zariyahneural`.
+* `es-MX-Dalia` model upgrade.
| Image Tags | Notes |
||:|
| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `1.12.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.12.0-amd64-en-us-arianeural`. |
+| `2.0.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.0.0-amd64-en-us-arianeural`. |
-| v1.12.0 Locales and voices | Notes |
+| v2.0.0 Locales and voices | Notes |
|-|:|
| `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. |
| `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
+| `ar-bh-lailaneural` | Container image with the `ar-BH` locale and `ar-BH-Lailaneural` voice. |
+| `ar-eg-salmaneural` | Container image with the `ar-EG` locale and `ar-EG-Salmaneural` voice. |
+| `ar-eg-shakirneural` | Container image with the `ar-EG` locale and `ar-EG-Shakirneural` voice. |
+| `ar-sa-hamedneural` | Container image with the `ar-SA` locale and `ar-SA-Hamedneural` voice. |
+| `ar-sa-zariyahneural` | Container image with the `ar-SA` locale and `ar-SA-Zariyahneural` voice. |
| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. |
| `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
Release notes for `v1.12.0`:
# [Previous version](#tab/previous)
+Release notes for `v1.12.0`:
+
+**Features**
+* Support `am-et-amehaneural` and `am-et-mekdesneural` and `so-so-muuseneural` and `so-so-ubaxneural`.
+
Release notes for `v1.11.0`:

**Features**
Release notes for `v1.4.0`:
Release notes for `v1.3.0`:
* The Neural Text-to-speech container is now generally available.
+| v1.12.0 Locales and voices | Notes |
+|-|:|
+| `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. |
+| `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
+| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. |
+| `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
+| `de-ch-lenineural` | Container image with the `de-CH` locale and `de-CH-Lenineural` voice. |
+| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
+| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
+| `en-au-natashaneural` | Container image with the `en-AU` locale and `en-AU-NatashaNeural` voice. |
+| `en-au-williamneural` | Container image with the `en-AU` locale and `en-AU-WilliamNeural` voice. |
+| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
+| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
+| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
+| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
+| `en-gb-sonianeural` | Container image with the `en-GB` locale and `en-GB-SoniaNeural` voice. |
+| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
+| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
+| `es-es-alvaroneural` | Container image with the `es-ES` locale and `es-ES-AlvaroNeural` voice. |
+| `es-es-elviraneural` | Container image with the `es-ES` locale and `es-ES-ElviraNeural` voice. |
+| `es-mx-dalianeural` | Container image with the `es-MX` locale and `es-MX-DaliaNeural` voice. |
+| `es-mx-jorgeneural` | Container image with the `es-MX` locale and `es-MX-JorgeNeural` voice. |
+| `fr-ca-antoineneural` | Container image with the `fr-CA` locale and `fr-CA-AntoineNeural` voice. |
+| `fr-ca-jeanneural` | Container image with the `fr-CA` locale and `fr-CA-JeanNeural` voice. |
+| `fr-ca-sylvieneural` | Container image with the `fr-CA` locale and `fr-CA-SylvieNeural` voice. |
+| `fr-fr-deniseneural` | Container image with the `fr-FR` locale and `fr-FR-DeniseNeural` voice. |
+| `fr-fr-henrineural` | Container image with the `fr-FR` locale and `fr-FR-HenriNeural` voice. |
+| `hi-in-madhurneural` | Container image with the `hi-IN` locale and `hi-IN-MadhurNeural` voice. |
+| `hi-in-swaraneural` | Container image with the `hi-IN` locale and `hi-IN-Swaraneural` voice. |
+| `it-it-diegoneural` | Container image with the `it-IT` locale and `it-IT-DiegoNeural` voice. |
+| `it-it-elsaneural` | Container image with the `it-IT` locale and `it-IT-ElsaNeural` voice. |
+| `it-it-isabellaneural` | Container image with the `it-IT` locale and `it-IT-IsabellaNeural` voice. |
+| `ja-jp-keitaneural` | Container image with the `ja-JP` locale and `ja-JP-KeitaNeural` voice. |
+| `ja-jp-nanamineural` | Container image with the `ja-JP` locale and `ja-JP-NanamiNeural` voice. |
+| `ko-kr-injoonneural` | Container image with the `ko-KR` locale and `ko-KR-InJoonNeural` voice. |
+| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. |
+| `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. |
+| `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
+| `so-so-muuseneural` | Container image with the `so-SO` locale and `so-SO-Muuseneural` voice. |
+| `so-so-ubaxneural` | Container image with the `so-SO` locale and `so-SO-Ubaxneural` voice. |
+| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. |
+| `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. |
+| `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
+| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
+| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
+| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
+| `zh-cn-xiaochenneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoChenNeural` voice. |
+| `zh-cn-xiaohanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoHanNeural` voice. |
+| `zh-cn-xiaomoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoMoNeural` voice. |
+| `zh-cn-xiaoqiuneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoQiuNeural` voice. |
+| `zh-cn-xiaoruineural` | Container image with the `zh-CN` locale and `zh-CN-XiaoRuiNeural` voice. |
+| `zh-cn-xiaoshuangneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoShuangNeural` voice.|
+| `zh-cn-xiaoxuanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoXuanNeural` voice. |
+| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
+| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
+
| Image Tags | Notes |
||:|
| `1.11.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.11.0-amd64-en-us-arianeural`. |
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Previously updated : 01/20/2022 Last updated : 03/11/2022
Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs completely disconnected from the internet. Currently, the following containers can be run in this manner:

* [Speech to Text (Standard)](../speech-service/speech-container-howto.md?tabs=stt)
+* [Neural Text to Speech](../speech-service/speech-container-howto.md?tabs=ntts)
* [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer)
* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md)
* Azure Cognitive Service for Language
After you have configured the container, use the next section to run the contain
## Run the container in a disconnected environment > [!IMPORTANT]
-> If you're using the Translator or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
+> If you're using the Translator, Neural text-to-speech, or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
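As a hedged sketch of that formatting (the image name, mount paths, and container directories are placeholders, not confirmed values for any specific container):

```bash
# Run a licensed container fully offline: the previously downloaded license file
# is mounted in, and usage records are written to the output mount.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  -v {LICENSE_MOUNT} \
  -v {OUTPUT_MOUNT} \
  {IMAGE} \
  eula=accept \
  Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
  Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
```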
If you're using the [Translator container](../translator/containers/translator-h
-e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json
```
-#### Speech-to-text container
+#### Speech-to-text and Neural text-to-speech containers
-The [speech-to-text container](../speech-service/speech-container-howto.md?tabs=stt) provides two default directories, `license` and `output`, by default for writing the license file and billing log at runtime. When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
+The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide a default directory for writing the license file and billing log at runtime. When you mount these directories to the container with the `docker run -v` command, make sure the local machine directory's ownership is set to `user:group nonroot:nonroot` before running the container.
Below is a sample command to set file/directory ownership.
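A minimal sketch, assuming `/host/license` and `/host/output` are the local directories you plan to mount with `docker run -v`:

```bash
# Give the container's nonroot user ownership of the mounted directories.
sudo chown -R nonroot:nonroot /host/license /host/output
```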
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Previously updated : 12/10/2021 Last updated : 03/15/2022
The model-version used in your API request will be included in the response obje
Use the table below to find which model versions are supported by each feature.
-| Endpoint | Supported Versions | Latest Generally Available version | Latest preview version |
+| Feature | Supported versions | Latest Generally Available version | Latest preview version |
|--||||
| Custom text classification | `2021-11-01-preview` | | `2021-11-01-preview` |
| Conversational language understanding | `2021-11-01-preview` | | `2021-11-01-preview` |
-| Sentiment Analysis and opinion mining | `2019-10-01`, `2020-04-01`, `2021-10-01-preview` | `2020-04-01` | `2021-10-01-preview` |
+| Sentiment Analysis and opinion mining | `2019-10-01`, `2020-04-01`, `2021-10-01` | `2021-10-01` | |
| Language Detection | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` | |
| Entity Linking | `2019-10-01`, `2020-02-01` | `2020-02-01` | |
| Named Entity Recognition (NER) | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15`,`2021-06-01` | `2021-06-01` | |
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md
Previously updated : 03/01/2022 Last updated : 03/15/2022
If you're using the REST API, to get Opinion Mining in your results, you must in
By default, sentiment analysis will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
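For illustration, a hedged sketch of pinning a model version through the REST API's `model-version` query parameter; the resource endpoint, key, and version value are placeholders:

```bash
# Request sentiment analysis with an explicit model version.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?model-version=2021-10-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","language":"en","text":"The food was great!"}]}'
```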
-### Using a preview model version
+<!--### Using a preview model version
To use a preview model version in your API calls, you must specify the model version using the model version parameter. For example, if you were sending a request using Python:
See the reference documentation for more information.
* [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics.textanalyticsclient#analyze-sentiment-documents-kwargs-)
* [Java](/java/api/com.azure.ai.textanalytics.models.analyzesentimentoptions.setmodelversion#com_azure_ai_textanalytics_models_AnalyzeSentimentOptions_setModelVersion_java_lang_String_)
* [JavaScript](/javascript/api/@azure/ai-text-analytics/analyzesentimentoptions)
+-->
### Input languages
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 03/07/2022 Last updated : 03/15/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
+* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
+
+* [Question Answering](question-answering/overview.md): Active learning v2 incorporates a better clustering logic providing improved accuracy of suggestions. It considers user actions when suggestions are accepted or rejected to avoid duplicate suggestions and to improve query suggestions.
+
## December 2021

* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the Generally Available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
During a meeting, Communication Services users will be able to use core audio, v
Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a web application.
+## Chat storage
+
+During a Teams meeting, all chat messages sent by Teams users or Communication Services users are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Communication Services user in the meetings, there is also a copy of the most recently sent message that is stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. For more information, review the article [Region availability and data residency](/azure/communication-services/concepts/privacy).
+
+If the hosting Microsoft 365 organization has defined a retention policy that deletes chat messages for any of the Teams users in the meeting, then all copies of the most recently sent message that have been stored for Communication Services users will also be deleted in accordance with the policy. If there is not a retention policy defined, then the copies of the most recently sent message for all Communication Services users will be deleted after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams).
+
## Diagnostics and call analytics

After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./logging-and-diagnostics.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
cosmos-db Glowroot Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/glowroot-cassandra.md
Glowroot is an application performance management tool used to optimize and moni
* [Install JAVA (version 8) for Windows](https://developers.redhat.com/products/openjdk/download)

> [!NOTE]
> Note that there are certain known incompatible build targets with newer versions. If you already have a newer version of JAVA, you can still download JDK8.
-> If you have newer JAVA installed in addition to JDK8: Set the %JAVA_HOME% variable in the local command prompt to target JDK8. This will only change java version for the current session and leave global machine settings intact.
+> If you have newer JAVA installed in addition to JDK8: Set the %JAVA_HOME% variable in the local command prompt to target JDK8. This will only change the Java version for the current session and leave global machine settings intact.
* [Install maven](https://maven.apache.org/download.cgi)
* Verify successful installation by running: `mvn --version`
cosmos-db Load Data Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/load-data-table.md
Title: 'Tutorial: Java app to load sample data into a Cassandra API table in Azure Cosmos DB'
-description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a java application.
+description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a Java application.
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
You can configure storage redundancy for periodic backup mode at the time of acc
## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. Currently you can change them backup options from Azure portal only.
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point in time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, it's applied to all the containers within that account. You can modify these settings using the Azure portal as described below, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
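For example, a hedged sketch using the Azure CLI; the account name, resource group, and values are placeholders, with the interval given in minutes and the retention in hours:

```bash
# Change the periodic backup interval to 8 hours and the retention to 16 hours.
az cosmosdb update \
  --name <cosmos-account-name> \
  --resource-group <resource-group> \
  --backup-interval 480 \
  --backup-retention 16
```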
If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
Previously updated : 05/13/2021 Last updated : 03/15/2022
Azure Cosmos DB supports data replication natively, so moving data from one regi
## Migrate Azure Cosmos DB account metadata
-Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
+Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
+
+> [!IMPORTANT]
+> It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account.
A near-zero-downtime migration for the SQL API requires the use of the [change feed](change-feed.md) or a tool that uses it. If you're migrating the MongoDB API, the Cassandra API, or another API, or to learn more about options for migrating data between accounts, see [Options to migrate your on-premises or cloud data to Azure Cosmos DB](cosmosdb-migrationchoices.md).
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 02/23/2022 Last updated : 03/07/2022 ms.devlang: csharp
The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions`
`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
-`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the diagnostics property of the response.
+`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
CosmosClient client = cosmosClientBuilder.Build();
### Exceptions
-Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosClientException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. All the complete information is serialized when `ToString()` is used:
+Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. All the complete information is serialized when `ToString()` is used:
```csharp
-catch (CosmosClientException ex)
+catch (CosmosException ex)
{
    HttpStatusCode statusCode = ex.StatusCode;
    CosmosDiagnostics diagnostics = ex.Diagnostics;
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Before following the instructions in this article, make sure that you have the f
## Step 2: Set up your .NET project
-Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*. The `--langVersion` parameter sets the *LangVersion* property in the created project file.
Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following `dotnet new` command to create a new app with the name *bulk-import-demo*.
```bash
- dotnet new console ΓÇôlangVersion:8 -n bulk-import-demo
+ dotnet new console -n bulk-import-demo
```

Change your directory to the newly created app folder. You can build the application with:
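For example, assuming the *bulk-import-demo* name used above:

```bash
# Move into the project folder and build it to confirm it compiles.
cd bulk-import-demo
dotnet build
```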
You can now proceed to the next tutorial:
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.

* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 11/17/2021 Last updated : 03/14/2022
Before you begin, make sure that the person you're requesting billing ownership
- For an Enterprise Agreement, the person must be an Account Owner.
- For a Microsoft Online Subscription Agreement, the person must be an Account Administrator.
+> [!NOTE]
+> To perform a transfer, the destination account must be a paid account with a valid form of payment. For example, if the destination is an Azure free account, you can upgrade it to a pay-as-you-go Azure plan under a Microsoft Customer Agreement. Then you can make the transfer.
+
When you're ready, use the following instructions. You can also follow along with the following video that outlines each step of the process.

>[!VIDEO https://www.youtube.com/embed/gfiUI2YLsgc]
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
Previously updated : 02/08/2022 Last updated : 03/10/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
When you copy data from and to SQL Server, the following mappings are used from
| xml |String |

>[!NOTE]
-> For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string in a SQL query.
+> For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string in a SQL query.
+>
+> When copying data from SQL Server using Azure Data Factory, the bit data type is mapped to the Boolean interim data type. If you have data that needs to be kept as the bit data type, use queries with [T-SQL CAST or CONVERT](/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15&preserve-view=true).
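As an illustration, a hypothetical source query that keeps a bit column as 0/1 rather than letting it map to Boolean; the server, database, table, and column names are placeholders:

```bash
# Cast the bit column to int in the source query used by the copy activity.
sqlcmd -S <server> -d <database> \
  -Q "SELECT Id, CAST(IsActive AS int) AS IsActive FROM dbo.MyTable;"
```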
## Lookup activity properties
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 10/01/2021 Last updated : 03/11/2022
This article provides suggestions to troubleshoot common problems with the FTP,
1. On the ADF portal, hover on the SFTP linked service, and open its payload by selecting the code button.
1. Add `"allowKeyboardInteractiveAuth": true` in the "typeProperties" section.
+### Unable to connect to SFTP because the key exchange algorithms provided by the SFTP server are not supported in ADF
+
+- **Symptoms**: You are unable to connect to SFTP via ADF and receive the following error message: `Failed to negotiate key exchange algorithm.`
+
+- **Cause**: The key exchange algorithms provided by the SFTP server are not supported in ADF. The key exchange algorithms supported by ADF are:
+ - diffie-hellman-group-exchange-sha256
+ - diffie-hellman-group-exchange-sha1
+ - diffie-hellman-group14-sha1
+ - diffie-hellman-group1-sha1
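One way to see which key exchange algorithms your server offers is a verbose SSH handshake; this sketch assumes an OpenSSH client and a placeholder host, and the server's advertised algorithms must overlap with the list above:

```bash
# Print the key exchange algorithms proposed during the handshake
# (debug output goes to stderr, so redirect it before filtering).
ssh -vv -o ConnectTimeout=5 user@your-sftp-server.example.com 2>&1 | grep -i "kex algorithms"
```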
+
## HTTP

### Error code: HttpFileFailedToRead
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
The following table describes these JSON properties:
|scripts.parameter.type |The data type of the parameter. The type is logical type and follows type mapping of each connector. |No |
|scripts.parameter.direction |The direction of the parameter. It can be Input, Output, InputOutput. The value is ignored if the direction is Output. ReturnValue type is not supported. Set the return value of SP to an output parameter to retrieve it. |No |
|scripts.parameter.size |The max size of the parameter. Only applies to Output/InputOutput direction parameter of type string/byte[]. |No |
-|scriptReference |The reference to a remotely stored script file. |No |
-|scriptReference.linkedServiceName |The linked service of the script location. |No |
-|scriptReference.path |The file path to the script file. Only a single file is supported. |No |
-|scriptReference.parameter |The array of parameters of the script. |No |
-|scriptReference.parameter.name |The name of the parameter. |No |
-|scriptReference.parameter.value |The value of the parameter. |No |
-|scriptReference.parameter.type |The data type of the parameter. The type is logical type and follows type mapping of each connector. |No |
-|scriptReference.parameter.direction |The direction of the parameter. It can be Input, Output, InputOutput. The value is ignored if the direction is Output. ReturnValue type is not supported. Set the return value of SP to an output parameter to retrieve it. |No |
-|scriptReference.parameter.size |The max size of the parameter. Only applies to types that can be variable size. |No |
|logSettings |The settings to store the output logs. If not specified, script log is disabled. |No |
|logSettings.logDestination |The destination of log output. It can be ActivityOutput or ExternalStore. Default: ActivityOutput. |No |
|logSettings.logLocationSettings |The settings of the target location if logDestination is ExternalStore. |No |
Sample output:
Inline scripts integrate well with Pipeline CI/CD since the script is stored as part of the pipeline metadata.
-### Script file reference
--
-If you have you a custom process to generate scripts and would like to reference it in the pipeline rather than use an in-line script, you cam specify the file path on a storage.
-
### Logging

:::image type="content" source="media/transform-data-using-script/logging-settings.png" alt-text="Screenshot showing the UI for the logging settings for a script.":::
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Your **Get started** page displays the various settings that are required to con
Follow these steps to configure the network for your device.
-1. In the local web UI of your device, go to the **Get started** page.
+1. In the local web UI of your device, go to the **Get started** page. On the **Set up a single node device** tile, select **Start**.
-2. On the **Network** tile, select **Configure**.
+ ![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Start button on the Set up a single node device tile is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/setup-type-single-node-1.png)
++
+2. On the **Network** tile, select **Needs setup**.
![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-1.png)
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
Azure Stack Edge Pro 2 has the following capabilities:
|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).|
|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource).|
|Specialized network functions|Use the Marketplace experience from Azure Network Function Manager to rapidly deploy network functions. The functions deployed on Azure Stack Edge include mobile packet core, SD-WAN edge, and VPN services. <br>For more information, see [What is Azure Network Function Manager? (Preview)](../network-function-manager/overview.md).|
-|Scale out file server|The device is available as a single node or a two-node cluster. For more information, see [What is clustering on Azure Stack Edge devices? (Preview)](azure-stack-edge-placeholder.md).|
+|Scale out file server|The device is available as a single node or a two-node cluster. For more information, see [What is clustering on Azure Stack Edge devices? (Preview)](azure-stack-edge-gpu-clustering-overview.md).|
<!--|ExpressRoute | Added security through ExpressRoute. Use peering configuration where traffic from local devices to the cloud storage endpoints travels over the ExpressRoute. For more information, see [ExpressRoute overview](../expressroute/expressroute-introduction.md).|-->
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-overview.md
Previously updated : 01/05/2022 Last updated : 03/14/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro R has the following capabilities:
|Edge compute workloads |Allows analysis, processing, filtering of data. Supports VMs and containerized workloads. <ul><li>For information on VM workloads, see [VM overview on Azure Stack Edge](azure-stack-edge-gpu-virtual-machine-overview.md).</li> <li>For containerized workloads, see [Kubernetes overview on Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md)</li></ul> |
|Accelerated AI inferencing| Enabled by an Nvidia T4 GPU. <br> For more information, see [GPU sharing on your Azure Stack Edge device](azure-stack-edge-gpu-sharing.md).|
|Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.|
-|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
+|Disconnected mode| Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios. For more information, see [Use Azure Stack Edge in disconnected mode](azure-stack-edge-gpu-disconnected-scenario.md).|
|Supported file transfer protocols |Support for standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Pro R system requirements](azure-stack-edge-gpu-system-requirements.md).|
|Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).|
|Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https*. <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 02/28/2022 Last updated : 03/15/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 03/09/2022 Last updated : 03/15/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve, m
Defender for Containers helps with the core aspects of container security:

-- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-prem / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Environment hardening through security recommendations](#environment-hardening-through-security-recommendations).
+- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-prem / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Hardening](#hardening).
- **Vulnerability assessment** - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service. Learn more in [Vulnerability assessment](#vulnerability-assessment).
- **Run-time threat protection for nodes and clusters** - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities. Learn more in [Run-time protection for Kubernetes nodes, clusters, and hosts](#run-time-protection-for-kubernetes-nodes-and-clusters).
+## Hardening
+
+### Continuous monitoring of your Kubernetes clusters - wherever they're hosted
+
+Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
+
+For Kubernetes clusters on EKS, you'll need to connect your AWS account to Microsoft Defender for Cloud via the environment settings page as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
+
+When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
++
+### Kubernetes data plane hardening
+
+For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
+
+With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure it to **enforce** the best practices and mandate them for future workloads.
+
+For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
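As a hypothetical illustration of that enforcement (the pod name and image are placeholders, and the exact denial message depends on the assigned policy):

```bash
# With a "no privileged containers" policy enforced by the Azure Policy add-on,
# this admission request would be rejected instead of creating the pod.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
spec:
  containers:
  - name: privileged-test
    image: nginx
    securityContext:
      privileged: true
EOF
```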
+
+Learn more in [Kubernetes data plane hardening](kubernetes-workload-protections.md).
++++
+## Vulnerability assessment
+
+### Scanning images in ACR registries
+
+Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries.
+
+There are four triggers for an image scan:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan**- This trigger has two modes:
+
+ - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+++
+### View vulnerabilities for running images
+
+Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+
+> [!NOTE]
+> There's no Defender profile for Windows; it's only available on Linux OS.
+
+The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
+
+This recommendation shows running images and their vulnerabilities based on the ACR image. Images that are deployed from a non-ACR registry won't be scanned, and will appear under the Not applicable tab.
++
+## Run-time protection for Kubernetes nodes and clusters
+
+Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
+
+Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+
+In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitors the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+
+The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
++ ## Architecture overview The architecture of the various elements involved in the full range of protections provided by Defender for Containers varies depending on where your Kubernetes clusters are hosted.
Defender for Containers protects your clusters whether they're running in:
- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS. > [!NOTE]
-> Defender for Containers' support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is a preview feature.
+> Defender for Containers' support for Arc-enabled Kubernetes clusters (AWS EKS, and GCP GKE) is a preview feature.
For high-level diagrams of each scenario, see the relevant tabs below.
The following describes the components necessary in order to receive the full pr
-## Environment hardening through security recommendations
-
-### Continuous monitoring of your Kubernetes clusters - wherever they're hosted
-
-Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
-
-For Kubernetes clusters on EKS, you'll need to connect your AWS account to Microsoft Defender for Cloud via the environment settings page as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
-
-When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
---
-### Environment hardening
-
-For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
-
-With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to **enforce** the best practices and mandate them for future workloads.
-
-For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-
-Learn more in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
----
-## Vulnerability assessment
-
-### Scanning images in ACR registries
-
-Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries.
-
-There are four triggers for an image scan:
--- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.--- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.--- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for container Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).--- **Continuous scan**- This trigger has two modes:-
- - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
-
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
-
-Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
---
-### View vulnerabilities for running images
-
-Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
-
-> [!NOTE]
-> There's no Defender profile for Windows, it's only available on Linux OS.
-
-The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
-
-This recommendation shows running images, and their vulnerabilities based on ACR image. Images that are deployed from a non ACR registry, won't be scanned, and will appear under the Not applicable tab.
--
-## Run-time protection for Kubernetes nodes and clusters
-
-Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
-
-Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
-
-The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
-- ## FAQ - Defender for Containers - [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)
No. There's no direct price increase. The new comprehensive Container security
### What are the options to enable the new plan at scale? We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
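As an illustration of assigning such a policy at subscription scope outside the portal, the Python sketch below calls the ARM policy assignment REST API directly. The policy definition GUID is a placeholder, not the real ID of **Configure Microsoft Defender for Containers to be enabled**; look it up under Azure Policy definitions before adapting anything like this:

```python
import requests

TOKEN = "<bearer-token>"              # assumption: a valid ARM bearer token
SUBSCRIPTION_ID = "<subscription-id>"
# Placeholder: substitute the real definition ID of the built-in policy.
POLICY_DEFINITION_ID = ("/providers/Microsoft.Authorization/policyDefinitions"
                        "/<defender-for-containers-policy-guid>")

scope = f"/subscriptions/{SUBSCRIPTION_ID}"
url = (f"https://management.azure.com{scope}/providers"
       "/Microsoft.Authorization/policyAssignments"
       "/enable-defender-for-containers?api-version=2021-06-01")

body = {
    "properties": {
        "displayName": "Configure Microsoft Defender for Containers to be enabled",
        "policyDefinitionId": POLICY_DEFINITION_ID,
    },
    # DeployIfNotExists policies need a managed identity (and a location for it).
    "identity": {"type": "SystemAssigned"},
    "location": "eastus",
}

response = requests.put(url, json=body,
                        headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
print("Created assignment:", response.json()["id"])
```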
+### Does Microsoft Defender for Containers support AKS with virtual machines?
+No. If your cluster is deployed on Azure Kubernetes Service (AKS) virtual machines, we don't recommend enabling the Microsoft Defender for Containers plan.
+### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
+No. AKS is a managed service, and manipulation of its IaaS resources isn't supported. The Log Analytics VM extension isn't needed and may result in additional charges.
## Next steps
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads
-description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations
+ Title: Kubernetes data plane hardening
+description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes data plane hardening security recommendations
Last updated 03/08/2022
-# Protect your Kubernetes workloads
+# Kubernetes data plane hardening
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes workload protection.
+This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes data plane hardening.
> [!TIP] > For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) of the recommendations reference table.
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
- Add the [Required FQDN/application rules for Azure policy](../aks/limit-egress-traffic.md#azure-policy). - (For non-AKS clusters) [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
-## Enable Kubernetes workload protection
+## Enable Kubernetes data plane hardening
-When you enable Microsoft Defender for Containers, Azure Kubernetes Service clusters, and Azure Arc enabled Kubernetes clusters (Preview) protection are both enabled by default. You can configure your Kubernetes workload protections, when you enable Microsoft Defender for Containers.
+When you enable Microsoft Defender for Containers, protection for both Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters (Preview) is turned on by default. You can configure Kubernetes data plane hardening when you enable Microsoft Defender for Containers.
**To enable Azure Kubernetes Service clusters and Azure Arc enabled Kubernetes clusters (Preview)**:
If you disabled any of the default protections when you enabled Microsoft Defend
## Deploy the add-on to specified clusters
-You can manually configure the Kubernetes workload add-on, or extension protection through the Recommendations page. This can be accomplished by remediating the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
+You can manually configure the Kubernetes data plane hardening add-on or extension through the Recommendations page, by remediating either the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` or the `Azure Policy extension for Kubernetes should be installed and enabled on your clusters` recommendation.
**To Deploy the add-on to specified clusters**:
For recommendations with parameters that need to be customized, you will need to
1. Open the **Parameters** tab and modify the values as required.
- :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Modifying the parameters for one of the recommendations in the Kubernetes workload protection bundle.":::
+ :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Modifying the parameters for one of the recommendations in the Kubernetes data plane hardening protection bundle.":::
1. Select **Review + save**.
spec:
## Next steps
-In this article, you learned how to configure Kubernetes workload protection.
+In this article, you learned how to configure Kubernetes data plane hardening.
For other related material, see the following pages:
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 03/10/2022 Last updated : 03/15/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Environment settings page (in preview)** (recommended) - This preview page provides a greatly improved, simpler, onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources: - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- - **Microsoft Defender for Containers** extends Defender for Cloud's container threat detection and advanced defenses to your **Amazon EKS clusters**.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud) table.
+ - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud).
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
Additional extensions should be enabled on Arc-connected machines. These extensi
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters.
+1. By default, the **Containers** plan is set to **On**. This is necessary for Defender for Containers to protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-eks#network-requirements) for the Defender for Containers plan.
> [!Note] > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). +
+ - (Optional) Select **Configure** to edit the configuration as required. If you choose to disable this configuration, the `Threat detection (control plane)` feature will be disabled. Learn more about [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ 1. Select **Next: Configure access**. 1. Download the CloudFormation template.
AWS Systems Manager is required for automating tasks across your AWS resources.
### Step 4. Complete Azure Arc prerequisites
-1. Make sure the appropriate [Azure resources providers](../azure-arc/servers/agent-overview.md#register-azure-resource-providers) are registered:
+1. Make sure the appropriate [Azure resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered (see the sketch after this list):
- Microsoft.HybridCompute - Microsoft.GuestConfiguration
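If you prefer to script this prerequisite, registering a resource provider is a single ARM REST call per namespace. A minimal Python sketch, assuming a valid ARM bearer token, might look like this:

```python
import requests

TOKEN = "<bearer-token>"              # assumption: a valid ARM bearer token
SUBSCRIPTION_ID = "<subscription-id>"

# Register each resource provider required by Azure Arc-enabled servers.
for namespace in ("Microsoft.HybridCompute", "Microsoft.GuestConfiguration"):
    url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
           f"/providers/{namespace}/register?api-version=2021-04-01")
    response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
    response.raise_for_status()
    # Registration is asynchronous; the state moves from 'Registering' to 'Registered'.
    print(namespace, response.json().get("registrationState"))
```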
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 03/09/2022 Last updated : 03/14/2022 zone_pivot_groups: connect-gcp-accounts
To protect your GCP-based resources, you can connect an account in two different
- **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources. - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
- - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more.
+ - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
Follow the steps below to create your GCP cloud connector.
1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
+1. (**Containers only**) Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
+ 1. Select the **Next: Configure access**. 1. Select **Copy**.
Microsoft Defender for Containers brings threat detection, and advanced defences
- Defender for Cloud recommendations, for per cluster installation, which will appear on the Microsoft Defender for Cloud's Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manual installation for [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md), and [extensions](../azure-arc/kubernetes/extensions.md).
+If you choose to disable all of the available configuration options, no agents or components will be deployed to your clusters. Learn more about [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ **To configure the Containers plan**: 1. Follow the steps to [Connect your GCP project](#connect-your-gcp-project).
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
A machine with Azure Arc-enabled servers becomes an Azure resource and - when yo
In addition, Azure Arc-enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported cloud operations](../azure-arc/servers/overview.md#supported-cloud-operations). > [!NOTE]
-> Defender for Cloud's auto-deploy tools for deploying the Log Analytics agent don't support machines running Azure Arc. When you've connected your machines using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections offered by Defender for Cloud:
+> Defender for Cloud's auto-deploy tools for deploying the Log Analytics agent work with machines running Azure Arc; however, this capability is currently in preview. When you've connected your machines using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections offered by Defender for Cloud:
> > - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1) > - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 03/08/2022 Last updated : 03/14/2022 # Archive for what's new in Defender for Cloud?
When the Azure Policy add-on for Kubernetes is installed on your Azure Kubernete
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#hardening).
> [!NOTE] > While the recommendations were in preview, they didn't render an AKS cluster resource unhealthy, and they weren't included in the calculations of your secure score. with this GA announcement these will be included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score. Remediate them wherever possible as described in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).
When you've installed the Azure Policy add-on for Kubernetes on your AKS cluster
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#hardening).
### Vulnerability assessment findings are now available in continuous export
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 03/08/2022 Last updated : 03/15/2022
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--| | Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers | |
-| VA | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| VA | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
| Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime Threat Detection | Agentless threat detection | AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime Threat Detection | Agent-based threat detection | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | AKS | GA | Γ£ô | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auto provisioning of Defender profile | AKS | GA | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection | Threat detection (control plane) | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection | Threat detection (workload) | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Collection of control plane threat data | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | - | - | - | - | - |
-| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | EKS | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | EKS | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
+| Runtime protection | Threat detection (control plane) | EKS | Preview | ✓ | Agentless | Defender for Containers |
+| Runtime protection | Threat detection (workload) | EKS | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | - | - | - | - | - |
-| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | GKE | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | GKE | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender DaemonSet | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Runtime protection | Threat detection (control plane) | GKE | Preview | ✓ | Agentless | Defender for Containers |
+| Runtime protection | Threat detection (workload) | GKE | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | ACR, Private ACR | Preview | Γ£ô | Agentless | Defender for Containers |
-| VA | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | ✓ (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for threat detection | Arc enabled K8s clusters | Preview | Γ£ô | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Γ£ô | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+| Runtime protection | Threat detection (control plane) | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
+| Runtime protection | Threat detection (workload) | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Aspect | Details | |--|--|
-| Kubernetes distributions and configurations | **Supported**<br> ΓÇó Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>ΓÇó [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> ΓÇó [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> ΓÇó [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>ΓÇó [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> ΓÇó [Kubernetes](https://kubernetes.io/docs/home/)<br> ΓÇó [AKS Engine](https://github.com/Azure/aks-engine)<br> ΓÇó [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> ΓÇó [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> ΓÇó [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> ΓÇó [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> ΓÇó Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> |
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> • Azure Kubernetes Service (AKS) Clusters without [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> |
<sup><a name="footnote1"></a>1</sup>The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br> <sup><a name="footnote2"></a>2</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
Title: Accelerate alert workflows description: Improve alert and incident workflows. Previously updated : 11/09/2021 Last updated : 03/10/2022
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules
-Use custom alert rules to more specifically pinpoint activity of interest to you.
-You can add custom alert rules based on:
+Add custom alert rules to pinpoint specific activity as needed for your organization, such as traffic for specific protocols, source or destination addresses, or a combination of parameters.
-- A category, for example a standard protocol, or port or file.
+For example, you might want to define an alert for an environment running MODBUS to detect any write command to a memory register on a specific IP address and Ethernet destination. Another example is an alert for any access to a specific IP address.
-- Traffic detections based proprietary protocols developed in a Horizon plugin. (Horizon Open Development Environment ODE).
+Use custom alert rule actions to instruct Defender for IoT to take specific actions when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
-- Source and destination addresses
+**To create a custom alert rule**:
-- A combination of protocol fields from all protocol layers. For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and ethernet destination, or an alert when any access is performed to a specific IP address.
+1. On the sensor console, select **Custom alert rules** > **+ Create rule**.
-If the sensor detects the activity described in the rule, the alert is sent.
+1. In the **Create custom alert rule** pane that shows on the right, define the following fields:
-You can also use alert rule actions to instruct Defender for IoT to:
+ - **Alert name**. Enter a meaningful name for the alert.
-- Allow users to access PCAP file from the alert.-- Assign an alert severity.-- Generate an event rather than alert. The detected information will appear in the event timeline.
+ - **Alert protocol**. Select the protocol you want to detect. In specific cases, select one of the following protocols:
+ - For a database data or structure manipulation event, select **TNS** or **TDS**.
+ - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type.
+ - For a package download event, select **HTTP**.
+ - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type.
-The alert message indicates that a user-defined rule triggered the alert.
+ To create rules that monitor for specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`.
+
+ - **Message**. Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message.
+ - **Direction**. Enter a source and/or destination IP address where you want to detect traffic.
-### Create custom alerts
+ - **Conditions**. Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format, as shown in the sketch after this list.
-**To create a custom alert rule:**
+ - **Detected**. Define a date and/or time range for the traffic you want to detect.
+ - **Action**. Define an action you want Defender for IoT to take automatically when the alert is triggered.
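For the address conversion mentioned in the **Conditions** field above, an online IP conversion tool works, or you can compute the decimal value yourself. A minimal, illustrative Python sketch (not part of the sensor console):

```python
def ip_to_decimal(address: str) -> int:
    """Convert a dotted-decimal IPv4 address to its decimal form."""
    octets = [int(part) for part in address.split(".")]
    if len(octets) != 4 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError(f"Invalid IPv4 address: {address}")
    value = 0
    for octet in octets:
        value = (value << 8) | octet  # shift in each octet, high byte first
    return value

print(ip_to_decimal("192.168.1.10"))  # 3232235786
```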
-1. Select **Custom Alerts** from the side menu of a sensor.
-
-1. Select **Create rule** (**+**).
+To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-alerts-rules.png" alt-text="Screenshot of the Create custom alert rules pane.":::
+Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the sensor console. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
-1. Define an alert name.
-1. Select protocol to detect.
-1. Define a message to display. Alert messages can contain alphanumeric characters you enter, as well as traffic variables detected. For example, include the detected source and destination addresses in the alert messages. Use { } to add variables to the message
-1. Select the engine that should detect the activity.
-1. Select the source and destination devices for the activity you want to detect.
+**To enable or disable custom alert rules**:
-#### Create rule conditions
-
-Define one or several rule conditions. Two categories of conditions can be created:
-
-**Condition based on unique values**
-
-Create conditions based on unique values associated with the category selected. Rule conditions can comprise one or several sets of fields, operators, and values. Create condition sets, by using AND.
-
-**To create a rule condition:**
-
-1. Select a **Variable**. Variables represent fields configured in the plugin.
-
-7. Select an **Operator**:
-
- - (==) Equal to
-
- - (!=) Not equal to
-
- - (>) Greater than
-
-
- - In Range
-
- - Not in Range
- - Same as (field X same as field Y)
-
- - (>=) Greater than or equal to
- - (<) Less than
-
- - (<=) Less than or equal to
-
-8. Enter a **Value** as a number. If the variable you selected is a MAC address or IP address, the value must be converted from a dotted-decimal address to decimal format. Use an IP address conversion tool, for example <https://www.ipaddressguide.com/ip>.
-
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-rule-conditions.png" alt-text="Screenshot of the Custom rule condition options.":::
-
-9. Select plus (**+**) to create a condition set.
-
-When the rule condition or condition set is met, the alert is sent. You will be notified if the condition logic is not valid.
-
-**Condition based on when activity took place**
-
-Create conditions based on when the activity was detected. In the Detected section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected:
-- any time throughout the day -- during working hours-- after working hours-- a specific time-
-Use the Define working hours option to instruct Defender for IoT working hours for your organization.
-
-#### Define rule actions
-
-The following actions can be defined for the rule:
--- Indicate if the rule triggers an **Alarm** or **Event**.-- Assign a severity level to the alert (Critical, Major, Minor, Warning).-- Indicate if the alert will include a PCAP file.-
-The rule is added to the **Customized Alerts Rules** page.
--
-### Managing customer alert rules
-
-Manage the rules you create from the Custom alert rules page, for example:
---- Review the last time the rule was triggered, the number of times the alert was triggered for the rule in the last week, or the last time the rule was modified.-- Enable or disable rules.-- Delete rules.-
-Select the checkbox next to multiple rules to perform a bulk enable/disable or delete.
-
-### Tracking changes to custom alert rules
-
-Changes made to custom alert rules are tracked in the event timeline. For example if a user changes a severity level, the protocol detected or any other rule parameter.
-
-**To view changes to the alert rule:**
-
-1. Navigate to the Event timeline page.
+You can disable custom alert rules to prevent them from running without deleting them altogether.
+In the **Custom alert rules** page, select one or more rules, and then select **Enable**, **Disable**, or **Delete** in the toolbar as needed.
## Next steps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The Defender for IoT sensor and on-premises management console update packages i
- [Enhanced sensor Overview page](#enhanced-sensor-overview-page) - [New support diagnostics log](#new-support-diagnostics-log) - [Alert updates](#alert-updates)
+- [Custom alert updates](#custom-alert-updates)
- [CLI command updates](#cli-command-updates) - [Update to version 22.1.x](#update-to-version-221x) - [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)
The sensor console's **Custom alert rules** page now provides:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog." lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
+For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
### CLI command updates The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
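As an illustration of the FTP option, the sketch below uses Python's standard `ftplib` to upload a file as the *cyberx_host* user. The host address, password, and file name are placeholders, and this assumes FTP access is configured in your environment:

```python
from ftplib import FTP

SENSOR_HOST = "<sensor-ip>"   # placeholder
PASSWORD = "<password>"       # placeholder

ftp = FTP(SENSOR_HOST)
ftp.login(user="cyberx_host", passwd=PASSWORD)
# Upload a local file over FTP.
with open("support-logs.tar.gz", "rb") as f:
    ftp.storbinary("STOR support-logs.tar.gz", f)
ftp.quit()
```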
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
Title: Enable browser access to lab virtual machines
-description: Learn how to connect to your virtual machines through a browser.
+ Title: Connect to lab virtual machines through Browser connect
+description: Learn how to connect to lab virtual machines (VMs) through a browser if Browser connect is enabled for the lab.
Previously updated : 10/29/2021 Last updated : 03/14/2022
-# Connect to your lab virtual machines through a browser
+# Connect to DevTest Labs VMs through a browser with Azure Bastion
-DevTest Labs integrates with [Azure Bastion](../bastion/index.yml), which enables you to connect to your lab virtual machines (VM) through a browser. Once **Browser connect** is enabled, lab users can access their virtual machines through a browser.
+This article describes how to connect to DevTest Labs virtual machines (VMs) through a browser by using [Azure Bastion](../bastion/index.yml). Azure Bastion provides secure remote desktop protocol (RDP) or secure shell (SSH) access without using public IP addresses or exposing RDP or SSH ports to the internet.
-In this how-to guide, you'll connect to a lab VM using **Browser connect**.
+> [!IMPORTANT]
+> The VM's lab must be in a [Bastion-configured virtual network](enable-browser-connection-lab-virtual-machines.md#option-1-connect-a-lab-to-an-azure-bastion-enabled-virtual-network) and have [Browser connect enabled](enable-browser-connection-lab-virtual-machines.md#connect-to-lab-vms-through-azure-bastion). For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
-## Prerequisites
+To connect to a lab VM through a browser:
-- A lab VM, with a [Bastion-configured virtual network and the **Browser connect** setting turned on](enable-browser-connection-lab-virtual-machines.md).
+1. In the [Azure portal](https://portal.azure.com), search for and select **DevTest Labs**.
-- A web browser configured to allow pop-ups from `https://portal.azure.com:443`.
+1. On the **DevTest Labs** page, select your lab.
-## Launch virtual machine in a browser
+1. On the lab's **Overview** page, select the VM you want to connect to from the list under **My virtual machines**.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. On the VM's **Overview** page, from the top menu, select **Browser connect**.
-1. Navigate to your lab in **DevTest Labs**.
+1. In the **Browser connect** pane, enter the username and password for the VM, and select whether you want the VM to open in a new browser window.
-1. Select a virtual machine.
+1. Select **Connect**.
-1. From the top menu, select **Browser connect**.
+ :::image type="content" source="./media/connect-virtual-machine-through-browser/lab-vm-browser-connect.png" alt-text="Screenshot of the V M Overview screen with the Browser connect button highlighted.":::
-1. In the **Browser connect** section, enter your credentials and then select **Connect**.
+> [!NOTE]
+> If you don't see **Browser connect** on the VM's top menu, the lab isn't set up for Browser connect. You can select **Connect** to connect via [RDP](connect-windows-virtual-machine.md) or [SSH](connect-linux-virtual-machine.md).
- :::image type="content" source="./media/connect-virtual-machine-through-browser/lab-vm-browser-connect.png" alt-text="Screenshot of browser connect button.":::
-
-## Next Steps
-
-[Add a VM to a lab in Azure DevTest Labs](devtest-lab-add-vm.md)
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
Title: Delete a lab or VM in a lab
-description: This article shows you how to delete a lab or delete a VM in a lab using the Azure portal(Azure DevTest Labs).
+ Title: Delete a lab virtual machine or a lab
+description: Learn how to delete a virtual machine from a lab or delete a lab in Azure DevTest Labs.
Previously updated : 01/24/2020 Last updated : 03/14/2022
-# Delete a lab or VM in a lab in Azure DevTest Labs
-This article shows you how to delete a lab or VM in a lab.
+# Delete labs or lab VMs in Azure DevTest Labs
-## Delete a lab
-When you delete a DevTest Labs instance from a resource group, the DevTest Labs service performs the following actions:
+This article shows you how to delete a virtual machine (VM) from a lab or delete a lab in Azure DevTest Labs.
+
+## Delete a VM from a lab
+
+When you create a VM in a lab, DevTest Labs automatically creates resources for the VM, like a disk, network interface, and public IP address, in a separate resource group. Deleting the VM deletes most of the resources created at VM creation, including the VM, network interface, and disk. However, deleting the VM doesn't delete:
+
+- Any resources you manually created in the VM's resource group.
+- The VM's key vault in the lab's resource group.
+- Any availability set, load balancer, or public IP address in the VM's resource group. These resources are shared by multiple VMs in a resource group.
+
+To delete a VM from a lab:
-- All the resources that were automatically created at the time of lab creation are automatically deleted. The resource group itself is not deleted. If you had manually created any resources this resource group, the service doesn't delete them. -- All VMs in the lab and resource groups associated with these VMs are automatically deleted. When you create a VM in a lab, the service creates resources (disk, network interface, public IP address, etc.) for the VM in a separate resource group. However, if you manually create any additional resources in these resource groups, the DevTest Labs service does not delete those resources and the resource group.
+1. On the lab's **Overview** page in the Azure portal, find the VM you want to delete in the list under **My virtual machines**.
-To delete a lab, do the following actions:
+1. Either:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All resource** from menu on the left, select **DevTest Labs** for the type of service, and select the lab.
+ - Select **More options** (**...**) next to the VM listing, and select **Delete** from the context menu.
+ ![Screenshot of Delete selected on the V M's context menu on the lab Overview page.](media/devtest-lab-delete-lab-vm/delete-vm-menu-in-list.png)
- ![Select your lab](media/devtest-lab-delete-lab-vm/select-lab.png)
-3. On the **DevTest Lab** page, click **Delete** on the toolbar.
+ or
- ![Delete button](media/devtest-lab-delete-lab-vm/delete-button.png)
-4. On the **Confirmation** page, enter the **name** of your lab, and select **Delete**.
+ - Select the VM name in the list, and then on the VM's **Overview** page, select **Delete** from the top menu.
+ ![Screenshot of the Delete button on the V M Overview page.](media/devtest-lab-delete-lab-vm/delete-from-vm-page.png)
- ![Confirm](media/devtest-lab-delete-lab-vm/confirm-delete.png)
-5. To see the status of the operation, select **Notifications** icon (Bell).
+1. On the **Are you sure you want to delete it?** page, select **Delete**.
- ![Notifications](media/devtest-lab-delete-lab-vm/delete-status.png)
+ ![Screenshot of the V M deletion confirmation page.](media/devtest-lab-delete-lab-vm/select-lab.png)
-
-## Delete a VM in a lab
-If I delete a VM in a lab, some of the resources (not all) that were created at the time of lab creation are deleted. The following resources are not deleted:
+1. To check deletion status, select the **Notifications** icon on the Azure menu bar.
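If you'd rather script the deletion than click through the portal, the same delete operation is exposed through the Azure Resource Manager REST API. The following Python sketch is illustrative only: the subscription, resource group, lab, and VM names are placeholders, and it assumes the `azure-identity` and `requests` packages are installed.

```python
# Illustrative sketch: delete a lab VM through the Azure Resource Manager
# REST API. All resource names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group-name>"
lab_name = "<lab-name>"
vm_name = "<vm-name>"

# Acquire an ARM token with whatever credential is available locally.
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

# DELETE on the lab VM resource; a lab itself lives one level up at
# .../providers/Microsoft.DevTestLab/labs/{lab-name}.
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.DevTestLab"
    f"/labs/{lab_name}/virtualmachines/{vm_name}?api-version=2018-09-15"
)
response = requests.delete(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()  # a 2xx status means the delete was accepted
print(response.status_code)
```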
-- Key vault in the main resource group
-- Availability set, load balancer, public IP address in the VM resource group. These resources are shared by multiple VMs in a resource group.
+## Delete a lab
+
+When you delete a lab from a resource group, DevTest Labs automatically deletes:
+
+- All VMs in the lab.
+- All resource groups associated with those VMs.
+- All resources that DevTest Labs automatically created during lab creation.
-Virtual machine, network interface, and disk associated with the VM are deleted.
+DevTest Labs doesn't delete the lab's resource group itself, and doesn't delete any resources you manually created in the lab's resource group.
-To delete a VM in a lab, do the following actions:
+> [!NOTE]
+> If you want to manually delete the lab's resource group, you must delete the lab first. You can't delete a resource group that has a lab in it.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All resource** from menu on the left, select **DevTest Labs** for the type of service, and select the lab.
+To delete a lab:
- ![Select your lab](media/devtest-lab-delete-lab-vm/select-lab.png)
-3. Select **... (ellipsis)** for the VM in the list of VMs, and select **Delete**.
+1. On the lab's **Overview** page in the Azure portal, select **Delete** from the top toolbar.
- ![Delete VM in menu](media/devtest-lab-delete-lab-vm/delete-vm-menu-in-list.png)
-4. On the **confirmation** dialog box, select **Ok**.
-5. To see the status of the operation, select **Notifications** icon (Bell).
+ ![Screenshot of the Delete button on the lab Overview page.](media/devtest-lab-delete-lab-vm/delete-button.png)
-To delete a VM from the **Virtual Machine page**, select **Delete** from the toolbar as shown in the following image:
+1. On the **Are you sure you want to delete it?** page, under **Type the lab name**, type the lab name, and then select **Delete**.
-![Delete VM from VM page](media/devtest-lab-delete-lab-vm/delete-from-vm-page.png)
+ ![Screenshot of the lab deletion confirmation page.](media/devtest-lab-delete-lab-vm/confirm-delete.png)
+1. To check deletion status, select the **Notifications** icon on the Azure menu bar.
+
+ ![Screenshot of the Notifications icon on the Azure menu bar.](media/devtest-lab-delete-lab-vm/delete-status.png)
## Next steps
-If you want to create a lab, see the following articles:
-- [Create a lab](devtest-lab-create-lab.md)
-- [Add a VM to the lab](devtest-lab-add-vm.md)
+- [Attach and detach data disks for lab VMs](devtest-lab-attach-detach-data-disk.md)
+- [Export or delete personal data](personal-data-delete-export.md)
+- [Move a lab to another region](how-to-move-labs.md)
+
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
The extending interface can't change any of the definitions of the parent interf
## Modeling best practices
+This section describes additional considerations and recommendations for modeling.
+
+### Use DTDL industry-standard ontologies
+
+If your solution is for a certain established industry (like smart buildings, smart cities, or energy grids), consider starting with a pre-existing set of models for your industry instead of designing your models from scratch. Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and encourage consistency and simplicity across industry solutions. You can read more about these ontologies, including how to use them and what ontologies are available now, in [What is an ontology?](concepts-ontologies.md).
+
+### Consider query implications
+ While designing models to reflect the entities in your environment, it can be useful to look ahead and consider the [query](concepts-query-language.md) implications of your design. You may want to design properties in a way that will avoid large result sets from graph traversal. You may also want to model relationships that will need to be answered in a single query as single-level relationships.
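For example, a relationship that you've modeled at a single level can be answered with a one-level `JOIN` in the query language. The following Python sketch, using the `azure-digitaltwins-core` SDK, assumes a hypothetical `contains` relationship, twin ID, and instance URL:

```python
# Illustrative sketch: a single-level relationship traversal with the
# Azure Digital Twins query language. The instance URL, relationship
# name, and twin ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance>.api.<region>.digitaltwins.azure.net",
    DefaultAzureCredential(),
)

# One JOIN answers "which rooms does floor1 directly contain?" in a
# single query, avoiding a multi-hop graph traversal.
query = (
    "SELECT room FROM DIGITALTWINS floor "
    "JOIN room RELATED floor.contains "
    "WHERE floor.$dtId = 'floor1'"
)
for twin in client.query_twins(query):
    print(twin)
```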
-### Validating models
+### Validate models
[!INCLUDE [Azure Digital Twins: validate models info](../../includes/digital-twins-validate.md)]
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-adopt.md
# Mandatory fields. Title: Adopting industry-standard ontologies
+ Title: Adopting DTDL-based industry ontologies
description: Learn about existing industry ontologies that can be adopted for Azure Digital Twins
#
-# Adopting an industry ontology
+# Adopting a DTDL industry ontology
-This article covers different sets of industry-standard ontologies that can be adopted to simplify solutions.
+Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and simplify solutions. This article presents the industry ontologies that are currently available.
-Because it can be easier to start with an open-source Digital Twins Definition Language (DTDL) ontology than from a blank page, Microsoft is partnering with domain experts to publish ontologies. These ontologies represent widely accepted industry conventions and support various customer use cases.
+## List of ontologies
-The result is a set of open-source DTDL-based ontologies, which learn from, build on, or directly use industry standards. The ontologies are designed to meet the needs of downstream developers, with the potential to be widely adopted and extended by the industry.
+| Industry | Ontology repository | Description | Learn more |
+| --- | --- | --- | --- |
+| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). |
+| Smart cities | [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities) | Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). | You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585). |
+| Energy grids | [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/) | This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market. | You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134). |
-At this time, Microsoft has worked with partners to develop ontologies for [smart buildings](#realestatecore-smart-building-ontology), [smart cities](#smart-cities-ontology), and [energy grids](#energy-grid-ontology). These ontologies provide common ground for modeling based on standards in these industries to avoid the need for reinvention.
-
-Each ontology is focused on an initial set of models. The ontology authors welcome you to contribute to extend the initial set of use cases and improve the existing models.
-
-## RealEstateCore smart building ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building).
-
-Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.
-
-This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it.
-
-To learn more about the ontology's structure and modeling conventions, how to use it, how to extend it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-building](https://github.com/Azure/opendigitaltwins-building).
-
-You can also read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794).
-
-## Smart cities ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities).
-
-Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). Apart from ETSI NGSI-LD, we've also evaluated Saref4City, CityGML, ISO, and others.
-
-To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-smartcities](https://github.com/Azure/opendigitaltwins-smartcities).
-
-You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585).
-
-## Energy grid ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/).
-
-This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market.
-
-To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-energygrid](https://github.com/Azure/opendigitaltwins-energygrid/).
-
-You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134).
+Each ontology is focused on an initial set of models. You can contribute to the ontologies by suggesting extensions or other improvements through the GitHub contribution process in each ontology repository.
## Next steps
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-extend.md
A portion of the hierarchy looks like the diagram below.
:::image type="content" source="media/concepts-ontologies-extend/real-estate-core-original.png" alt-text="Diagram illustrating part of the RealEstateCore space hierarchy. It shows elements for Space, Room, ConferenceRoom, and Office.":::
-For more information about the RealEstateCore ontology, see [Adopting industry-standard ontologies](concepts-ontologies-adopt.md#realestatecore-smart-building-ontology).
+For more information about the RealEstateCore ontology, see [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) on GitHub.
## Extending the RealEstateCore space hierarchy
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
Reading this series of articles will guide you in how to use your models in your
## Next steps

Read more about the strategies of adopting, converting, and authoring ontologies:
-* [Adopting industry-standard ontologies](concepts-ontologies-adopt.md)
+* [Adopting DTDL-based industry ontologies](concepts-ontologies-adopt.md)
* [Converting ontologies](concepts-ontologies-convert.md)
* [Manage DTDL models](how-to-manage-model.md)
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
You can think of these model definitions as a specialized vocabulary to describe
[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
-*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships.
+*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships. Here are some other capabilities of models:
* Models define semantic *relationships* between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
-* You can also specialize twins using model *inheritance*. One model can inherit from another.
+* You can specialize twins using model *inheritance*. One model can inherit from another, as shown in the sketch after this list.
+* You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
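To make the inheritance bullet concrete, here's a minimal sketch of a parent and child DTDL interface uploaded with the `azure-digitaltwins-core` Python SDK. The model IDs, property names, and instance URL are hypothetical.

```python
# Illustrative sketch: a minimal DTDL v2 model pair where ConferenceRoom
# inherits from Room through "extends". The model IDs are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

room = {
    "@id": "dtmi:example:Room;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "contents": [
        {"@type": "Property", "name": "temperature", "schema": "double"}
    ],
}
conference_room = {
    "@id": "dtmi:example:ConferenceRoom;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "extends": "dtmi:example:Room;1",  # inherits temperature from Room
    "contents": [
        {"@type": "Property", "name": "capacity", "schema": "integer"}
    ],
}

client = DigitalTwinsClient(
    "https://<your-instance>.api.<region>.digitaltwins.azure.net",
    DefaultAzureCredential(),
)
client.create_models([room, conference_room])  # upload both models together
```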
-DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
### Live execution environment
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
You can verify the twins that were created by running the following command, whi
Query
```
- You can now stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.

## Set up the sample function app
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
1. Select **Offline migration** as the migration mode.

> [!NOTE]
- > In the offline migration mode, the source SQL Server database is not available for read and write activity while database backups are restored on target Azure SQL Managed Instance. Application downtime needs to be considered till the migration completes.
+ > In the offline migration mode, the source SQL Server database should not be used for write activity while database backups are restored on target Azure SQL Managed Instance. Application downtime needs to be considered till the migration completes.
1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.

> [!NOTE]
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
1. Select **Online migration** as the migration mode.

> [!NOTE]
- > In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on target Azure SQL Managed Instance. Application downtime is limited to duration for the cutover at the end of migration.
+ > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on target Azure SQL Managed Instance. Application downtime is limited to duration for the cutover at the end of migration.
1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.

> [!NOTE]
> If your database backups are provided in an on-premises network share, DMS will require you to set up self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to set up self-hosted integration runtime.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
2. Select **Offline migration** as the migration mode.

> [!NOTE]
- > In the offline migration mode, the source SQL Server database is not available for write activity while database backup files are restored on the target Azure SQL database. Application downtime persists through the start until the completion of the migration process.
+ > In the offline migration mode, the source SQL Server database should not be used for write activity while database backup files are restored on the target SQL Server on Azure Virtual Machine. Application downtime persists from the start until the completion of the migration process.
3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.

> [!NOTE]
> If your database backups are provided in an on-premises network share, DMS will require you to set up self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to set up self-hosted integration runtime.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
2. Select **Online migration** as the migration mode.

> [!NOTE]
- > In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to duration for the cutover at the end of migration.
+ > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to duration for the cutover at the end of migration.
3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.

> [!NOTE]
> If your database backups are provided in an on-premises network share, DMS will require you to set up self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to set up self-hosted integration runtime.
event-grid Event Schema Azure Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-azure-health-data-services.md
+
+ Title: Azure Health Data Services as Event Grid source
+description: Describes the properties that are provided for Azure Health Data Services events with Azure Event Grid
+ Last updated : 02/03/2022++
+# Azure Health Data Services as an Event Grid source
+
+This article provides the properties and schema for Azure Health Data Services events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+
+## Available event types
+
+### List of events for Azure Health Data Services REST APIs
+
+The following Fast Healthcare Interoperability Resources (FHIR&#174;) resource events are triggered when calling the REST APIs.
+
+ |Event name|Description|
+ |-|--|
+ |**FhirResourceCreated** |The event emitted after a FHIR resource gets created successfully.|
+ |**FhirResourceUpdated** |The event emitted after a FHIR resource gets updated successfully.|
+ |**FhirResourceDeleted** |The event emitted after a FHIR resource gets soft deleted successfully.|
+
+## Example event
+This section contains examples of what event message data would look like for each FHIR resource event.
+
+> [!Note]
+> Event data looks similar to these examples, with the `metadataVersion` property set to a value of `1`.
+>
+> For more information, see [Azure Event Grid event schema properties](/azure/event-grid/event-schema#event-properties).
+
+### FhirResourceCreated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "e4c7f556-d72c-e7f7-1069-1e82ac76ab41",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 1
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:14:04.5613214Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "d674b9b7-7d1c-9b0a-8c48-139f3eb86c48",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataschema": "#1",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:09.6223354Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 1
+ }
+}
+```
++
+### FhirResourceUpdated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "634bd421-8467-f23c-b8cb-f6a31e41c32a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 2
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataVersion": "2",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:29:12.0618739Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "5e45229e-c663-ea98-72d2-833428f48ad0",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataschema": "#2",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:33.5147352Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 2
+ }
+}
+```
++
+### FhirResourceDeleted event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "ef289b93-3159-b833-3a44-dc6b86ed1a8a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 3
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataVersion": "3",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:31:58.5175837Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "14648a6e-d978-950e-ee9c-f84c70dba8d3",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataschema": "#3",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:38.7338799Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 3
+ }
+}
+```
++
+## Next steps
+
+* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
+* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
Here's the current list of Azure services that support creation of system topics
- [Azure Container Registry](event-schema-container-registry.md)
- [Azure Event Hubs](event-schema-event-hubs.md)
- [Azure FarmBeats](event-schema-farmbeats.md)
+- [Azure Health Data Services](event-schema-azure-health-data-services.md)
- [Azure IoT Hub](event-schema-iot-hub.md)
- [Azure Key Vault](event-schema-key-vault.md)
- [Azure Kubernetes Service](event-schema-aks.md)
expressroute Expressroute Bfd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-bfd.md
You can control the BGP timers by configuring a lower BGP keep-alive and hold-ti
In this scenario, BFD can help. BFD provides low-overhead link failure detection in a subsecond time interval.
+> [!NOTE]
+> BFD provides faster failover time when a link failure is detected, but overall connection convergence can take up to a minute for failover between ExpressRoute virtual network gateways and MSEEs.
+>
## Enabling BFD
For more information or help, check out the following links:
<!--Link References-->
[CreateCircuit]: ./expressroute-howto-circuit-portal-resource-manager.md
[CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md
-[ResetPeering]: ./expressroute-howto-reset-peering.md
+[ResetPeering]: ./expressroute-howto-reset-peering.md
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-private-link.md
After you approve the request, a private IP address gets assigned from Front Doo
Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, South Central US, UK South, and Japan East.
+The backends that support direct private endpoint connectivity are currently limited to Storage (Azure Blobs) and App Services. All other backends must be placed behind an Internal Load Balancer, as explained in the Next Steps below.
+
For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Front Door private link endpoint.

## Next steps
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
# Azure Policy exemption structure
-The Azure Policy exemptions (preview) feature is used to _exempt_ a resource hierarchy or an
+The Azure Policy exemptions feature is used to _exempt_ a resource hierarchy or an
individual resource from evaluation of initiatives or definitions. Resources that are _exempt_ count toward overall compliance, but can't be evaluated or have a temporary waiver. For more information, see [Understand scope in Azure Policy](./scope.md). Azure Policy exemptions only work with [Resource Manager modes](./definition-structure.md#resource-manager-modes) and don't work with [Resource Provider modes](./definition-structure.md#resource-provider-modes).
-> [!IMPORTANT]
-> This feature is free during **preview**. For pricing details, see
-> [Azure Policy pricing](https://azure.microsoft.com/pricing/details/azure-policy/). For more
-> information about previews, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-You use JSON to create a policy exemption. The policy exemption contains elements for:
+You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for:
- display name
- description
You use JSON to create a policy exemption. The policy exemption contains element
For example, the following JSON shows a policy exemption in the **waiver** category of a resource to an initiative assignment named `resourceShouldBeCompliantInit`. The resource is _exempt_ from only two of the policy definitions in the initiative, the `customOrgPolicy` custom policy definition
-(reference `requiredTags`) and the 'Allowed locations' built-in policy definition (ID:
+(reference `requiredTags`) and the **Allowed locations** built-in policy definition (ID:
`e56962a6-4747-49cd-b67b-bf8b01975c4c`, reference `allowedLocations`): ```json
resource hierarchy or individual resource is _exempt_ from.
## Policy definition IDs
-If the `policyAssignmentId` is for an initiative assignment, the `policyDefinitionReferenceIds`
-property may be used to specify which policy definition(s) in the initiative the subject resource
+If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceIds** property may be used to specify which policy definition(s) in the initiative the subject resource
has an exemption to. As the resource may be exempted from one or more included policy definitions, this property is an _array_. The values must match the values in the initiative definition in the `policyDefinitions.policyDefinitionReferenceId` fields.
Two exemption categories exist and are used to group exemptions:
## Expiration

To set when a resource hierarchy or an individual resource is no longer _exempt_ from an assignment,
-set the `expiresOn` property. This optional property must be in the Universal ISO 8601 DateTime
+set the **expiresOn** property. This optional property must be in the Universal ISO 8601 DateTime
format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.

> [!NOTE]
assignment.
- Learn how to [get compliance data](../how-to/get-compliance-data.md).
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
For more information about policy parameters, see
An initiative definition is a collection of policy definitions that are tailored toward achieving a singular overarching goal. Initiative definitions simplify managing and assigning policy definitions. They simplify by grouping a set of policies as one single item. For example, you could
-create an initiative titled **Enable Monitoring in Azure Security Center**, with a goal to monitor
-all the available security recommendations in your Azure Security Center.
+create an initiative titled **Enable Monitoring in Microsoft Defender for Cloud**, with a goal to monitor
+all the available security recommendations in your Microsoft Defender for Cloud instance.
> [!NOTE]
> The SDK, such as Azure CLI and Azure PowerShell, use properties and parameters named **PolicySet**
all the available security recommendations in your Azure Security Center.
Under this initiative, you would have policy definitions such as:

-- **Monitor unencrypted SQL Database in Security Center** - For monitoring unencrypted SQL databases
+- **Monitor unencrypted SQL Database in Microsoft Defender for Cloud** - For monitoring unencrypted SQL databases
and servers.
-- **Monitor OS vulnerabilities in Security Center** - For monitoring servers that don't satisfy the
+- **Monitor OS vulnerabilities in Microsoft Defender for Cloud** - For monitoring servers that don't satisfy the
configured baseline.
-- **Monitor missing Endpoint Protection in Security Center** - For monitoring servers without an
+- **Monitor missing Endpoint Protection in Microsoft Defender for Cloud** - For monitoring servers without an
installed endpoint protection agent. Like policy parameters, initiative parameters help simplify initiative management by reducing
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
The name of each built-in links to the policy definition in the Azure portal. Us
**Source** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). The built-ins are grouped by the **category** property in **metadata**. To jump to a specific **category**, use the menu on the right
-side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
+side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>Cmd</kbd>-<kbd>F</kbd> (macOS) to use your browser's search feature.
## API for FHIR
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Title: Access Azure Healthcare APIs
-description: This article describes the different ways for accessing the services in your applications using tools and programming languages.
+ Title: Access Azure Health Data Services
+description: This article describes the different ways to access Azure Health Data Services in your applications using tools and programming languages.
Previously updated : 01/06/2022 Last updated : 02/11/2022
-# Access Healthcare APIs
+# Access Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn about the different ways to access the services in your applications. After you've provisioned a FHIR service, DICOM service, or IoT connector, you can then access them in your applications using tools like Postman, cURL, REST Client in Visual Studio Code, and with programming languages such as Python and C#.
+In this article, you'll learn about the different ways to access Azure Health Data Services in your applications. After you've provisioned a FHIR service, DICOM service, or IoT connector, you can then access them in your applications using tools like Postman, cURL, REST Client in Visual Studio Code, and with programming languages such as Python and C#.
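As a quick first check, for example, you can request the FHIR service's capability statement, which the service serves without an access token. A minimal Python sketch, assuming a placeholder service URL:

```python
# Illustrative sketch: fetch the FHIR capability statement, which does not
# require an access token. The service URL is a placeholder.
import requests

fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
capability = requests.get(f"{fhir_url}/metadata").json()
print(capability["fhirVersion"])  # for example, "4.0.1"
```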
## Access the FHIR service
The IoT connector works with the IoT Hub and Event Hubs in your subscription to
## Next steps
-In this document, you learned about the tools and programming languages that you can use to access the services in your applications. To learn how to deploy an instance of the Healthcare APIs service using the Azure portal, see
+In this document, you learned about the tools and programming languages that you can use to access Azure Health Data Services in your applications. To learn how to deploy an instance of Azure Health Data Services using the Azure portal, see
>[!div class="nextstepaction"]
->[Deploy Healthcare APIs (preview) workspace using Azure portal](healthcare-apis-quickstart.md)
+>[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Title: Azure Healthcare APIs Authentication and Authorization
-description: This article provides an overview of the authentication and authorization of the Healthcare APIs.
+ Title: Azure Health Data Services Authentication and Authorization
+description: This article provides an overview of the authentication and authorization of the Azure Health Data Services.
Previously updated : 07/19/2021 Last updated : 03/14/2022
-# Authentication & Authorization for the Healthcare APIs (preview)
-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides an overview of the authentication and authorization process for accessing the Healthcare APIs services.
+# Authentication and Authorization for Azure Health Data Services
## Authentication
-The Healthcare APIs is a collection of secured managed services using [Azure Active Directory (Azure AD)](../active-directory/index.yml), a global identity provider that supports [OAuth 2.0](https://oauth.net/2/).
+ Azure Health Data Services is a collection of secured managed services using [Azure Active Directory (Azure AD)](../active-directory/index.yml), a global identity provider that supports [OAuth 2.0](https://oauth.net/2/).
-For the Healthcare APIs services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+For the Azure Health Data Services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-The Healthcare APIs do not support other identity providers. However, customers can use their own identity provider to secure applications, and enable them to interact with the Healthcare APIs by managing client applications and user data access controls.
+Azure Health Data Services doesn't support other identity providers. However, customers can use their own identity provider to secure applications, and enable them to interact with Azure Health Data Services by managing client applications and user data access controls.
The client applications are registered in the Azure AD and can be used to access the Healthcare APIs. User data access controls are done in the applications or services that implement business logic.
The client applications are registered in the Azure AD and can be used to access
Authenticated users and client applications of the Healthcare APIs must be granted with proper application roles.
-The FHIR service of the Healthcare APIs provides the following roles:
+FHIR service of Azure Health Data Services provides the following roles:
* **FHIR Data Reader**: Can read (and search) FHIR data. * **FHIR Data Writer**: Can read, write, and soft delete FHIR data.
The FHIR service of the Healthcare APIs provides the following roles:
* **FHIR Data Contributor**: Can perform all data plane operations. * **FHIR Data Converter**: Can use the converter to perform data conversion.
-The DICOM service of the Healthcare APIs provides the following roles:
+DICOM service of Azure Health Data Services provides the following roles:
* **DICOM Data Owner**: Can read, write, and delete DICOM data. * **DICOM Data Read**: Can read DICOM data.
-The IoT Connector does not require application roles, but it does rely on the "Azure Event Hubs Data Receiver" to retrieve data stored in the event hub of the customer's subscription.
+The MedTech service doesn't require application roles, but it does rely on the "Azure Event Hubs Data Receiver" to retrieve data stored in the event hub of the customer's subscription.
## Authorization
-After being granted with proper application roles, the authenticated users and client applications can access the Healthcare APIs services by obtaining a **valid access token** issued by Azure AD, and perform specific operations defined by the application roles.
+After being granted with proper application roles, the authenticated users and client applications can access Azure Health Data Services by obtaining a **valid access token** issued by Azure AD, and perform specific operations defined by the application roles.
-* For the FHIR service, the access token is specific to the service or resource.
-* For the DICOM service, the access token is granted to the `dicom.healthcareapis.azure.com` resource, not a specific service.
-* For the IoT Connector, the access token is not required because it is not exposed to the users or client applications.
+* For FHIR service, the access token is specific to the service or resource.
+* For DICOM service, the access token is granted to the `dicom.healthcareapis.azure.com` resource, not a specific service.
+* For MedTech service, the access token isn't required because it isn't exposed to the users or client applications.
### Steps for authorization

There are two common ways to obtain an access token, outlined in detail by the Azure AD documentation: [authorization code flow](../active-directory/develop/v2-oauth2-auth-code-flow.md) and [client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-For obtaining an access token for the Healthcare APIs, these are the steps using **authorization code flow**:
+For obtaining an access token for Azure Health Data Services, these are the steps using **authorization code flow**:
1. **The client sends a request to the Azure AD authorization endpoint.** Azure AD redirects the client to a sign-in page where the user authenticates using appropriate credentials (for example: username and password, or a two-factor authentication). **Upon successful authentication, an authorization code is returned to the client.** Azure AD only allows this authorization code to be returned to a registered reply URL configured in the client application registration.
For obtaining an access token for the Healthcare APIs, these are the steps using
3. **The client makes a request to the Healthcare APIs**, for example, a `GET` request to search all patients in the FHIR service. When making the request, it **includes the access token in an `HTTP` request header**, for example, **`Authorization: Bearer xxx`**.
-4. **The Healthcare APIs service validates that the token contains appropriate claims (properties in the token).** If it is valid, it completes the request and returns data to the client.
+4. **The Healthcare APIs service validates that the token contains appropriate claims (properties in the token).** If it's valid, it completes the request and returns data to the client.
-In the **client credentials flow**, permissions are granted directly to the application itself. When the application presents a token to a resource, the resource enforces that the application itself has authorization to perform an action since there is no user involved in the authentication. Therefore, it is different from the **authorization code flow** in the following ways:
+In the **client credentials flow**, permissions are granted directly to the application itself. When the application presents a token to a resource, the resource enforces that the application itself has authorization to perform an action since there's no user involved in the authentication. Therefore, it's different from the **authorization code flow** in the following ways:
-- The user or the client does not need to log in interactively
-- The authorization code is not required.
+- The user or the client doesn't need to log in interactively
+- The authorization code isn't required.
- The access token is obtained directly through application permissions.

### Access token
You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the t
|**Claim type** |**Value** |**Notes** | |||-|
-|aud |https://xxx.fhir.azurehealthcareapis.com|Identifies the intended recipient of the token. In `id_tokens`, the audience is your app's Application ID, assigned to your app in the Azure portal. Your app should validate this value and reject the token if the value does not match.|
+|aud |https://xxx.fhir.azurehealthcareapis.com|Identifies the intended recipient of the token. In `id_tokens`, the audience is your app's Application ID, assigned to your app in the Azure portal. Your app should validate this value and reject the token if the value doesn't match.|
|iss |https://sts.windows.net/{tenantid}/|Identifies the security token service (STS) that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. Your app should use the GUID portion of the claim to restrict the set of tenants that can sign in to the app, if it's applicable.|
|iat |(time stamp) |"Issued At" indicates when the authentication for this token occurred.|
|nbf |(time stamp) |The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing.|
You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the t
|aio |E2ZgYxxx |An internal claim used by Azure AD to record data for token reuse. Should be ignored.|
|appid |e97e1b8c-xxx |This is the application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD.|
|appidacr |1 |Indicates how the client was authenticated. For a public client, the value is "0". If client ID and client secret are used, the value is "1". If a client certificate was used for authentication, the value is "2".|
-|idp |https://sts.windows.net/{tenantid}/|Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account is not in the same tenant as the issuer - guests, for instance. If the claim is not present, it means that the value of iss can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the idp claim may be 'live.com' or an STS URI containing the Microsoft account tenant 9188040d-6c67-4c5b-b112-36a304b66dad.|
-|oid |For example, tenantid |This is the immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the oid claim. The Microsoft Graph will return this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user will contain a different object ID in each tenant - they are considered different accounts, even though the user logs into each account with the same credentials.|
+|idp |https://sts.windows.net/{tenantid}/|Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer - guests, for instance. If the claim isn't present, it means that the value of iss can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the idp claim may be 'live.com' or an STS URI containing the Microsoft account tenant 9188040d-6c67-4c5b-b112-36a304b66dad.|
+|oid |For example, tenantid |This is the immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the oid claim. The Microsoft Graph will return this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user will contain a different object ID in each tenant - they're considered different accounts, even though the user logs into each account with the same credentials.|
|rh |0.ARoxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.|
-|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and cannot be reassigned or reused. The subject is a pairwise identifier - it is unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements.|
+|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and can't be reassigned or reused. The subject is a pairwise identifier - it's unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements.|
|tid |For example, tenantid |A GUID that represents the Azure AD tenant that the user is from. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user belongs to. For personal accounts, the value is 9188040d-6c67-4c5b-b112-36a304b66dad. The profile scope is required in order to receive this claim.|
|uti |bY5glsxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.|
|ver |1 |Indicates the version of the token.|
To obtain an access token, you can use tools such as Postman, the Rest Client ex
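You can also acquire a token programmatically. The following Python sketch uses the MSAL library to run the client credentials flow described above and then calls the FHIR service; the tenant, client, and service URL values are placeholders for a registered confidential client application.

```python
# Illustrative sketch: acquire an access token with the client credentials
# flow and call the FHIR service. Tenant, client, and URL values are
# placeholders for a registered confidential client application.
import msal
import requests

tenant_id = "<tenant-id>"
client_id = "<client-id>"
client_secret = "<client-secret>"
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
# The ".default" scope requests the application permissions already
# granted to the app in Azure AD; on failure the result carries an
# "error" key instead of "access_token".
result = app.acquire_token_for_client(scopes=[f"{fhir_url}/.default"])

# Include the token in the Authorization header, for example to search patients.
patients = requests.get(
    f"{fhir_url}/Patient",
    headers={"Authorization": f"Bearer {result['access_token']}"},
)
print(patients.status_code)
```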
## Encryption
-When you create a new service of Azure Healthcare APIs, your data is encrypted using **Microsoft-managed keys** by default.
+When you create a new service of Azure Health Data Services, your data is encrypted using **Microsoft-managed keys** by default.
* FHIR service provides encryption of data at rest when data is persisted in the data store.
-* DICOM service provides encryption of data at rest when imaging data including embedded metadata is persisted in the data store. When metadata is extracted and persisted in the FHIR service, it is encrypted automatically.
-* IoT Connector, after data mapping and normalization, persists device messages to the FHIR service, which is encrypted automatically. In cases where device messages are sent to Azure event hubs, which uses Azure Storage to store the data, data is automatically encrypted with Azure Storage Service Encryption (Azure SSE).
+* DICOM service provides encryption of data at rest when imaging data including embedded metadata is persisted in the data store. When metadata is extracted and persisted in the FHIR service, it’s encrypted automatically.
+* IoT Connector, after data mapping and normalization, persists device messages to the FHIR service, which is encrypted automatically. In cases where device messages are sent to Azure Event Hubs, which use Azure Storage to store the data, data is automatically encrypted with Azure Storage Service Encryption (Azure SSE).
## Next steps
-In this document, you learned the authentication and authorization of the Healthcare APIs. To learn how to deploy an instance of the Healthcare APIs service, see
+In this document, you learned about the authentication and authorization of Azure Health Data Services. To learn how to deploy an instance of Azure Health Data Services, see
>[!div class="nextstepaction"]
->[Deploy Healthcare APIs (preview) workspace using Azue portal](healthcare-apis-quickstart.md)
+>[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Previously updated : 02/11/2022 Last updated : 02/15/2022
The autoscale feature is designed to scale computing resources including the dat
## What is the guidance on when to enable autoscale?
-In general, customers should consider autoscale when their workloads vary signficantly and are unpredictable.
+In general, customers should consider autoscale when their workloads vary significantly and are unpredictable.
## How to enable autoscale?
Once the change is completed, the new billing rates will be based on manual scal
## How to adjust the maximum throughput RU/s?
-When autoscale is enabled, the system calculates and sets the initial `Tmax` value. The scalability is governed by the maximum throughput `RU/s` value, or `Tmax`, and scales between `0.1 *Tmax` (or 10% `Tmax`) and `Tmax RU/s`. The `Tmax` increases automatically as the total data size grows. To ensure maximum scalability, the `Tmax` value should be kept as-is. However, customers can request that the value be changed to something betweeen 10% and 100% of the `Tmax` value.
+When autoscale is enabled, the system calculates and sets the initial `Tmax` value. The scalability is governed by the maximum throughput `RU/s` value, or `Tmax`, and scales between `0.1 *Tmax` (or 10% `Tmax`) and `Tmax RU/s`. The `Tmax` increases automatically as the total data size grows. To ensure maximum scalability, the `Tmax` value should be kept as-is. However, customers can request that the value be changed to something between 10% and 100% of the `Tmax` value.
You can increase the max `RU/s` or `Tmax` value and go as high as the service supports. When the service is busy, the throughput `RU/s` are scaled up to the `Tmax` value. When the service is idle, the throughput `RU/s` are scaled down to 10% `Tmax` value.
You should be able to see the Max data collection size over the time period sele
[ ![Screenshot of cosmosdb_collection_size](media/cosmosdb/cosmosdb-collection-size.png) ](media/cosmosdb/cosmosdb-collection-size.png#lightbox)
-Use the formular to calculate required RU/s.
+Use the formula to calculate required RU/s.
- Manual scale: storage in GB * 40
- Autoscale: storage in GB * 400
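As a worked example, applying the formula above to a hypothetical 100 GB of stored data:

```
Manual scale: 100 GB * 40  = 4,000 RU/s
Autoscale:    100 GB * 400 = 40,000 RU/s
```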
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Previously updated : 08/05/2021 Last updated : 02/15/2022
Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-
![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
-1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
-1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When you request a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
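As a sketch of step 2, a token request against the Azure AD v1.0 endpoint looks roughly like the following. The tenant ID, client ID, authorization code, reply URL, and client secret are placeholders from your own app registration; note that the v2.0 endpoint uses a `scope` parameter instead of `resource`.

```rest
POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&client_id=<CLIENT-ID>
&code=<AUTHORIZATION-CODE>
&redirect_uri=<REPLY-URL>
&client_secret=<CLIENT-SECRET>
&resource=https://<your-fhir-server>.azurehealthcareapis.com
```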
The pertinent sections of the Azure AD documentation are:
* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
* [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-There are other variations (for example on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When using the Azure API for FHIR, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+There are other variations (for example, the on-behalf-of flow) for obtaining a token. Check the Azure AD documentation for details. When you use Azure API for FHIR, there are some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
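For instance, with the Azure CLI the shortcut amounts to a single command, assuming you're already signed in with `az login`:

```azurecli
az account get-access-token --resource=https://azurehealthcareapis.com
```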
## Next steps
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Previously updated : 08/05/2021 Last updated : 02/15/2022
# Azure API for FHIR access token validation
How Azure API for FHIR validates the access token will depend on implementation
## Validate token has no issues with identity provider
-The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider, known as the `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When you use Azure AD, the full URL is:
```
GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
```
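The discovery document returned by that endpoint tells the server which issuer to expect and where to fetch the token-signing keys. An abbreviated, illustrative excerpt (real responses contain many more fields):

```json
{
  "issuer": "https://sts.windows.net/<TENANT-ID>/",
  "authorization_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token",
  "jwks_uri": "https://login.microsoftonline.com/<TENANT-ID>/discovery/keys"
}
```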
When using the Azure API for FHIR, the server will validate:
We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. But you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure Active Directory tenant.
-When using the OSS Microsoft FHIR server for Azure, the server will validate:
+When you use the OSS Microsoft FHIR server for Azure, the server will validate:
1. The token has the right `Audience` (`aud` claim).
1. The token has a role in the `roles` claim, which is allowed access to the FHIR server.
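Decoded, the relevant claims of such a token might look like the sketch below. The audience URL and the `globalAdmin` role are examples only; your deployment may use different values.

```json
{
  "aud": "https://<your-fhir-server>.azurehealthcareapis.com",
  "iss": "https://sts.windows.net/<TENANT-ID>/",
  "roles": [ "globalAdmin" ]
}
```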
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Previously updated : 10/27/2021 Last updated : 02/15/2022
# Quickstart: Use an ARM template to deploy Azure API for FHIR
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
Previously updated : 11/22/2019 Last updated : 02/15/2022
# Additional settings for Azure API for FHIR
-In this how-to guide, we will review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details.
+In this how-to guide, we'll review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details.
## Configure Database settings
For more information on how to change the default settings, see [configure datab
## Access control
-The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options are not as rich when using the local RBAC mechanism. For details, choose one of the following options:
+The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options aren't as rich when using the local RBAC mechanism. For details, choose one of the following options:
-* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you are using the Azure Active Directory tenant associated with your subscription.
+* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you're using the Azure Active Directory tenant associated with your subscription.
* [Local FHIR data plane access control](configure-local-rbac.md). Use this option only when you need to use an external Azure Active Directory tenant for data plane access control.
## Enable diagnostic logging
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 11/29/2021 Last updated : 02/15/2022
# CARIN Implementation Guide for Blue Button&#174; for Azure API for FHIR
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 12/16/2021 Last updated : 02/15/2022
# Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule introduction
The Azure API for FHIR has the following capabilities to help you configure your
The Patient Access API describes adherence to four FHIR implementation guides:
-* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they are referenced.
+* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they're referenced.
* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/https://docsupdatetracker.net/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents.
* [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
The Provider Directory API describes adherence to one implementation guide:
## Touchstone
-To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the Azure API for FHIR is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
+To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the Azure API for FHIR is configured to successfully pass various Touchstone tests. The Touchstone site has a great amount of documentation to help you get up and running.
## Next steps
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Previously updated : 12/02/2021 Last updated : 02/15/2022
# Configure Azure RBAC for FHIR
-In this article, you will learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred methods for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you are using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
+In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you're using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
## Confirm Azure RBAC mode
To use Azure RBAC, your Azure API for FHIR must be configured to use your Azure
:::image type="content" source="media/rbac/confirm-azure-rbac-mode.png" alt-text="Confirm Azure RBAC mode":::
-The **Authority** should be set to the Azure Active directory tenant associated with your subscription and there should be no GUIDs in the box labeled **Allowed object IDs**. You will also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
+The **Authority** should be set to the Azure Active Directory tenant associated with your subscription and there should be no GUIDs in the box labeled **Allowed object IDs**. You'll also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
## Assign roles
-To grant users, service principals or groups access to the FHIR data plane, click **Access control (IAM)**, then click **Role assignments** and click **+ Add**:
+To grant users, service principals or groups access to the FHIR data plane, select **Access control (IAM)**, then select **Role assignments** and select **+ Add**:
:::image type="content" source="media/rbac/add-azure-rbac-role-assignment.png" alt-text="Add Azure role assignment":::
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR
description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR.
Previously updated : 3/11/2019 Last updated : 02/15/2022
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Previously updated : 11/15/2019 Last updated : 02/15/2022
# Configure database settings
Throughput must be provisioned to ensure that sufficient system resources are av
To change this setting in the Azure portal, navigate to your Azure API for FHIR and open the Database blade. Next, change the Provisioned throughput to the desired value depending on your performance needs. You can change the value up to a maximum of 10,000 RU/s. If you need a higher value, contact Azure support.
-If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions creates a multi-page response in which pagination is implemented by using continuation tokens.
+If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions create a multi-page response in which pagination is implemented by using continuation tokens.
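In practice, a multi-page response surfaces as a `next` link in the returned search Bundle, which the client follows to fetch the next page. An illustrative sketch; the `ct` value is an opaque, server-generated continuation token:

```json
{
  "resourceType": "Bundle",
  "type": "searchset",
  "link": [
    {
      "relation": "next",
      "url": "https://<your-fhir-server>.azurehealthcareapis.com/Patient?ct=<continuation-token>"
    }
  ]
}
```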
> [!NOTE]
> Higher value means higher Azure API for FHIR throughput and higher cost of the service.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Previously updated : 01/28/2022 Last updated : 02/15/2022
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Previously updated : 01/05/2022 Last updated : 02/15/2022
ms.devlang: azurecli
# Configure local RBAC for FHIR
-This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it is not possible for you to use the Azure AD tenant associated with your subscription.
+This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it isn't possible for you to use the Azure AD tenant associated with your subscription.
> [!NOTE]
> If your FHIR service is configured to use your primary Azure AD tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).
In the authority box, enter a valid secondary Azure Active Directory tenant. Onc
You can read the article on how to [find identity object IDs](find-identity-object-ids.md) for more details.
-After entering the required Azure AD object IDs, click **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups. The object IDs are granted with all permissions, an equivalent of the "FHIR Data Contributor" role.
+After entering the required Azure AD object IDs, select **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups. The object IDs are granted with all permissions, an equivalent of the "FHIR Data Contributor" role.
-The local RBAC setting is only visible from the authentication blade; it is not visible from the Access Control (IAM) blade.
+The local RBAC setting is only visible from the authentication blade; it isn't visible from the Access Control (IAM) blade.
> [!NOTE]
> Only a single tenant is supported for RBAC or local RBAC. To disable the local RBAC function, you can change it back to the valid tenant (or primary tenant) associated with your subscription, and remove all Azure AD object IDs in the "Allowed object IDs" box.
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Previously updated : 01/20/2022 Last updated : 02/15/2022
Ensure the region for the new private endpoint is the same as the region for you
![Azure portal Basics Tab](media/private-link/private-link-portal2.png)
-For the resource type, search and select **Microsoft.HealthcareApis/services**. For the resource, select the FHIR resource. For target sub-resource, select **FHIR**.
+For the resource type, search and select **Microsoft.HealthcareApis/services**. For the resource, select the FHIR resource. For target subresource, select **FHIR**.
![Azure portal Resource Tab](media/private-link/private-link-portal1.png)
-If you do not have an existing Private DNS Zone set up, select **(New)privatelink.azurehealthcareapis.com**. If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of **privatelink.azurehealthcareapis.com**.
+If you don't have an existing Private DNS Zone set up, select **(New)privatelink.azurehealthcareapis.com**. If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of **privatelink.azurehealthcareapis.com**.
![Azure portal Configuration Tab](media/private-link/private-link-portal3.png)
After the deployment is complete, you can go back to **Private endpoint connecti
### Manual Approval
-For manual approval, select the second option under Resource, "Connect to an Azure resource by resource ID or alias". For Target sub-resource, enter "fhir" as in Auto Approval.
+For manual approval, select the second option under Resource, "Connect to an Azure resource by resource ID or alias". For Target subresource, enter "fhir" as in Auto Approval.
![Manual Approval](media/private-link/private-link-manual.png)
You can configure VNet peering from the portal or using PowerShell, CLI scripts,
### Add VNet link to the private link zone
-In the Azure portal, select the resource group of the FHIR server. Select and open the Private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the *settings* section. Click the Add button to add your second VNet to the private DNS zone. Enter the link name of your choice, select the subscription and the VNet you just created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
+In the Azure portal, select the resource group of the FHIR server. Select and open the Private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the *settings* section. Select the **Add** button to add your second VNet to the private DNS zone. Enter the link name of your choice, select the subscription and the VNet you created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
For more information on how private link DNS zone resolves the private endpoint IP address to the fully qualified domain name (FQDN) of the resource such as the FHIR server, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
Private endpoints can only be deleted from the Azure portal from the **Overview*
## Test and troubleshoot private link and VNet peering
-To ensure that your FHIR server is not receiving public traffic after disabling public network access, select the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden.
+To ensure that your FHIR server isn't receiving public traffic after disabling public network access, select the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden.
> [!NOTE]
> It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
To ensure your private endpoint can send traffic to your server:
### Use nslookup
-You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as shown below. Note that IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md)
+You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as shown below. Note that the IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
```
C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
Address: 172.21.0.4
Aliases: fhirserverxxx.azurehealthcareapis.com
```
-If the private link is not configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone cannot resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you will see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
+If the private link isn't configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone can’t resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you'll see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
``` C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Title: Data conversion for Azure API for FHIR
description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure API for FHIR.
Previously updated : 05/11/2021 Last updated : 03/02/2022
+# Converting your data to FHIR for Azure API for FHIR
-# How to convert data to FHIR (Preview)
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently, it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, and **JSON to FHIR**.
-> [!IMPORTANT]
-> This capability is in public preview, and it's provided without a service level agreement.
-> It's not recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-The $convert-data custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports two types of conversion, **C-CDA to FHIR** and **HL7v2 to FHIR** conversion.
+> [!NOTE]
+> The `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend that you use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
## Use the $convert-data endpoint
-The `$convert-data` operation is integrated into the FHIR service to run as part of the service. You can make API calls to the server to convert your data into FHIR:
+The `$convert-data` operation is integrated into the FHIR service to run as part of the service. After enabling `$convert-data` in your server, you can make API calls to the server to convert your data into FHIR:
`https://<<FHIR service base URL>>/$convert-data`
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| Parameter Name | Description | Accepted values |
| -- | -- | -- |
-| inputData | Data to be converted. | A valid JSON String|
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br><br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br>\<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br>```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04```<br><br> For **C-CDA**:<br>```CCD```, `ConsultationNote`, `DischargeSummary`, `HistoryandPhysical`, `OperativeNote`, `ProcedureNote`, `ProgressNote`, `ReferralNote`, `TransferSummary` |
+| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
+| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
+| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+
+> [!NOTE]
+> JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
> [!WARNING]
> Default templates are released under MIT License and are **not** supported by Microsoft Support.
>
> Default templates are provided only to help you get started quickly. They may get updated when we update versions of the Azure API for FHIR. Therefore, you must verify the conversion behavior and **host your own copy of templates** on an Azure Container Registry, register those to the Azure API for FHIR, and use in your API calls in order to have consistent data conversion behavior across the different versions of Azure API for FHIR.
-**Sample request:**
+#### Sample Request
```json {
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
} ```
-**Sample response:**
+#### Sample Response
```json {
You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/
## Host and use templates
-It's strongly recommended that you host your own copy of templates on ACR. There're four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+It's recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
1. Push the templates to your Azure Container Registry.
1. Enable Managed Identity on your Azure API for FHIR instance.
After creating an ACR instance, you can use the _FHIR Converter: Push Templates_
Browse to your instance of Azure API for FHIR service in the Azure portal, and then select the **Identity** blade. Change the status to **On** to enable managed identity in Azure API for FHIR.
-![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png)
+[ ![Screen image of Enable Managed Identity.](media/convert-data/fhir-mi-enabled.png) ](media/convert-data/fhir-mi-enabled.png#lightbox)
### Provide access of the ACR to Azure API for FHIR
Change the status to **On** to enable managed identity in Azure API for FHIR.
1. Assign the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
- ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ [ ![Screen image of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
For more information about assigning roles in the Azure portal, see [Azure built
You can register the ACR server using the Azure portal, or using CLI.
#### Registering the ACR server using Azure portal
-Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You will see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
#### Registering the ACR server using CLI
You can register up to 20 ACR servers in the Azure API for FHIR.
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.az
Select **Networking** of the Azure storage account from the portal.
- :::image type="content" source="media/convert-data/networking-container-registry.png" alt-text="Container registry.":::
+ :::image type="content" source="media/convert-data/networking-container-registry.png" alt-text="Screen image of the container registry.":::
Select **Selected networks**.
In the table below, you'll find the IP address for the Azure region where the Az
> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](../data-transformation/export-data.md#secure-export-to-azure-storage)
+> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](export-data.md#secure-export-to-azure-storage)
### Verify
Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
`<RegistryServer>/<imageName>@<imageDigest>`
+## Next steps
+
+In this article, you learned about data conversion for Azure API for FHIR. For more information about related GitHub Projects for Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[Related GitHub Projects for Azure API for FHIR](fhir-github-projects.md)
+
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Previously updated : 01/28/2022 Last updated : 02/28/2022 # Copy data from Azure API for FHIR to Azure Synapse Analytics
-In this article, you'll learn a couple of ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+In this article, you'll learn three ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-Copying data from the FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) OSS tool
+* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
+* Use $export and load data to Synapse using T-SQL
-* **Load exported data to Synapse using T-SQL:** Use `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
-* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
+## Using the FHIR to Synapse Sync Agent OSS tool
-## Load exported data to Synapse using T-SQL
+> [!Note]
+> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. It also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
+
+This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
+
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) for installation and usage instructions.
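Once the agent and its table/view script have run, exploring the data from Synapse Studio is plain SQL. A hypothetical query; the `fhirdb` database and `fhir.Patient` view names are placeholders for whatever your deployment of the script actually created:

```sql
-- Hypothetical database/schema/view names; adjust to your deployment.
SELECT TOP 10 *
FROM fhirdb.fhir.Patient;
```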
+
+## Using the FHIR to CDM pipeline generator OSS tool
+
+> [!Note]
+> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using the $export API, transforming it to CSV format, and writing to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+
+This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
-### `$export` for moving FHIR data into Azure Data Lake Gen 2 storage
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) for installation and usage instructions.
+
+## Loading exported data to Synapse using T-SQL
+
+In this approach, you use the FHIR `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Subsequently, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
:::image type="content" source="media/export-data/export-azure-storage-option.png" alt-text="Azure storage to Synapse using $export." lightbox="media/export-data/export-azure-storage-option.png":::
-#### Configure your FHIR server to support `$export`
+### Using `$export` to copy data
+
+#### Configuring `$export` in the FHIR server
-Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export. If you use `$export`, you get de-identification feature by default its capability is already integrated in `$export`.
+Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export.
-To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You’ll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add role assignment, (3) select your storage account for `$export`. More step by step can be found [here](./configure-export-data.md).
+To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add role assignment, (3) select your storage account for `$export`. More step-by-step instructions can be found [here](./configure-export-data.md).
You can configure the server to export the data to any kind of Azure storage account, but we recommend exporting to ADL Gen 2 for best alignment with Synapse.
After configuring your FHIR server, you can follow the [documentation](./export-
```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
```
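The `$export` operation is asynchronous. A hedged sketch of the request pattern, following the FHIR bulk data access specification: the call carries the two headers below, and the server replies `202 Accepted` with a `Content-Location` header pointing at a status URL that you poll until the export completes.

```rest
GET https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
Accept: application/fhir+json
Prefer: respond-async
```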
-You can also use `_type` parameter in the `$export` call above to restrict the resources we you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
+You can also use `_type` parameter in the `$export` call above to restrict the resources that you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
_type=Patient,MedicationRequest,Condition
```
For more information on the different parameters supported, check out our `$export` page section on the [query parameters](./export-data.md#settings-and-parameters).
-### Create a Synapse workspace
+### Using Synapse for Analytics
-Before using Synapse, you'll need a Synapse workspace. You’ll create an Azure Synapse Analytics service on Azure portal. More step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+#### Creating a Synapse workspace
-After creating a workspace, you can view your workspace on Synapse Studio by signing into your workspace on https://web.azuresynapse.net, or launching Synapse Studio in the Azure portal.
+Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service in the Azure portal. A step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+
+After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.
#### Creating a linked service between Azure storage and Synapse
-To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
+To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account, where you've exported your data, with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
1. In Synapse Studio, browse to the **Manage** tab and under **External connections**, select **Linked services**.
2. Select **New** to add a new linked service.
3. Select **Azure Data Lake Storage Gen2** from the list and select **Continue**.
4. Enter your authentication credentials. Select **Create** when finished.
-Now that you have a linked service between your ADL Gen 2 storage and Synapse, you’re ready to use Synapse SQL pools to load and analyze your FHIR data.
+Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
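From here, a serverless SQL pool can read the exported `NDJSON` files directly. A minimal sketch, assuming a `Patient.ndjson` file was exported to the linked storage account; the storage account and container names are placeholders:

```sql
-- Read each NDJSON line as one document, then project FHIR fields with OPENJSON.
SELECT pat.id, pat.gender, pat.birthDate
FROM OPENROWSET(
    BULK 'https://<storage-account>.blob.core.windows.net/<container>/Patient.ndjson',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b'
) WITH (doc NVARCHAR(MAX)) AS rows
CROSS APPLY OPENJSON(doc)
WITH (
    id VARCHAR(64) '$.id',
    gender VARCHAR(16) '$.gender',
    birthDate VARCHAR(16) '$.birthDate'
) AS pat;
```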