Updates from: 03/18/2022 02:16:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Create Resource Forest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-resource-forest-powershell.md
To complete this article, you need the following resources and privileges:
* Install and configure Azure AD PowerShell.
* If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
* Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Azure AD DS resources.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Azure AD DS resources.
## Sign in to the Azure portal
For more conceptual information about forest types in Azure AD DS, see [What are
[Install-Script]: /powershell/module/powershellget/install-script
<!-- EXTERNAL LINKS -->
-[powershell-gallery]: https://www.powershellgallery.com/
+[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
To prepare the managed domain for migration, complete the following steps:
1. Create a variable to hold the credentials used by the migration script, using the [Get-Credential][get-credential] cmdlet.
- The user account you specify needs [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS and [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Azure AD DS resources.
+ The user account you specify needs [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS and [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Azure AD DS resources.
When prompted, enter an appropriate user account and password:
With your managed domain migrated to the Resource Manager deployment model, [cre
[migration-benefits]: concepts-migration-benefits.md
<!-- EXTERNAL LINKS -->
-[powershell-script]: https://www.powershellgallery.com/packages/Migrate-Aadds/
+[powershell-script]: https://www.powershellgallery.com/packages/Migrate-Aadds/
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, complete the tutorial to [create and configure an Azure Active Directory Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
## Scoped synchronization overview
To learn more about the synchronization process, see [Understand synchronization
[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
<!-- EXTERNAL LINKS -->
-[Connect-AzureAD]: /powershell/module/azuread/connect-azuread
+[Connect-AzureAD]: /powershell/module/azuread/connect-azuread
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, complete the tutorial to [create and configure an Azure Active Directory Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to change the Azure AD DS synchronization scope.
## Scoped synchronization overview
To learn more about the synchronization process, see [Understand synchronization
[concepts-sync]: synchronization.md
[tutorial-create-instance]: tutorial-create-instance.md
[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
To complete this article, you need the following resources:
* Install and configure Azure AD PowerShell.
* If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
* Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.

## DNS naming requirements
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription
[cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md
[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain
-[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment
+[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
To complete this tutorial, you need the following resources and privileges:
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
* The *LDP.exe* tool installed on your computer.
* If needed, [install the Remote Server Administration Tools (RSAT)][rsat] for *Active Directory Domain Services and LDAP*.
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable secure LDAP.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable secure LDAP.
## Sign in to the Azure portal
In this tutorial, you learned how to:
<!-- EXTERNAL LINKS -->
[rsat]: /windows-server/remote/remote-server-administration-tools
[ldap-query-basics]: /windows/desktop/ad/creating-a-query-filter
-[New-SelfSignedCertificate]: /powershell/module/pki/new-selfsignedcertificate
+[New-SelfSignedCertificate]: /powershell/module/pki/new-selfsignedcertificate
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
* If needed, the first tutorial [creates and configures an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
To see this managed domain in action, create and join a virtual machine to the d
[create-azure-ad-ds-instance]: tutorial-create-instance.md
[create-join-windows-vm]: join-windows-vm.md
[peering-overview]: ../virtual-network/virtual-network-peering-overview.md
-[network-considerations]: network-considerations.md
+[network-considerations]: network-considerations.md
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
To complete this tutorial, you need the following resources and privileges:
## Sign in to the Azure portal
-In this tutorial, you create and configure the outbound forest trust from Azure AD DS using the Azure portal. To get started, first sign in to the [Azure portal](https://portal.azure.com). You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to modify an Azure AD DS instance.
+In this tutorial, you create and configure the outbound forest trust from Azure AD DS using the Azure portal. To get started, first sign in to the [Azure portal](https://portal.azure.com). You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to modify an Azure AD DS instance.
## Networking considerations
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.

Although not required for Azure AD DS, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
To see this managed domain in action, create and join a virtual machine to the d
[availability-zones]: ../availability-zones/az-overview.md
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus
-<!-- EXTERNAL LINKS -->
+<!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
* You need Domain Services Contributor Azure role to create the required Azure AD DS resources.
* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain.
Before you domain-join VMs and deploy applications that use the managed domain,
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus
<!-- EXTERNAL LINKS -->
-[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
+[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Based on the attribute-mapping, during full sync Azure AD provisioning service s
For each SuccessFactors user, the provisioning service looks for an account in the target (Azure AD/on-premises Active Directory) using the matching attribute defined in the mapping. For example: if *personIdExternal* maps to *employeeId* and is set as the matching attribute, then the provisioning service uses the *personIdExternal* value to search for the user with *employeeId* filter. If a user match is found, then it updates the target attributes. If no match is found, then it creates a new entry in the target.
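The match-then-update flow described above can be sketched in a few lines of Python. This is a minimal illustration only; the in-memory list and the `sync_worker` helper are hypothetical stand-ins for the directory queries the provisioning service actually performs.

```python
# Minimal sketch of the matching step. "target_users" stands in for the
# target directory (Azure AD / on-premises AD); names here are illustrative.

def sync_worker(worker, target_users, matching_attr="employeeId"):
    """Match a SuccessFactors worker to a target account, then update or create."""
    match_value = worker["personIdExternal"]
    for user in target_users:
        if user.get(matching_attr) == match_value:
            # A match was found: update the target attributes.
            user["displayName"] = worker["displayName"]
            return "updated"
    # No match found: create a new entry in the target.
    target_users.append({matching_attr: match_value,
                         "displayName": worker["displayName"]})
    return "created"

directory = [{"employeeId": "1001", "displayName": "Old Name"}]
result = sync_worker({"personIdExternal": "1001", "displayName": "New Name"}, directory)
```

Here `result` is `"updated"` because an account with `employeeId` 1001 already exists; a worker with an unmatched `personIdExternal` would instead be appended as a new entry.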
-To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query below with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query.
+To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query below with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query. If the "in" filter does not work, you can try the "eq" filter.
``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson?$format=json&
By using JSONPath transformation, you can customize the behavior of the Azure AD
This section covers how you can customize the provisioning app for the following HR scenarios:
* [Retrieving additional attributes](#retrieving-additional-attributes)
* [Retrieving custom attributes](#retrieving-custom-attributes)
-* [Handling worker conversion scenario](#handling-worker-conversion-scenario)
-* [Handling rehire scenario](#handling-rehire-scenario)
+* [Handling worker conversion and rehire scenario](#handling-worker-conversion-and-rehire-scenario)
* [Handling global assignment scenario](#handling-global-assignment-scenario)
* [Handling concurrent jobs scenario](#handling-concurrent-jobs-scenario)
+* [Retrieving position details](#retrieving-position-details)
+* [Provisioning users in the Onboarding module](#provisioning-users-in-the-onboarding-module)
### Retrieving additional attributes
Extending this scenario:
* If you want to map *custom35* attribute from the *User* entity, then use the JSONPath `$.employmentNav.results[0].userNav.custom35`
* If you want to map *customString35* attribute from the *EmpEmployment* entity, then use the JSONPath `$.employmentNav.results[0].customString35`
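As a plain-Python illustration of where these two JSONPaths point, consider a trimmed, hypothetical OData payload; dictionary indexing follows the same path as each JSONPath expression.

```python
# Hypothetical trimmed OData response for one worker; values are made up.
worker = {
    "employmentNav": {
        "results": [
            {
                "customString35": "cost-center-42",     # on the EmpEmployment entity
                "userNav": {"custom35": "badge-7"},     # on the nested User entity
            }
        ]
    }
}

# $.employmentNav.results[0].userNav.custom35
custom35 = worker["employmentNav"]["results"][0]["userNav"]["custom35"]

# $.employmentNav.results[0].customString35
custom_string35 = worker["employmentNav"]["results"][0]["customString35"]
```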
-### Handling worker conversion scenario
+### Handling worker conversion and rehire scenario
-Worker conversion is the process of converting an existing full-time employee to a contractor or a contractor to full-time. In this scenario, Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. The *User* entity nested under the previous *EmpEmployment* entity is set to null. To handle this scenario so that the new employment data shows up when a conversion occurs, you can bulk update the provisioning app schema using the steps listed below:
+**About worker conversion scenario:** Worker conversion is the process of converting an existing full-time employee to a contractor or a contractor to full-time. In this scenario, Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. The *User* entity nested under the previous *EmpEmployment* entity is set to null.
+
+**About rehire scenario:** In SuccessFactors, there are two options to process rehires:
+* Option 1: Create a new person profile in Employee Central
+* Option 2: Reuse existing person profile in Employee Central
+
+If your HR process uses Option 1, then no changes are required to the provisioning schema.
+If your HR process uses Option 2, then Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity.
+
+To handle both these scenarios so that the new employment data shows up when a conversion or rehire occurs, you can bulk update the provisioning app schema using the steps listed below:
1. Open the attribute-mapping blade of your SuccessFactors provisioning app.
1. Scroll down and click **Show advanced options**.
Worker conversion is the process of converting an existing full-time employee to
   >![Screenshot shows the Schema editor with Download selected to save a copy of the schema.](media/sap-successfactors-integration-reference/download-schema.png#lightbox)
1. In the schema editor, press the Ctrl-H key to open the find-replace control.
1. In the find text box, copy and paste the value `$.employmentNav.results[0]`.
-1. In the replace text box, copy, and paste the value `$.employmentNav.results[?(@.userNav != null)]`. Note the whitespace surrounding the `!=` operator, which is important for successful processing of the JSONPath expression.
- >![find-replace-conversion](media/sap-successfactors-integration-reference/find-replace-conversion-scenario.png#lightbox)
-1. Click on the "replace all" option to update the schema.
-1. Save the schema.
-1. The above process updates all JSONPath expressions as follows:
- * Old JSONPath: `$.employmentNav.results[0].jobInfoNav.results[0].departmentNav.name_localized`
- * New JSONPath: `$.employmentNav.results[?(@.userNav != null)].jobInfoNav.results[0].departmentNav.name_localized`
-1. Restart provisioning.
-
-### Handling rehire scenario
-
-Usually there are two options to process rehires:
-* Option 1: Create a new person profile in Employee Central
-* Option 2: Reuse existing person profile in Employee Central
-
-If your HR process uses Option 1, then no changes are required to the provisioning schema.
-If your HR process uses Option 2, then Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. Unlike the conversion scenario, the *User* entity in the previous *EmpEmployment* entity is not set to null.
-
-To handle this rehire scenario (option 2), so that the latest employment data shows up for rehire profiles, you can bulk update the provisioning app schema using the steps listed below:
-
-1. Open the attribute-mapping blade of your SuccessFactors provisioning app.
-1. Scroll down and click **Show advanced options**.
-1. Click on the link **Review your schema here** to open the schema editor.
-1. Click on the **Download** link to save a copy of the schema before editing.
-1. In the schema editor, press Ctrl-H key to open the find-replace control.
-1. In the find text box, copy, and paste the value `$.employmentNav.results[0]`
1. In the replace text box, copy, and paste the value `$.employmentNav.results[-1:]`. This JSONPath expression returns the latest *EmpEmployment* record.
+ >![find-replace-conversion](media/sap-successfactors-integration-reference/find-replace-conversion-scenario.png#lightbox)
1. Click on the "replace all" option to update the schema.
1. Save the schema.
1. The above process updates all JSONPath expressions as follows:
To handle this rehire scenario (option 2), so that the latest employment data sh
* New JSONPath: `$.employmentNav.results[-1:].jobInfoNav.results[0].departmentNav.name_localized` 1. Restart provisioning.
-This schema change also supports the worker conversion scenario.
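The `$.employmentNav.results[-1:]` expression used in the replacement behaves like a Python list slice: it selects the last, i.e. latest, *EmpEmployment* record. A minimal sketch with hypothetical employment data:

```python
# Hypothetical employmentNav.results array: oldest employment first, newest last.
results = [
    {"startDate": "2015-01-01", "userNav": None},                     # prior employment
    {"startDate": "2021-06-01", "userNav": {"username": "worker1"}},  # latest employment
]

# JSONPath [-1:] has the same meaning as this Python slice: a one-element
# list containing only the newest EmpEmployment record.
latest = results[-1:]
```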
### Handling global assignment scenario
To fetch attributes belonging to the standard assignment and global assignment u
1. Click on the **Download** link to save a copy of the schema before editing.
1. In the schema editor, press the Ctrl-H key to open the find-replace control.
1. In the find text box, copy and paste the value `$.employmentNav.results[0]`.
-1. In the replace text box, copy, and paste the value `$.employmentNav.results[?(@.assignmentClass == 'ST')]`.
+1. In the replace text box, copy and paste the value `$.employmentNav.results[?(@.assignmentClass == 'ST')]`. Note the whitespace surrounding the `==` operator, which is important for successful processing of the JSONPath expression.
1. Click on the "replace all" option to update the schema.
1. Save the schema.
1. The above process updates all JSONPath expressions as follows:
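The `?(@.assignmentClass == 'ST')` filter keeps only the employment records whose *assignmentClass* is `ST` (the standard assignment). A minimal Python sketch of the same filtering logic, with hypothetical data:

```python
# Hypothetical employmentNav.results array for a worker on global assignment.
results = [
    {"assignmentClass": "ST", "jobTitle": "Engineer"},           # standard assignment
    {"assignmentClass": "GA", "jobTitle": "Engineer (Berlin)"},  # global assignment
]

# The JSONPath filter ?(@.assignmentClass == 'ST') corresponds to this comprehension:
standard = [r for r in results if r["assignmentClass"] == "ST"]
```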
Usually the *personIdExternal* attribute value in SuccessFactors matches the *us
1. Ensure that an extensionAttribute *(extensionAttribute1-15)* in Azure AD always stores the *userId* of every worker's active employment record. This can be achieved by mapping SuccessFactors *userId* attribute to an extensionAttribute in Azure AD.
   > [!div class="mx-imgBorder"]
   > ![Inbound UserID attribute mapping](./media/sap-successfactors-integration-reference/inbound-userid-attribute-mapping.png)
-1. For guidance regarding JSONPath settings, refer to the section [Handling rehire scenario](#handling-rehire-scenario) to ensure the *userId* value of the active employment record flows into Azure AD.
+1. For guidance regarding JSONPath settings, refer to the section [Handling worker conversion and rehire scenario](#handling-worker-conversion-and-rehire-scenario) to ensure the *userId* value of the active employment record flows into Azure AD.
1. Save the mapping.
1. Run the provisioning job to ensure that the *userId* values flow into Azure AD.

> [!NOTE]
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
The following *Get_Workers* request queries for effective-dated updates that hap
If any of the above queries returns a future-dated hire, then the following *Get_Workers* request is used to fetch information about a future-dated new hire. The *WID* attribute of the new hire is used to perform the lookup and the effective date is set to the date and time of hire.
+>[!NOTE]
+>Future-dated hires in Workday have the Active field set to "0", and it changes to "1" on the hire date. By design, the connector queries for future-hire information effective on the hire date, which is why it always retrieves the future hire's Worker profile with the Active field set to "1". This allows you to set up the Azure AD profile for future hires in advance, with all the right information pre-populated. If you'd like to delay enabling the Azure AD account for future hires, use the transformation function [DateDiff](functions-for-customizing-application-data.md#datediff).
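The delayed-enablement idea mentioned in the note reduces to comparing the hire date with the current date. In the connector you would express this with the DateDiff transformation in the attribute mapping; the sketch below (a hypothetical `account_enabled` helper) only illustrates the underlying comparison.

```python
from datetime import date

def account_enabled(hire_date: date, today: date) -> bool:
    """Enable the account only once the hire date has arrived.

    Hypothetical illustration of the DateDiff-based delay: a future-dated
    hire stays disabled until the hire date is reached.
    """
    return (hire_date - today).days <= 0
```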
++ ```xml <!-- Workday incremental sync query to get new hire data effective as on hire date/first day of work --> <!-- Replace version with Workday Web Services version present in your connection URL -->
To get this data, as part of the *Get_Workers* response, use the following XPATH
## Handling different HR scenarios
+This section covers how you can customize the provisioning app for the following HR scenarios:
+* [Support for worker conversions](#support-for-worker-conversions)
+* [Retrieving international job assignments and secondary job details](#retrieving-international-job-assignments-and-secondary-job-details)
+
### Support for worker conversions

When a worker converts from employee to contingent worker or from contingent worker to employee, the Workday connector automatically detects this change and links the AD account to the active worker profile so that all AD attributes are in sync with the active worker profile. No configuration changes are required to enable this functionality. Here is the description of the provisioning behavior when a conversion happens.
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Let's cover each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled.":::
-1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure/azure-government/compare-azure-government-global-azure#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
+1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
The endpoint performs mutual authentication and requests the client certificate as part of the TLS handshake. You will see an entry for this request in the Sign-in logs. There is a [known issue](#known-issues) where User ID is displayed instead of Username.
For the next test scenario, configure the authentication policy where the **poli
- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
- [FAQ](certificate-based-authentication-faq.yml)
-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
+- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Organizations can use this control to require Azure AD to pass device informatio
For more information on the use and configuration of app-enforced restrictions, see the following articles:
- [Enabling limited access with SharePoint Online](/sharepoint/control-access-from-unmanaged-devices)
-- [Enabling limited access with Exchange Online](https://aka.ms/owalimitedaccess)
+- [Enabling limited access with Exchange Online](/microsoft-365/security/office-365-security/secure-email-recommended-policies?view=o365-worldwide#limit-access-to-exchange-online-from-outlook-on-the-web)
## Conditional Access application control
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
More information about the location condition in Conditional Access can be found
1. Under **Include**, select **Selected locations** 1. Select the blocked location you created for your organization. 1. Click **Select**.
-1. Under **Access controls** > select **Block Access**, and select **Select**.
+1. Under **Access controls** > select **Block Access**, and click **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy.
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Bringing your devices to Azure AD maximizes user productivity through single sig
- (**For federated domains**) At least Windows Server 2012 R2 with Active Directory Federation Services installed. - Users can register their devices with Azure AD. More information about this setting can be found under the heading **Configure device settings**, in the article, [Configure device settings](device-management-azure-portal.md#configure-device-settings).
+### Network connectivity requirements
+ Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network: - `https://enterpriseregistration.windows.net`
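A pre-flight check for the connectivity requirement above could look like the following sketch. The endpoint list is abbreviated to the single URL named here; a real check would use the full list from the official documentation, and the helper name is illustrative.

```python
# Minimal sketch of a hybrid Azure AD join connectivity pre-flight check.
# REQUIRED_URLS is abbreviated to the one endpoint named in the article.
import socket
from urllib.parse import urlparse

REQUIRED_URLS = [
    "https://enterpriseregistration.windows.net",
]

def reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the URL's host on port 443 succeeds."""
    host = urlparse(url).hostname
    try:
        with socket.create_connection((host, 443), timeout=timeout):
            return True
    except OSError:
        return False
```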
If you experience issues with completing hybrid Azure AD join for domain-joined
- [Downlevel device enablement](howto-hybrid-join-downlevel.md) - [Hybrid Azure AD join verification](howto-hybrid-join-verify.md)-- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
+- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
The sensitivity label option is only displayed for groups when all the following
1. Labels are synchronized to Azure AD with the Execute-AzureAdLabelSync cmdlet in the Security & Compliance PowerShell module. It can take up to 24 hours after synchronization for the label to be available to Azure AD. 1. The group is a Microsoft 365 group. 1. The organization has an active Azure Active Directory Premium P1 license.
-1. The [sensitivity label scope](https://docs.microsoft.com/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide#label-scopes&preserve-view=true) must be configured for Groups & Sites.
+1. The [sensitivity label scope](/microsoft-365/compliance/sensitivity-labels?preserve-view=true&view=o365-worldwide#label-scopes) must be configured for Groups & Sites.
3. The current signed-in user has sufficient privileges to assign labels. The user must be either a Global Administrator, Group Administrator, or the group owner.
-4. The current signed-in user must be within the scope of the [sensitivity label publishing policy](https://docs.microsoft.com/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide#what-label-policies-can-do&preserve-view=true)
+4. The current signed-in user must be within the scope of the [sensitivity label publishing policy](/microsoft-365/compliance/sensitivity-labels?preserve-view=true&view=o365-worldwide#what-label-policies-can-do)
Please make sure all the conditions are met in order to assign labels to a group.
If you must make a change, use an [Azure AD PowerShell script](https://github.co
- [Use sensitivity labels with Microsoft Teams, Microsoft 365 groups, and SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) - [Update groups after label policy change manually with Azure AD PowerShell script](https://github.com/microsoftgraph/powershell-aad-samples/blob/master/ReassignSensitivityLabelToO365Groups.ps1) - [Edit your group settings](../fundamentals/active-directory-groups-settings-azure-portal.md)-- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
+- [Manage groups using PowerShell commands](../enterprise-users/groups-settings-v2-cmdlets.md)
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
Group-based licensing in Azure Active Directory (Azure AD) introduces the concept of users in a licensing error state. In this article, we explain the reasons why users might end up in this state.
-When you assign licenses directly to individual users, without using group-based licensing, the assignment operation might fail. For example, when you execute the PowerShell cmdlet `Set-MsolUserLicense` on a user system, the cmdlet can fail for many reasons that are related to business logic. For example, there might be an insufficient number of licenses or a conflict between two service plans that can't be assigned at the same time. The problem is immediately reported back to you.
+When you assign licenses directly to individual users, without using group-based licensing, the assignment operation might fail for reasons that are related to business logic. For example, there might be an insufficient number of licenses or a conflict between two service plans that can't be assigned at the same time. The problem is immediately reported back to you.
When you're using group-based licensing, the same errors can occur, but they happen in the background while the Azure AD service is assigning licenses. For this reason, the errors can't be communicated to you immediately. Instead, they're recorded on the user object and then reported via the administrative portal. The original intent to license the user is never lost, but it's recorded in an error state for future investigation and resolution.
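The contrast between the two failure modes can be sketched as follows. This is a toy model, not a real Azure AD API: direct assignment surfaces the business-logic error immediately, while group-based assignment records it on the user object for later investigation.

```python
# Illustrative contrast (hypothetical names throughout) between the two
# failure modes described above.
class User:
    def __init__(self, name):
        self.name = name
        self.licenses = set()
        self.license_error = None  # set by background group processing

def assign_direct(user, sku, available):
    """Direct assignment: business-logic errors are reported immediately."""
    if available < 1:
        raise RuntimeError("insufficient licenses")
    user.licenses.add(sku)

def assign_via_group(user, sku, available):
    """Group-based assignment: errors are recorded on the user, not raised."""
    if available < 1:
        user.license_error = "insufficient licenses"
    else:
        user.licenses.add(sku)
```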
active-directory Active Directory Properties Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-properties-area.md
You add your organization's privacy information in the **Properties** area of Az
- **Technical contact.** Type the email address for the person to contact for technical support within your organization.
- - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach related to Azure Active Directory services . If there's no person listed here, Microsoft contacts your global administrators. For Microsoft 365 related privacy incident notifications please see [Microsoft 365 Message center FAQs](https://docs.microsoft.com/microsoft-365/admin/manage/message-center?view=o365-worldwide#frequently-asked-questions&preserve-view=true)
+ - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach related to Azure Active Directory services . If there's no person listed here, Microsoft contacts your global administrators. For Microsoft 365 related privacy incident notifications please see [Microsoft 365 Message center FAQs](/microsoft-365/admin/manage/message-center?preserve-view=true&view=o365-worldwide#frequently-asked-questions)
- **Privacy statement URL.** Type the link to your organization's document that describes how your organization handles both internal and external guest's data privacy.
You add your organization's privacy information in the **Properties** area of Az
## Next steps - [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md)-- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
+- [Add or change profile information for a user in Azure Active Directory](active-directory-users-profile-azure-portal.md)
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
SSH with Azure AD
## Implement SSH with Azure AD
-* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](https://docs.microsoft.com/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux)
+* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](../devices/howto-vm-sign-in-azure-ad-linux.md)
* [OAuth 2.0 device code flow - Microsoft identity platform ](../develop/v2-oauth2-device-code.md)
-* [Integrate with Azure Active Directory (akamai.com)](https://learn.akamai.com/en-us/webhelp/enterprise-application-access/enterprise-application-access/GUID-6B16172C-86CC-48E8-B30D-8E678BF3325F.html)
+* [Integrate with Azure Active Directory (akamai.com)](https://learn.akamai.com/en-us/webhelp/enterprise-application-access/enterprise-application-access/GUID-6B16172C-86CC-48E8-B30D-8E678BF3325F.html)
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
There are several mechanisms available for creating and managing the lifecycle o
[Multi-tenant common solutions](multi-tenant-common-solutions.md)
-[Multi-tenant synchronization from Active Directory](https://docs.microsoft.com/azure/active-directory/hybrid/plan-connect-topologies#multiple-azure-ad-tenants.md)
+[Multi-tenant synchronization from Active Directory](../hybrid/plan-connect-topologies.md#multiple-azure-ad-tenants)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
In February 2022 we added the following 20 new applications in our App gallery w
[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/cirros-sl/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md), [Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
-You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](https://aka.ms/AppsTutorial),
+You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md),
-For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](https://aka.ms/AzureADAppRequest)
+For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md)
As the device login flow will start September 30, 2021, it may not be availabl
The text and design on the Conditional Access blocking screen shown to users when their device is marked as non-compliant has been updated. Users will be blocked until they take the necessary actions to meet their company's device compliance policies. Additionally, we have streamlined the flow for a user to open their device management portal. These improvements apply to all conditional access supported OS platforms. [Learn more](https://support.microsoft.com/account-billing/troubleshooting-the-you-can-t-get-there-from-here-error-message-479a9c42-d9d1-4e44-9e90-24bbad96c251)
-
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
On the **Connect to Azure AD** page, enter a global admin account and password.
You might want to use an account in the default *onmicrosoft.com* domain, which comes with your Azure AD tenant. This account is used only to create a service account in Azure AD. It's not used after the installation finishes. >[!NOTE]
->A best practice is to avoid using on-premises synced accounts for Azure AD role assignments. If the on premises account is compromised, this can be used to compromise your Azure AD resources as well. For a complete list of best practices refer to [Best practices for Azure AD roles](https://docs.microsoft.com/azure/active-directory/roles/best-practices)
+>A best practice is to avoid using on-premises synced accounts for Azure AD role assignments. If the on premises account is compromised, this can be used to compromise your Azure AD resources as well. For a complete list of best practices refer to [Best practices for Azure AD roles](../roles/best-practices.md)
![Screenshot showing the "Connect to Azure AD" page.](./media/how-to-connect-install-custom/connectaad.png)
Now that you have installed Azure AD Connect, you can [verify the installation a
For more information about the features that you enabled during the installation, see [Prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md) and [Azure AD Connect Health](how-to-connect-health-sync.md).
-For more information about other common topics, see [Azure AD Connect sync: Scheduler](how-to-connect-sync-feature-scheduler.md) and [Integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+For more information about other common topics, see [Azure AD Connect sync: Scheduler](how-to-connect-sync-feature-scheduler.md) and [Integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
This topology implements the following use cases:
* Only one Azure AD tenant sync can be configured to write back to Active Directory for the same object. This includes device and group writeback as well as Hybrid Exchange configurations – these features can only be configured in one tenant. The only exception here is Password Writeback – see below. * It is supported to configure Password Hash Sync from Active Directory to multiple Azure AD tenants for the same user object. If Password Hash Sync is enabled for a tenant, then Password Writeback may be enabled as well, and this can be done on multiple tenants: if the password is changed on one tenant, then password writeback will update it in Active Directory, and Password Hash Sync will update the password in the other tenants. * It is not supported to add and verify the same custom domain name in more than one Azure AD tenant, even if these tenants are in different Azure environments.
-* It is not supported to configure hybrid experiences that utilize forest level configuration in AD, such as Seamless SSO and Hybrid Azure AD Join (non-targeted approach), with more than one tenant. Doing so would overwrite the configuration of the other tenant, making it no longer usable. You can find additional information in [Plan your hybrid Azure Active Directory join deployment](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-plan#hybrid-azure-ad-join-for-single-forest-multiple-azure-ad-tenants).
+* It is not supported to configure hybrid experiences that utilize forest level configuration in AD, such as Seamless SSO and Hybrid Azure AD Join (non-targeted approach), with more than one tenant. Doing so would overwrite the configuration of the other tenant, making it no longer usable. You can find additional information in [Plan your hybrid Azure Active Directory join deployment](../devices/hybrid-azuread-join-plan.md#hybrid-azure-ad-join-for-single-forest-multiple-azure-ad-tenants).
* You can synchronize device objects to more than one tenant but a device can be Hybrid Azure AD Joined to only one tenant. * Each Azure AD Connect instance should be running on a domain-joined machine.
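The password flow in the bullets above can be simulated with a toy model: a password change in one tenant is written back to on-premises Active Directory, and Password Hash Sync then propagates it to every synced tenant. All names are hypothetical; this is not Azure AD Connect code.

```python
# Toy simulation of password writeback plus password hash sync across
# multiple tenants, as described in the supported-topology bullets above.
class Directory:
    def __init__(self, name):
        self.name = name
        self.passwords = {}

def change_password(user, new_pw, tenant, on_prem_ad, synced_tenants):
    """Change in one tenant, write back to AD, then PHS to all tenants."""
    tenant.passwords[user] = new_pw
    on_prem_ad.passwords[user] = new_pw          # password writeback
    for t in synced_tenants:                     # password hash sync
        t.passwords[user] = on_prem_ad.passwords[user]
```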
To learn how to install Azure AD Connect for these scenarios, see [Custom instal
Learn more about the [Azure AD Connect sync](how-to-connect-sync-whatis.md) configuration.
-Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
In this tutorial, learn how to integrate F5's BIG-IP based Secure socket layer
Enabling a BIG-IP SSL-VPN for Azure AD single sign-on (SSO) provides many benefits, including: -- Improved Zero trust governance through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+- Improved Zero trust governance through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
- [Password-less authentication to the VPN service](https://www.microsoft.com/security/business/identity/passwordless) - Manage Identities and access from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/) To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
-Despite these great value adds, classic VPNs do however remain network orientated, often providing little to zero fine grained access to corporate applications. For this reason, we encourage moving to a more Identity centric approach at achieving Zero Trust [access on a per application basis](/azure/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad).
+Despite these great value adds, classic VPNs do however remain network orientated, often providing little to zero fine grained access to corporate applications. For this reason, we encourage moving to a more Identity centric approach at achieving Zero Trust [access on a per application basis](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md).
## Scenario description
Familiarizing yourself with [F5 BIG-IP terminology](https://www.f5.com/services/
## Add F5 BIG-IP from the Azure AD gallery
-Setting up a SAML federation trust between the BIG-IP allows the Azure AD BIG-IP to hand off the pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview) to Azure AD, before granting access to the published VPN service.
+Setting up a SAML federation trust between the BIG-IP allows the Azure AD BIG-IP to hand off the pre-authentication and [Conditional Access](../conditional-access/overview.md) to Azure AD, before granting access to the published VPN service.
1. Sign in to the Azure AD portal using an account with application admin rights
The F5 VPN application should also be visible as a target resource in Azure AD C
- [The end of passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless) -- [Five steps to full application integration with Azure AD](/azure/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad)
+- [Five steps to full application integration with Azure AD](../fundamentals/five-steps-to-full-application-integration-with-azure-ad.md)
-- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+- [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
In this article, you'll learn how to configure F5's BIG-IP Access Policy Manager
Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including: -- Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+- Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
- Full single sign-on (SSO) between Azure AD and BIG-IP published services - Identities and access are managed from a single control plane, the [Azure portal](https://azure.microsoft.com/features/azure-portal/)
For more information, see the F5 BIG-IP [Session Variables reference](https://te
* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
-* [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+* [What is Conditional Access?](../conditional-access/overview.md)
-* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Header Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with sin
Configuring BIG-IP published applications with Azure AD provides many benefits, including: -- Improved Zero trust governance through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+- Improved Zero trust governance through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
- Full Single sign-on (SSO) between Azure AD and BIG-IP published services.
There are many methods to configure BIG-IP for this scenario, including two temp
## Adding F5 BIG-IP from the Azure AD gallery Setting up a SAML federation trust between BIG-IP APM and Azure AD is one of the first steps in implementing SHA. It establishes the integration required for BIG-IP to hand off pre-authentication and [conditional
-access](/azure/active-directory/conditional-access/overview) to Azure AD, before granting access to the published service.
+access](../conditional-access/overview.md) to Azure AD, before granting access to the published service.
1. Sign in to the Azure AD portal using an account with application administrative rights.
For more information refer to these articles:
- [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless) -- [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+- [What is Conditional Access?](../conditional-access/overview.md)
- [Microsoft Zero Trust framework to enable remote
- work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+ work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
In this article, learn to secure headers based applications with Azure Active Di
Integrating a BIG-IP with Azure AD provides many benefits, including:
- * [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+ * [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application and Azure AD as the SAML IdP.
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
In this tutorial, you'll learn to implement Secure Hybrid Access (SHA) with sing
Enabling BIG-IP published services for Azure Active Directory (Azure AD) SSO provides many benefits, including:
-* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* Improved Zero Trust governance through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services.
For help with diagnosing KCD-related problems, see the F5 BIG-IP deployment guid
* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
-* [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+* [What is Conditional Access?](../conditional-access/overview.md)
-* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
In this article, learn to secure Kerberos-based applications with Azure Active D
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application and Azure AD as the SAML IdP.
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables.
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
In this article, learn to secure header & LDAP based applications using Azure Ac
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application and Azure AD as the SAML IdP.
The following command can also be used from the BIG-IP bash shell to validate th
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=partners,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
In this article, learn to secure Oracle Enterprise Business Suite (EBS) using Az
Integrating a BIG-IP with Azure AD provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
The following command from a bash shell validates the APM service account used f
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
In this article, learn to secure Oracle JD Edwards (JDE) using Azure Active Dire
Integrating a BIG-IP with Azure AD provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
There are many methods to configure BIG-IP for this scenario, including two temp
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
This first step creates a tenant app registration that will be used to authorize the **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for published application, and Azure AD as the SAML IdP.
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory F5 Big Ip Oracle Peoplesoft Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md
In this article, learn to secure Oracle PeopleSoft (PeopleSoft) using Azure Acti
Integrating a BIG-IP with Azure AD provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
-To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
Being legacy, the application lacks modern protocols to support a direct integra
Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.

> [!NOTE]
-> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](/azure/active-directory/app-proxy/application-proxy).
+> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md).
## Scenario architecture
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from session variables
-See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD)
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
* Full SSO between Azure AD and BIG-IP published services
With the Easy Button, admins no longer go back and forth between Azure AD and a
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](/azure/active-directory/develop/quickstart-register-app)
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform.](../develop/quickstart-register-app.md)
The Easy Button client must also be registered in Azure AD, before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application, and Azure AD as the SAML IdP.
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Previously updated : 03/04/2022 Last updated : 03/08/2022 zone_pivot_groups: identity-mi-methods
In some environments, administrators choose to limit who can manage user-assigne
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**.
1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to manage.
-1. Select **Azure role assignments**, and then select **Add role assignment**.
-1. In the **Add role assignment** pane, configure the following values, and then select **Save**:
- - **Role**: The role to assign.
- - **Assign access to**: The resource to assign the user-assigned managed identity.
- - **Select**: The member to assign access.
-
- ![Screenshot that shows the user-assigned managed identity IAM.](media/how-manage-user-assigned-managed-identities/assign-role-screenshot-02.png)
+1. Select **Access control (IAM)**.
+1. Choose **Add role assignment**.
+
+ ![Screenshot that shows the user-assigned managed identity access control screen](media/how-manage-user-assigned-managed-identities/role-assign.png)
+
+1. In the **Add role assignment** pane, choose the role to assign and choose **Next**.
+1. Choose who should have the role assigned.
>[!NOTE]
>You can find information on assigning roles to managed identities in [Assign a managed identity access to a resource by using the Azure portal](../../role-based-access-control/role-assignments-portal-managed-identity.md)
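The portal steps above can also be approximated in Az PowerShell. A hedged sketch — the resource names, user, and role are placeholders, and it assumes the Az.ManagedServiceIdentity and Az.Resources modules with an authenticated session:

```powershell
# Placeholders throughout; assumes Connect-AzAccount has been run and the
# Az.ManagedServiceIdentity and Az.Resources modules are installed.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" -Name "myIdentity"

# Grant a user the built-in Managed Identity Operator role, scoped to this identity only
New-AzRoleAssignment -SignInName "alice@contoso.com" `
    -RoleDefinitionName "Managed Identity Operator" `
    -Scope $identity.Id
```

Scoping the assignment to `$identity.Id` limits the grant to that single identity resource rather than the whole resource group or subscription.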
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md
This workbook supports multiple filters:
## Best practices
-- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
-
-- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-with-conditional-access)** - To enable users to securely remediate their accounts when they are high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.
-
+- For guidance on blocking legacy authentication in your environment, see [Block legacy authentication to Azure AD with conditional access](../conditional-access/block-legacy-authentication.md).
+- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](https://docs.microsoft.com/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
+- Some clients can use either legacy or modern authentication, depending on client configuration. If the Azure AD logs show "modern mobile/desktop client" or "browser" for a client, it is using modern authentication. If they show a specific client or protocol name, such as "Exchange ActiveSync", the client is using legacy authentication to connect to Azure AD. The client types in Conditional Access and the Azure AD reporting page in the Azure portal distinguish modern authentication clients from legacy authentication clients for you, and only legacy authentication is captured in this workbook.
## Next steps
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md
Previously updated : 08/06/2021 Last updated : 03/17/2022
To use PowerShell commands to do the following:
You must have the following module installed:

-- [AzureAD](https://www.powershellgallery.com/packages/AzureAD) version 2.0.2.137 or later
+- [AzureAD](https://www.powershellgallery.com/packages/AzureAD) (current version)
#### Check AzureAD version
You should see output similar to the following:
```powershell
Version   Name     Repository   Description
-------   ----     ----------   -----------
-2.0.2.137 AzureAD PSGallery Azure Active Directory V2 General Availability M...
+2.0.2.140 AzureAD PSGallery Azure Active Directory V2 General Availability M...
```

#### Install AzureAD
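If the module isn't present, the standard PowerShellGet cmdlets can fetch it. A minimal sketch:

```powershell
# Check what is currently published in the PowerShell Gallery
Find-Module -Name AzureAD

# Install for the current user and load it into the session
Install-Module -Name AzureAD -Scope CurrentUser
Import-Module -Name AzureAD
```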
To use AzureAD, follow these steps to make sure it is imported into the current
```powershell
ModuleType Version   Name     ExportedCommands
---------- -------   ----     ----------------
- Binary 2.0.2.137 AzureAD {Add-AzureADApplicationOwner, Add-AzureADDeviceRegisteredO...
+ Binary 2.0.2.140 AzureAD {Add-AzureADApplicationOwner, Add-AzureADDeviceRegisteredO...
```

## AzureADPreview module
To use PowerShell commands to do the following:
You must have the following module installed:

-- [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview) version 2.0.2.138 or later
+- [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview) (current version)
#### Check AzureADPreview version
You should see output similar to the following:
```powershell
Version   Name             Repository   Description
-------   ----             ----------   -----------
-2.0.2.138 AzureADPreview PSGallery Azure Active Directory V2 Preview Module. ...
+2.0.2.149 AzureADPreview PSGallery Azure Active Directory V2 Preview Module. ...
```

#### Install AzureADPreview
To use AzureADPreview, follow these steps to make sure it is imported into the c
```powershell
ModuleType Version   Name             ExportedCommands
---------- -------   ----             ----------------
- Binary 2.0.2.138 AzureADPreview {Add-AzureADAdministrativeUnitMember, Add-AzureADApplicati...
+ Binary 2.0.2.149 AzureADPreview {Add-AzureADAdministrativeUnitMember, Add-AzureADApplicati...
```

## Graph Explorer
active-directory Arcgisenterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arcgisenterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ArcGIS Enterprise | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ArcGIS Enterprise'
description: Learn how to configure single sign-on between Azure Active Directory and ArcGIS Enterprise.
Previously updated : 02/11/2021 Last updated : 03/16/2022
-# Tutorial: Azure Active Directory integration with ArcGIS Enterprise
+# Tutorial: Azure AD SSO integration with ArcGIS Enterprise
In this tutorial, you'll learn how to integrate ArcGIS Enterprise with Azure Active Directory (Azure AD). When you integrate ArcGIS Enterprise with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps, if you wish to configure the application in **IDP** Initiated mode:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
`<EXTERNAL_DNS_NAME>.portal`

b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<EXTERNAL_DNS_NAME>/portal/sharing/rest/oauth2/saml/signin2`
+ `https://<EXTERNAL_DNS_NAME>/portal/sharing/rest/oauth2/saml/signin`
c. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
active-directory Cloudtamer Io Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudtamer-io-tutorial.md
Previously updated : 11/11/2021 Last updated : 03/16/2022
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Kion supports **SP and IDP** initiated SSO.
+* Kion supports **IDP** initiated SSO.
* Kion supports **Just In Time** user provisioning.
-## Adding Kion (formerly cloudtamer.io) from the gallery
+## Add Kion (formerly cloudtamer.io) from the gallery
To configure the integration of Kion into Azure AD, you need to add Kion from the gallery to your list of managed SaaS apps.
To configure the integration of Kion into Azure AD, you need to add Kion from th
1. In the **Add from the gallery** section, type **Kion** in the search box.
1. Select **Kion** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for Kion (formerly cloudtamer.io)

Configure and test Azure AD SSO with Kion using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Kion.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Kion** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, paste the **SERVICE PROVIDER ISSUER (ENTITY ID)** from Kion into this box.

b. In the **Reply URL** text box, paste the **SERVICE PROVIDER ACS URL** from Kion into this box.
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, paste the **SERVICE PROVIDER ACS URL** from Kion into this box.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.

   ![The Certificate download link](common/metadataxml.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Click **Create IDMS**.

### Create Kion test user

In this section, a user called Britta Simon is created in Kion. Kion supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Kion, a new one is created after authentication.

## Test SSO
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Kion Sign on URL where you can initiate the login flow.
-
-* Go to Kion Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
+In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Kion for which you set up the SSO
+* Click on Test this application in Azure portal and you should be automatically signed in to the Kion for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Kion tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Kion for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Kion tile in the My Apps, you should be automatically signed in to the Kion for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Group assertions
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with DocuSign | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with DocuSign'
description: Learn how to configure single sign-on (SSO) between Azure Active Directory and DocuSign.
Previously updated : 03/26/2021 Last updated : 03/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with DocuSign
+# Tutorial: Azure AD SSO integration with DocuSign
In this tutorial, you'll learn how to integrate DocuSign with Microsoft Azure Active Directory (Azure AD). When you integrate DocuSign with Azure AD, you can:
In this tutorial, you'll configure and test Azure AD SSO in a test environment t
* DocuSign supports [automatic user provisioning](./docusign-provisioning-tutorial.md).
-## Adding DocuSign from the gallery
+## Add DocuSign from the gallery
To configure the integration of DocuSign into Azure AD, you must add DocuSign from the gallery to your list of managed SaaS apps:
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
1. In the **Add from the gallery** section, type **DocuSign** in the search box.
1. Select **DocuSign** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for DocuSign

Configure and test Azure AD SSO with DocuSign by using a test user named **B.Simon**. For SSO to work, you must establish a link relationship between an Azure AD user and the corresponding user in DocuSign.
To enable Azure AD SSO in the Azure portal, follow these steps:
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. In the **Basic SAML Configuration** section, follow these steps:
-
- a. In the **Sign on URL** textbox, enter a URL using the following pattern:
-
- `https://<subdomain>.docusign.com/organizations/<OrganizationID>/saml2/login/sp/<IDPID>`
+1. In the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** textbox, enter a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** textbox, type a URL using the following pattern:
`https://<subdomain>.docusign.com/organizations/<OrganizationID>/saml2`
- c. In the **Reply URL** textbox, enter anyone of the following URL patterns:
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
| Reply URL |
|-|
To enable Azure AD SSO in the Azure portal, follow these steps:
| QA Instance :|
| `https://<SUBDOMAIN>.docusign.com/organizations/saml2` |
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+
+ `https://<subdomain>.docusign.com/organizations/<OrganizationID>/saml2/login/sp/<IDPID>`
+ > [!NOTE]
- > These bracketed values are placeholders. Replace them with the values in the actual sign-on URL, Identifier and Reply URL. These details are explained in the "View SAML 2.0 Endpoints" section later in this tutorial.
+ > These bracketed values are placeholders. Replace them with the values in the actual Identifier, Reply URL and Sign on URL. These details are explained in the "View SAML 2.0 Endpoints" section later in this tutorial.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)**. Select **Download** to download the certificate and save it on your computer.
In this section, you'll grant B.Simon access to DocuSign so that this user can u
d. In the **Identity Provider Logout URL** box, paste the value of **Logout URL**, which you copied from Azure portal.
- e. Select **Sign AuthN request**.
-
- f. For **Send AuthN request by**, select **POST**.
+ e. For **Send AuthN request by**, select **POST**.
- g. For **Send logout request by**, select **GET**.
+ f. For **Send logout request by**, select **GET**.
- h. In the **Custom Attribute Mapping** section, select **ADD NEW MAPPING**.
+ g. In the **Custom Attribute Mapping** section, select **ADD NEW MAPPING**.
![Custom Attribute Mapping UI][62]
- i. Choose the field you want to map to the Azure AD claim. In this example, the **emailaddress** claim is mapped with the value of `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`. That's the default claim name from Azure AD for the email claim. Select **SAVE**.
+ h. Choose the field you want to map to the Azure AD claim. In this example, the **emailaddress** claim is mapped with the value of `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`. That's the default claim name from Azure AD for the email claim. Select **SAVE**.
![Custom Attribute Mapping fields][57]

> [!NOTE]
> Use the appropriate **User identifier** to map the user from Azure AD to the DocuSign user. Select the proper field, and enter the appropriate value based on your organization settings.
- j. In the **Identity Provider Certificates** section, select **ADD CERTIFICATE**, upload the certificate you downloaded from Azure AD portal, and select **SAVE**.
+ i. In the **Identity Provider Certificates** section, select **ADD CERTIFICATE**, upload the certificate you downloaded from Azure AD portal, and select **SAVE**.
![Identity Provider Certificates/Add Certificate][58]
- k. In the **Identity Providers** section, select **ACTIONS**, and then select **Endpoints**.
+ j. In the **Identity Providers** section, select **ACTIONS**, and then select **Endpoints**.
![Identity Providers/Endpoints][59]
- l. In the **View SAML 2.0 Endpoints** section of the DocuSign admin portal, follow these steps:
+ k. In the **View SAML 2.0 Endpoints** section of the DocuSign admin portal, follow these steps:
![View SAML 2.0 Endpoints][60]
In this section, a user named B.Simon is created in DocuSign. DocuSign supports
In this section, you test your Azure AD single sign-on configuration with following options.
-1. Click on **Test this application** in Azure portal. This will redirect to DocuSign Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to DocuSign Sign-on URL where you can initiate the login flow.
-2. Go to DocuSign Sign-on URL directly and initiate the login flow from there.
+* Go to DocuSign Sign-on URL directly and initiate the login flow from there.
-3. You can use Microsoft My Apps. When you click the DocuSign tile in the My Apps, you should be automatically signed in to the DocuSign for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the DocuSign tile in the My Apps, you should be automatically signed in to the DocuSign for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
Once you configure DocuSign you can enforce Session control, which protects exfi
[53]: ./media/docusign-tutorial/tutorial-docusign-23.png
[54]: ./media/docusign-tutorial/tutorial-docusign-19.png
[55]: ./media/docusign-tutorial/tutorial-docusign-20.png
-[56]: ./media/docusign-tutorial/tutorial-docusign-24.png
+[56]: ./media/docusign-tutorial/request.png
[57]: ./media/docusign-tutorial/tutorial-docusign-25.png
[58]: ./media/docusign-tutorial/tutorial-docusign-26.png
[59]: ./media/docusign-tutorial/tutorial-docusign-27.png
active-directory Intacct Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intacct-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Sage Intacct'
+ Title: 'Tutorial: Azure AD SSO integration with Sage Intacct'
description: Learn how to configure single sign-on between Azure Active Directory and Sage Intacct.
Previously updated : 01/05/2022 Last updated : 03/16/2022
-# Tutorial: Integrate Sage Intacct with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Sage Intacct
In this tutorial, you'll learn how to integrate Sage Intacct with Azure Active Directory (Azure AD). When you integrate Sage Intacct with Azure AD, you can:
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Sage Intacct supports **IDP** initiated SSO.
-## Adding Sage Intacct from the gallery
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Sage Intacct from the gallery
To configure the integration of Sage Intacct into Azure AD, you need to add Sage Intacct from the gallery to your list of managed SaaS apps.
To configure the integration of Sage Intacct into Azure AD, you need to add Sage
Configure and test Azure AD SSO with Sage Intacct using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sage Intacct.
-To configure and test Azure AD SSO with Sage Intacct, complete the following steps:
+To configure and test Azure AD SSO with Sage Intacct, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
To configure and test Azure AD SSO with Sage Intacct, complete the following ste
1. **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of the user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
- In the **Reply URL** text box, add the following URLs:
+ In the **Reply URL** text box, type one of the following URLs:
| Reply URL |
| - |
| `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.) |
- | `https://www.p-02.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www.p-03.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www.p-04.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www.p-05.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p02.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p03.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p04.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p05.intacct.com/ia/acct/sso_response.phtml` |
|

1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
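As a quick sanity check against the corrected reply URL table above, you could compare a candidate URL to the documented endpoints. This is an illustrative Python sketch, not part of the tutorial; the helper name is hypothetical.

```python
# Documented Sage Intacct reply URLs, taken from the table above.
VALID_REPLY_URLS = {
    "https://www.intacct.com/ia/acct/sso_response.phtml",      # default
    "https://www-p02.intacct.com/ia/acct/sso_response.phtml",
    "https://www-p03.intacct.com/ia/acct/sso_response.phtml",
    "https://www-p04.intacct.com/ia/acct/sso_response.phtml",
    "https://www-p05.intacct.com/ia/acct/sso_response.phtml",
}

def is_valid_intacct_reply_url(url: str) -> bool:
    """Return True only for an exact match of a documented reply URL."""
    return url.strip() in VALID_REPLY_URLS
```

Note that the older `www.p-02.intacct.com` form (removed in the diff above) would fail this check, which is the point of the correction.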
When SSO is enabled for your company, you can individually require users to use
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the Sage Intacct for which you set up the SSO
+* Click on Test this application in Azure portal and you should be automatically signed in to the Sage Intacct for which you set up the SSO.
* You can use Microsoft My Apps. When you click the Sage Intacct tile in the My Apps, you should be automatically signed in to the Sage Intacct for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).

## Next steps
-Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Netskope Cloud Security Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-cloud-security-tutorial.md
Title: "Tutorial: Azure Active Directory single sign-on (SSO) integration with Netskope Administrator Console | Microsoft Docs"
+ Title: 'Tutorial: Azure AD SSO integration with Netskope Administrator Console'
description: Learn how to configure single sign-on between Azure Active Directory and Netskope Administrator Console.
Previously updated : 04/02/2021 Last updated : 03/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Netskope Administrator Console
+# Tutorial: Azure AD SSO integration with Netskope Administrator Console
In this tutorial, you'll learn how to integrate Netskope Administrator Console with Azure Active Directory (Azure AD). When you integrate Netskope Administrator Console with Azure AD, you can:
-- Control in Azure AD who has access to Netskope Administrator Console.
-- Enable your users to be automatically signed-in to Netskope Administrator Console with their Azure AD accounts.
-- Manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to Netskope Administrator Console.
+* Enable your users to be automatically signed-in to Netskope Administrator Console with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To get started, you need the following items:
-- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- Netskope Administrator Console single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Netskope Administrator Console single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-- Netskope Administrator Console supports **SP and IDP** initiated SSO.
+* Netskope Administrator Console supports **SP and IDP** initiated SSO.
+
+* Netskope Administrator Console supports just-in-time user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
In the **Reply URL** text box, type a URL using the following pattern: `https://<tenant_host_name>/saml/acs`
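Filling in the documented pattern is mechanical; as a hedged illustration (the tenant host name below is a made-up example, and the helper name is hypothetical):

```python
def netskope_acs_url(tenant_host_name: str) -> str:
    """Build the Reply URL (SAML Assertion Consumer Service URL) from the
    pattern stated above: https://<tenant_host_name>/saml/acs"""
    return f"https://{tenant_host_name}/saml/acs"

# Example with a hypothetical tenant host name:
print(netskope_acs_url("example-tenant.example.com"))
```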
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Netskope Administrator Console test user
-1. Open a new tab in your browser, and sign in to your Netskope Administrator Console company site as an administrator.
-
-1. Click on the **Settings** tab from the left navigation pane.
-
- ![Screenshot shows Settings selected.](./media/netskope-cloud-security-tutorial/configure-settings.png)
-
-1. Click **Active Platform** tab.
-
- ![Screenshot shows Active Platform selected from Settings.](./media/netskope-cloud-security-tutorial/user-1.png)
-
-1. Click **Users** tab.
-
- ![Screenshot shows Users selected from Active Platform.](./media/netskope-cloud-security-tutorial/add-user.png)
-
-1. Click **ADD USERS**.
-
- ![Screenshot shows the Users dialog box where you can select ADD USERS.](./media/netskope-cloud-security-tutorial/user-add.png)
-
-1. Enter the email address of the user you want to add and click **ADD**.
-
- ![Screenshot shows Add Users where you can enter a list of users.](./media/netskope-cloud-security-tutorial/add-user-popup.png)
+In this section, a user called B.Simon is created in Netskope Administrator Console. Netskope Administrator Console supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Netskope Administrator Console, a new one is created after authentication.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration with the following options.
+In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:

-- Click on **Test this application** in Azure portal. This will redirect to Netskope Administrator Console Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Netskope Administrator Console Sign on URL where you can initiate the login flow.
-- Go to Netskope Administrator Console Sign-on URL directly and initiate the login flow from there.
+* Go to Netskope Administrator Console Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:

-- Click on **Test this application** in Azure portal and you should be automatically signed in to the Netskope Administrator Console for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Netskope Administrator Console for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Netskope Administrator Console tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Netskope Administrator Console for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Netskope Administrator Console tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Netskope Administrator Console for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Netskope Administrator Console you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Netskope Administrator Console you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
# Meeting identity requirements of Memorandum 22-09 with Azure Active Directory
+Executive order [14028, Improving the Nation's Cyber Security](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity), directs federal agencies on advancing security measures that dramatically reduce the risk of successful cyber attacks against the federal government's digital infrastructure. On January 26, 2022, the [Office of Management and Budget (OMB)](https://www.whitehouse.gov/omb/) released the Federal Zero Trust Strategy [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf) in support of EO 14028.
+ This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal Government's Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo." The release of Memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies; it also provides regulatory guidance in supporting Federal Cybersecurity and Data Privacy Laws. The Memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
For each of the five phishing-resistant MFA types previously mentioned, you use
| - | - |
| Azure Linux VM| Enable the [Linux VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-linux.md) |
| Azure Windows VM| Enable the [Windows VM for Azure AD sign-in](../devices/howto-vm-sign-in-azure-ad-windows.md) |
-| Azure Virtual Desktop| Enable [Azure virtual desktop for Azure AD sign-in](https://docs.microsoft.com/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join) |
+| Azure Virtual Desktop| Enable [Azure virtual desktop for Azure AD sign-in](/azure/architecture/example-scenario/wvd/azure-virtual-desktop-azure-active-directory-join) |
| VMs hosted on-prem or in other clouds| Enable [Azure Arc](../../azure-arc/overview.md) on the VM then enable Azure AD sign-in. (Currently in private preview for Linux. Support for Windows VMs hosted in these environments is on our roadmap.) |
| Non-Microsoft virtual desktop solutions| Integrate 3rd party virtual desktop solution as an app in Azure AD |
The following articles are a part of this documentation set:
Additional Zero Trust Documentation
-[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
+[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Stor
We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
-Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](https://aka.ms/aa_lowusagerec_learnmore).
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](/azure/advisor/advisor-cost-recommendations#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.

We have observed that you have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk. Note that if you decide to delete the disk, recovery is not possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.
-Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
+Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](../virtual-machines/disks-find-unattached-portal.md).
## MariaDB
Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size un
Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput exceeding 1000 Request Units per second (RU/s). Because Azure Cosmos DB's free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s will be billed at the regular pricing. As a result, we anticipate that you will get charged for the throughput currently provisioned on your Azure Cosmos DB account.
-Learn more about [Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](/azure/cosmos-db/understand-your-bill#azure-free-tier).
+Learn more about [Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](../cosmos-db/understand-your-bill.md#azure-free-tier).
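The overage arithmetic described above is simple; a minimal sketch, assuming the 1000 RU/s free-tier allowance stated in the text (the function name is illustrative):

```python
FREE_TIER_RU_S = 1000  # first 1000 RU/s across the account are covered by free tier

def billable_ru_s(total_provisioned_ru_s: int) -> int:
    """RU/s billed at regular pricing once the free-tier allowance is exhausted."""
    return max(0, total_provisioned_ru_s - FREE_TIER_RU_S)

print(billable_ru_s(1400))  # 400 RU/s beyond the allowance would be billed
```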
### Consider taking action on your idle Azure Cosmos DB containers
Learn more about [Cosmos DB account - CosmosDBIdleContainers (Consider taking ac
Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
-Learn more about [Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/provision-throughput-autoscale).
+Learn more about [Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](../cosmos-db/provision-throughput-autoscale.md).
### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container

Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%.
-Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/how-to-choose-offer).
+Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](../cosmos-db/how-to-choose-offer.md).
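The 66%/10% rule of thumb stated in this recommendation can be expressed directly; a hedged sketch (utilization expressed as a fraction of max RU/s, function name illustrative):

```python
def prefer_manual_throughput(avg_utilization_of_max_rus: float) -> bool:
    """Apply the rule above: manual throughput is more cost-effective when
    average utilization of max throughput (RU/s) is > 66% or <= 10%.
    Pass utilization as a fraction, e.g. 0.40 for 40%."""
    return avg_utilization_of_max_rus > 0.66 or avg_utilization_of_max_rus <= 0.10

print(prefer_manual_throughput(0.40))  # mid-range utilization favors autoscale
```

In the middle band, where usage is bursty, autoscale's scale-down behavior is what saves money.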
## Data Explorer
Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutos
This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found either empty or with no activity. The recommended action is to validate and consider deleting the resources.
-Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](https://aka.ms/adxemptycluster).
+Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](/azure/data-explorer/azure-advisor#azure-data-explorer-unused-cluster).
### Right-size Data Explorer resources for optimal cost

One or more of these were detected: Low data capacity, CPU utilization, or memory utilization. The recommended action to improve the performance is to scale down and/or scale in the resource to the recommended configuration shown.
-Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](https://aka.ms/adxskusize).
+Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](/azure/data-explorer/azure-advisor#correctly-size-azure-data-explorer-clusters-to-optimize-cost).
### Reduce Data Explorer table cache policy to optimize costs

Reducing the table cache policy will free up Data Explorer cluster nodes with low CPU utilization, memory, and a high cache size configuration.
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](https://aka.ms/adxcachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](/azure/data-explorer/kusto/management/cachepolicy).
### Unused Data Explorer resources with data

This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found containing data but with no activity. The recommended action is to validate and consider stopping the unused resources.
-Learn more about [Data explorer resource - StopUnusedClustersWithData (Unused Data Explorer resources with data)](https://aka.ms/adxunusedcluster).
+Learn more about [Data explorer resource - StopUnusedClustersWithData (Unused Data Explorer resources with data)](/azure/data-explorer/azure-advisor#azure-data-explorer-clusters-containing-data-with-low-activity).
### Cleanup unused storage in Data Explorer resources

Over time, internal extents merge operations can accumulate redundant and unused storage artifacts that remain beyond the data retention period. While this unreferenced data doesn't negatively impact the performance, it can lead to more storage use and larger costs than necessary. This recommendation surfaces Data Explorer resources that have unused storage artifacts. The recommended action is to run the cleanup command to detect and delete unused storage artifacts and reduce cost. Note that data recoverability will be reset to the cleanup time and will not be available on data that was created before running the cleanup.
-Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](https://aka.ms/adxcleanextentcontainers).
+Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](/azure/data-explorer/kusto/management/clean-extent-containers).
### Enable optimized autoscale for Data Explorer resources

Looks like your resource could have automatically scaled to reduce costs (based on the usage patterns, cache utilization, ingestion utilization, and CPU). To optimize costs and performance, we recommend enabling optimized autoscale. To make sure you don't exceed your planned budget, add a maximum instance count when you enable this.
-Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
+Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](/azure/data-explorer/manage-cluster-horizontal-scaling#optimized-autoscale).
## Network
Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete
For SQL/HANA DBs in Azure VMs being backed up to Azure, using daily differential with weekly full backup is often more cost-effective than daily full backups. For HANA, Azure Backup also supports incremental backup, which is even more cost-effective.
-Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization).
+Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](/azure/backup/sap-hana-faq-backup-azure-vm#policy).
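To see why the differential policy is usually cheaper, a back-of-the-envelope sketch with made-up sizes (the 500 GB full-backup size and 25 GB/day change rate below are assumptions for illustration only, not figures from the recommendation):

```python
# Compare data moved over one week: daily fulls vs weekly full + daily differentials.
full_backup_gb = 500   # assumed size of one full backup
daily_change_gb = 25   # assumed data changed per day

daily_full_week = 7 * full_backup_gb
# One full, then six differentials that each grow by the daily change rate.
weekly_full_plus_diff = full_backup_gb + sum(daily_change_gb * d for d in range(1, 7))

print(daily_full_week, weekly_full_plus_diff)  # 3500 vs 1025 GB moved
```

Under these assumed numbers the differential policy moves roughly a third of the data, which is where the cost saving comes from.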
## Storage
Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserv
We analyzed your Cosmos DB usage pattern over last 30 days and calculated the reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings even more.
-Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs

We analyzed your SQL PaaS usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for your SQL PaaS deployments and save over your SQL PaaS compute costs. SQL license is charged separately and is not discounted by the reservation. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider App Service stamp fee reserved instance to save over your on-demand costs

We analyzed your App Service isolated environment stamp fees usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for the isolated environment stamp fee and save over your Pay-as-you-go costs. Note that reserved instance only applies to the stamp fee and not to the App Service instances. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions based on usage pattern over last 30 days.
-Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs

We analyzed your Azure Database for MariaDB usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MariaDB hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Database for MySQL reserved instance to save over your pay-as-you-go costs

We analyzed your MySQL Database usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MySQL hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs

We analyzed your Database for PostgreSQL usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgresSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Cache for Redis reserved instance to save over your pay-as-you-go costs

We analyzed your Cache for Redis usage pattern over last 30 days and calculated reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cache for Redis hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs

We analyzed your Azure Synapse Analytics usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### (Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs

We analyzed your Azure Blob and Datalake storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved instance applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### (Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs

We analyzed your Azure Data Explorer usage pattern over last 30 days and recommend reserved capacity purchase that maximizes your savings. With reserved capacity you can pre-purchase Data Explorer hourly usage and get savings over your on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and the last 30 days' usage pattern. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - DataExplorerReservedCapacity ((Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - DataExplorerReservedCapacity ((Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure Dedicated Host reserved instance to save over your on-demand costs

We analyzed your Azure Dedicated Host usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Data Factory reserved instance to save over your on-demand costs

We analyzed your Data Factory usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure Data Explorer reserved instance to save over your on-demand costs

We analyzed your Azure Data Explorer usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure Files reserved instance to save over your on-demand costs

We analyzed your Azure Files usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure VMware Solution reserved instance to save over your on-demand costs

We analyzed your Azure VMware Solution usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### (Preview) Consider Databricks reserved capacity to save over your on-demand costs

We analyzed your Databricks usage over last 30 days and calculated reserved capacity purchase that would maximize your savings. With reserved capacity you can pre-purchase hourly usage and save over your current on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - DataBricksReservedCapacity ((Preview) Consider Databricks reserved capacity to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - DataBricksReservedCapacity ((Preview) Consider Databricks reserved capacity to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider NetApp Storage reserved instance to save over your on-demand costs

We analyzed your NetApp Storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Azure Managed Disk reserved instance to save over your on-demand costs

We analyzed your Azure Managed Disk usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider Red Hat reserved instance to save over your on-demand costs

We analyzed your Red Hat usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - RedHatReservedCapacity (Consider Red Hat reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - RedHatReservedCapacity (Consider Red Hat reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider RedHat Osa reserved instance to save over your on-demand costs

We analyzed your RedHat Osa usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat Osa reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat Osa reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider SapHana reserved instance to save over your on-demand costs

We analyzed your SapHana usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider SuseLinux reserved instance to save over your on-demand costs

We analyzed your SuseLinux usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
### Consider VMware Cloud Simple reserved instance

We analyzed your VMware Cloud Simple usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
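The reserved-instance recommendations above all rest on the same arithmetic: compare what the last 30 days of usage cost at the pay-as-you-go rate against what the same hours would cost at the reserved rate. The sketch below illustrates that comparison only; the rates and hours are made-up example numbers, not Azure pricing, and this is not Advisor's actual recommendation algorithm.

```python
# Illustrative sketch of the savings comparison behind a reserved-instance
# recommendation. All rates and usage figures are hypothetical examples,
# not Azure pricing.

def estimate_monthly_savings(on_demand_rate, reserved_rate, hours_used):
    """Difference between pay-as-you-go cost and reserved (pre-purchased) cost."""
    pay_as_you_go = on_demand_rate * hours_used
    reserved = reserved_rate * hours_used
    return pay_as_you_go - reserved

# Example: 730 hours/month at a $0.20/hr on-demand rate vs. $0.12/hr reserved.
savings = estimate_monthly_savings(0.20, 0.12, 730)
print(f"Estimated monthly savings: ${savings:.2f}")
```

A real recommendation also factors in shared scope across subscriptions and the reservation term (1-year vs. 3-year pricing), which this sketch omits.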
### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance
Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephe
Auto-pause releases and shuts down unused compute resources after a set period of inactivity
-Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance).
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](/dotnet/api/microsoft.azure.management.synapse.models.autopauseproperties).
### Consider enabling autoscale feature on Spark compute.

Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature.
-Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](../synapse-analytics/spark/apache-spark-autoscale.md).
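The autoscale behavior described above — size the pool for the current load, but never below the configured minimum or above the maximum — can be sketched as a simple clamped scaling decision. The threshold of tasks per node is a made-up illustration, not the actual Synapse Autoscale policy.

```python
# Minimal sketch of an autoscale decision: pick a node count that covers
# the pending work, clamped to the pool's configured min/max. The
# tasks_per_node figure is hypothetical, not Synapse's real heuristic.

def autoscale(current_nodes, pending_tasks, min_nodes, max_nodes,
              tasks_per_node=4):
    """Return a new node count sized for the pending work, within limits."""
    # Nodes needed so every pending task gets a slot (ceiling division).
    needed = -(-pending_tasks // tasks_per_node)
    return max(min_nodes, min(max_nodes, needed))

print(autoscale(3, 40, min_nodes=3, max_nodes=10))  # high load -> 10 (scale up)
print(autoscale(8, 2, min_nodes=3, max_nodes=10))   # idle -> 3 (scale down)
```

The real feature monitors CPU and memory pressure rather than a task count, but the clamping between the user-chosen minimum and maximum works the same way.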
## Next steps
-Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
+Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
You can get these recommendations on the **Operational Excellence** tab of the A
We have identified API calls from an outdated Azure Spring Cloud SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](/azure/spring-cloud).
+Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](../spring-cloud/index.yml).
### Update Azure Spring Cloud API Version

We have identified API calls from an outdated Azure Spring Cloud API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
-Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](/azure/spring-cloud).
+Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](../spring-cloud/index.yml).
## Automation
Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azur
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
-Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
+Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](../azure-functions/start-stop-vms/overview.md).
## Batch
Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v
Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes.
-Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](https://aka.ms/batch_oldpool_learnmore).
+Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](../batch/best-practices.md#pool-lifetime-and-billing).
### Delete and recreate your pool to remove a deprecated internal component

Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
-Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore).
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](/azure/batch/best-practices#pool-lifetime-and-billing).
### Upgrade to the latest API version to ensure your Batch account remains operational.

In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational.
-Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore).
+Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](/rest/api/batchservice/batch-api-status#rest-api-deprecation-status-and-upgrade-instructions).
### Delete and recreate your pool using a VM size that will soon be retired
Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your po
Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
-Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](/azure/batch/batch-pool-vm-sizes#supported-vm-images).
## Cognitive Service
Learn more about [Batch account - EolImage (Recreate your pool with a new image)
We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
+Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](../applied-ai-services/immersive-reader/index.yml).
## Compute
Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade
If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources.
-Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
+Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](../azure-resource-manager/management/azure-subscription-service-limits.md).
### Add Azure Monitor to your virtual machine (VM) labeled as production
Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interf
Desired state for Accelerated Networking is set to 'true' for one or more interfaces on this VM, but actual state for accelerated networking is not enabled.
-Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](/azure/virtual-network/create-vm-accelerated-networking-cli).
+Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md).
### Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.
Learn more about [Virtual machine - GetCitrixVFRevokeError (Upgrade Citrix load
This cluster's service principal is expired and the cluster will not be healthy until the service principal is updated
-Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](/azure/aks/update-credentials).
+Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](../aks/update-credentials.md).
### Monitoring addon workspace is deleted

Monitoring addon workspace is deleted. Correct the issues to set up the monitoring addon.
-Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](https://aka.ms/aks-disable-monitoring-addon).
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](/azure/azure-monitor/containers/container-insights-optout#azure-cli).
### Deprecated Kubernetes API in 1.16 is found
Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Depr
This cluster has not enabled AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster
-Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler).
+Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](../aks/cluster-autoscaler.md).
### The AKS node pool subnet is full

Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node and all the pods for the node at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
-Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](/azure/aks/use-multiple-node-pools#add-a-node-pool-with-a-unique-subnet-preview).
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet-preview).
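The reservation rule described above is easy to quantify: with Azure CNI each node consumes one IP for itself plus one per pod it can host, all reserved up front, while an Azure subnet loses five addresses to platform reservations. The node count and max-pods figure below are example numbers chosen for illustration.

```python
# Sketch of why a node-pool subnet fills up under Azure CNI: IPs for the
# node and all of its potential pods are reserved at provisioning time.

def ips_required(node_count, max_pods_per_node):
    """IP addresses Azure CNI reserves for a node pool (1 per node + 1 per pod)."""
    return node_count * (1 + max_pods_per_node)

def subnet_capacity(prefix_length):
    """Usable addresses in an Azure subnet (Azure reserves 5 per subnet)."""
    return 2 ** (32 - prefix_length) - 5

# Example: a /24 subnet cannot host 10 nodes at 30 pods per node.
print(ips_required(10, 30), "needed vs.", subnet_capacity(24), "available")
```

This is why the recommendation suggests adding a node pool with its own, larger subnet rather than trying to squeeze more nodes into the existing one.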
### Disable the Application Routing Addon
Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the
This cluster is not using ephemeral OS disks which can provide lower read/write latency, along with faster node scaling and cluster upgrades
-Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](/azure/aks/cluster-configuration#ephemeral-os).
+Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/cluster-configuration.md#ephemeral-os).
### Use Uptime SLA

This cluster has not enabled Uptime SLA, and it is limited to an SLO of 99.5%.
-Learn more about [Kubernetes service - UseUptimeSLA (Use Uptime SLA)](/azure/aks/uptime-sla).
+Learn more about [Kubernetes service - UseUptimeSLA (Use Uptime SLA)](../aks/uptime-sla.md).
### Deprecated Kubernetes API in 1.22 has been found
Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start
We have determined that you do not have a validation environment enabled in the current subscription. When creating your host pools, you have selected "No" for "Validation environment" in the properties tab. Having at least one host pool with a validation environment enabled ensures business continuity for Windows Virtual Desktop service deployments through early detection of potential issues.
-Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](/azure/virtual-desktop/create-validation-host-pool).
+Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](../virtual-desktop/create-validation-host-pool.md).
### Not enough production environments enabled

We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you should have at least one, but never more than half of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you will best be able to utilize the benefits of the multistage deployments that Windows Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting.
-Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](/azure/virtual-desktop/create-host-pools-powershell).
+Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md).
## Cosmos DB
We noticed that your Azure Cosmos collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
-Learn more about [Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](/azure/cosmos-db/attachments#migrating-attachments-to-azure-blob-storage).
+Learn more about [Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage).
### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup
Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective, as a single copy of your data is retained.
-Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](/azure/cosmos-db/continuous-backup-restore-introduction).
+Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
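As an illustrative sketch (account and resource-group names are placeholders), the migration to continuous backup can be started from the Azure CLI:

```shell
# Migrate a Cosmos DB account from periodic to continuous backup.
# Note: the migration is one-way and may take some time to complete.
az cosmosdb update \
  --resource-group myResourceGroup \
  --name mycosmosaccount \
  --backup-policy-type Continuous
```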
## Insights
We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
-Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair).
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
### Log alert rule was disabled
The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
-Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair).
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
## Key Vault
Create a periodic HSM backup to prevent data loss and have the ability to recover the HSM in case of a disaster.
-Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](/azure/key-vault/managed-hsm/best-practices#backup).
+Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](../key-vault/managed-hsm/best-practices.md#backup).
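One way to script such a backup, sketched with placeholder names and assuming a SAS token with write access to the target blob container, is the Azure CLI `az keyvault backup start` command:

```shell
# Back up a managed HSM to a storage container (names and token are placeholders).
# $SAS_TOKEN must grant write access to the container.
az keyvault backup start \
  --hsm-name ContosoMHSM \
  --storage-account-name mystorageaccount \
  --blob-container-name mhsmbackups \
  --storage-container-SAS-token "$SAS_TOKEN"
```

Running this on a schedule (for example, from an automation job) gives the periodic backup the recommendation asks for.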
## Data Explorer
Reduce the table cache policy to match your usage patterns (query lookback period).
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](/azure/data-explorer/kusto/management/cachepolicy).
## Networking
We've detected that one or more of your Application Gateways have been misconfigured to obtain their listener certificates from Key Vault, which may result in operational issues. You should fix this misconfiguration immediately to avoid operational issues for your Application Gateway.
-Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror).
+Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](../application-gateway/application-gateway-key-vault-common-errors.md).
### Application Gateway does not have enough capacity to scale out
We've detected that your Application Gateway subnet does not have enough capacity to allow scaling out during high-traffic conditions, which can cause downtime.
-Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](https://aka.ms/application-gateway-faq).
+Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](../application-gateway/application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway).
### Enable Traffic Analytics to view insights into traffic patterns across Azure resources
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic Analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow. With Traffic Analytics, you can view top talkers across Azure and non-Azure deployments, investigate open ports, protocols, and malicious flows in your environment, and optimize your network deployment for performance. You can process flow logs at 10-minute and 60-minute processing intervals, giving you faster analytics on your traffic.
-Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore).
+Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](../network-watcher/traffic-analytics.md).
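A minimal sketch of enabling NSG flow logs with traffic analytics via the Azure CLI (all resource names below are placeholders):

```shell
# Create an NSG flow log with traffic analytics enabled (illustrative names).
# The storage account receives the raw flow logs; the Log Analytics workspace
# receives the traffic-analytics results.
az network watcher flow-log create \
  --location eastus \
  --name myFlowLog \
  --resource-group myResourceGroup \
  --nsg myNSG \
  --storage-account myStorageAccount \
  --traffic-analytics true \
  --workspace myLogAnalyticsWorkspace
```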
## SQL Virtual Machine
Full mode installs the SQL IaaS Agent on the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation; no restart is required.
-Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management?tabs=azure-powershell).
+Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md?tabs=azure-powershell).
## Storage
A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
-Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](/azure/storage/blobs/storage-performance-checklist#what-to-do-when-approaching-a-scalability-target).
### Update to newer releases of the Storage Java v12 SDK for better reliability.
Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releas
Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
-Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
+Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md).
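The slot workflow above can be sketched with the Azure CLI (app and resource-group names are placeholders):

```shell
# Create a staging slot, deploy to it, then swap it into production.
az webapp deployment slot create \
  --name myapp --resource-group myResourceGroup --slot staging

# ...deploy your code to the "staging" slot and verify it...

az webapp deployment slot swap \
  --name myapp --resource-group myResourceGroup \
  --slot staging --target-slot production
```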
### Enforce 'Add or replace a tag on resources' using Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. It does not modify tags on resource groups.
-Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](/azure/governance/policy/overview).
+Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](../governance/policy/overview.md).
### Enforce 'Allowed locations' using Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to restrict the locations your organization can specify when deploying resources. Use it to enforce your geo-compliance requirements.
-Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](/azure/governance/policy/overview).
+Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](../governance/policy/overview.md).
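As a hedged sketch, the built-in 'Allowed locations' policy can be assigned at subscription scope with the Azure CLI; the location list below is illustrative:

```shell
# Assign the built-in "Allowed locations" policy definition
# (the GUID is the well-known built-in definition ID; locations are examples).
az policy assignment create \
  --name allowed-locations \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{"listOfAllowedLocations":{"value":["eastus","westeurope"]}}'
```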
### Enforce 'Audit VMs that do not use managed disks' using Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy audits VMs that do not use managed disks.
-Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](/azure/governance/policy/overview).
+Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](../governance/policy/overview.md).
### Enforce 'Allowed virtual machine SKUs' using Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.
-Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](/azure/governance/policy/overview).
+Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](../governance/policy/overview.md).
### Enforce 'Inherit a tag from the resource group' using Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task.
-Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](/azure/governance/policy/overview).
+Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](../governance/policy/overview.md).
### Use Azure Lighthouse to simply and securely manage customer subscriptions at scale
Using Azure Lighthouse improves security and reduces unnecessary access to your customer tenants by enabling more granular permissions for your users. It also allows for greater scalability, as your users can work across multiple customer subscriptions using a single login in your tenant.
-Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](/azure/lighthouse/concepts/cloud-solution-provider).
+Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](../lighthouse/concepts/cloud-solution-provider.md).
## Web
Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
-Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
+Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md).
## Next steps
-Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
+Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestati
Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to the vSphere cluster to increase capacity, delete VMs to reduce consumption, or adjust VM workloads.
-Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](/azure/azure-vmware/concepts-private-clouds-clusters).
+Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](../azure-vmware/concepts-private-clouds-clusters.md).
## Azure Cache for Redis
Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your
Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load, or scale to a different size or SKU with more capacity.
-Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](/azure/azure-cache-for-redis/cache-faq#performance-considerations-around-connections).
### Improve your Cache and application performance when running with high server load
Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest
Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
-Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
+Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](../cognitive-services/language-service/overview.md).
### Upgrade to the latest Cognitive Service Text Analytics SDK version
Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest
Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
-Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](../cognitive-services/language-service/overview.md).
## Communication services
Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](/azure/communication-services/concepts/chat/sdk-features).
+Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](../communication-services/concepts/chat/sdk-features.md).
### Use recommended version of Resource Manager SDK
Resource Manager SDK can be used to provision and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-net).
+Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](../communication-services/quickstarts/create-communication-resource.md?pivots=platform-net&tabs=windows).
### Use recommended version of Identity SDK
Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](/azure/communication-services/concepts/sdk-options).
+Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](../communication-services/concepts/sdk-options.md).
### Use recommended version of SMS SDK
Learn more about [Communication service - UpgradeSmsSdk (Use recommended version
Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](/azure/communication-services/concepts/sdk-options).
+Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](../communication-services/concepts/sdk-options.md).
### Use recommended version of Calling SDK
Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](../communication-services/concepts/voice-video-calling/calling-sdk-features.md).
### Use recommended version of Call Automation SDK
Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](/azure/communication-services/concepts/voice-video-calling/call-automation-apis).
+Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](../communication-services/concepts/voice-video-calling/call-automation-apis.md).
### Use recommended version of Network Traversal SDK
Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](/azure/communication-services/concepts/sdk-options).
+Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](../communication-services/concepts/sdk-options.md).
## Compute
We have determined that your VMs are located in a region different from, or far from, where your users connect using Windows Virtual Desktop (WVD). This may lead to prolonged connection response times and will impact the overall user experience on WVD.
-Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
### Consider increasing the size of your NVA to address persistent high CPU
When NVAs run at high CPU, packets can get dropped, resulting in connection failures or high latency due to network retransmits. Your NVA is running at high CPU, so you should consider increasing the VM size as allowed by the NVA vendor's licensing requirements.
-Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](https://aka.ms/NVAHighCPU).
+Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](../virtual-machines/sizes.md).
### Use Managed disks to prevent disk I/O throttling
Your virtual machine disks belong to a storage account that has reached its scalability target, and is susceptible to I/O throttling. To protect your virtual machine from performance degradation and to simplify storage management, use Managed Disks.
-Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](https://aka.ms/aa_avset_manageddisk_learnmore).
+Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](../virtual-machines/managed-disks-overview.md).
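A minimal sketch of the conversion with the Azure CLI (names are placeholders); the VM is deallocated during the operation:

```shell
# Convert a VM's unmanaged (storage-account-based) disks to managed disks.
# The VM must be deallocated first; the conversion is not reversible.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm convert    --resource-group myResourceGroup --name myVM
az vm start      --resource-group myResourceGroup --name myVM
```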
### Convert Managed Disks from Standard HDD to Premium SSD for performance
Learn more about [Disk - MDHDDtoPremiumForPerformance (Convert Managed Disks fro
We have detected that Accelerated Networking is not enabled on VM resources in your existing deployment that may be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and minimize the latency of your networking workloads in the cloud.
-Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
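A sketch of enabling the feature on an existing NIC with the Azure CLI (names are placeholders); the attached VM must be deallocated for the change to take effect:

```shell
# Enable Accelerated Networking on an existing NIC (illustrative names).
az vm deallocate --resource-group myResourceGroup --name myVM
az network nic update \
  --resource-group myResourceGroup \
  --name myNic \
  --accelerated-networking true
az vm start --resource-group myResourceGroup --name myVM
```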
### Use SSD Disks for your production workloads
Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for
We have identified that your Virtual Machine might be running a version of Barracuda Networks NextGen Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Barracuda Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - BarracudaNVAAccelNet (Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - BarracudaNVAAccelNet (Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
We have identified that your Virtual Machine might be running a version of Arista Networks vEOS Router Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Arista Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - AristaNVAAccelNet (Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - AristaNVAAccelNet (Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
We have identified that your Virtual Machine might be running a version of Cisco Cloud Services Router 1000V Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Cisco for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - CiscoCSRNVAAccelNet (Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - CiscoCSRNVAAccelNet (Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
We have identified that your Virtual Machine might be running a version of Palo Alto Networks VM-Series Firewall Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Palo Alto Networks for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - PaloAltoNVAAccelNet (Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - PaloAltoNVAAccelNet (Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
We have identified that your Virtual Machine might be running a version of NetApp Cloud Volumes ONTAP Image that is running older drivers for Accelerated Networking, which may cause the product to revert to using the standard, synthetic network interface which does not use Accelerated Networking. It is recommended that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact NetApp for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - NetAppNVAAccelNet (NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - NetAppNVAAccelNet (NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Match production Virtual Machines with Production Disk for consistent performance and better latency
Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtu
We have identified that your Virtual Machine might be running a version of a software image that is running older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. It is recommended that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
-Learn more about [Virtual machine - AristaVeosANUpgradeRecommendation (Update to the latest version of your Arista VEOS product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - AristaVeosANUpgradeRecommendation (Update to the latest version of your Arista VEOS product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
We have identified that your Virtual Machine might be running a version of a software image with older drivers for Accelerated Networking (AN). Its synthetic network interface is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - BarracudaNgANUpgradeRecommendation (Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - BarracudaNgANUpgradeRecommendation (Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
We have identified that your Virtual Machine might be running a version of a software image with older drivers for Accelerated Networking (AN). Its synthetic network interface is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - Cisco1000vANUpgradeRecommendation (Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - Cisco1000vANUpgradeRecommendation (Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your F5 BigIp product for Accelerated Networking support.
We have identified that your Virtual Machine might be running a version of a software image with older drivers for Accelerated Networking (AN). Its synthetic network interface is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - F5BigIpANUpgradeRecommendation (Update to the latest version of your F5 BigIp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - F5BigIpANUpgradeRecommendation (Update to the latest version of your F5 BigIp product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your NetApp product for Accelerated Networking support.
We have identified that your Virtual Machine might be running a version of a software image with older drivers for Accelerated Networking (AN). Its synthetic network interface is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - NetAppANUpgradeRecommendation (Update to the latest version of your NetApp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - NetAppANUpgradeRecommendation (Update to the latest version of your NetApp product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
We have identified that your Virtual Machine might be running a version of a software image with older drivers for Accelerated Networking (AN). Its synthetic network interface is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - PaloAltoFWANUpgradeRecommendation (Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - PaloAltoFWANUpgradeRecommendation (Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Update to the latest version of your Check Point product for Accelerated Networking support.
We have identified that your Virtual Machine (VM) might be running a version of a software image with older drivers for Accelerated Networking (AN). Your VM has a synthetic network interface that is either not AN capable or not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for instructions on how to upgrade your Network Virtual Appliance image.
-Learn more about [Virtual machine - CheckPointCGANUpgradeRecommendation (Update to the latest version of your Check Point product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - CheckPointCGANUpgradeRecommendation (Update to the latest version of your Check Point product for Accelerated Networking support.)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
### Accelerated Networking may require stopping and starting the VM
We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage AccelNet.
-Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
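A minimal sketch of the stop/start remediation above, using the Azure CLI. The resource group, VM, and NIC names are placeholders; note that the VM must be deallocated (not just rebooted) for the Accelerated Networking data path to re-engage.

```shell
# Deallocate, then start the VM so Accelerated Networking can re-engage.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM

# Verify that Accelerated Networking is enabled on the NIC.
az network nic show --resource-group myResourceGroup --name myVMNic \
    --query enableAcceleratedNetworking
```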
### NVA may see traffic loss due to hitting the maximum number of flows.
Packet loss has been observed for this Virtual Machine because it is hitting or exceeding the maximum number of flows for a VM instance of this size on Azure.
-Learn more about [Virtual machine - NvaMaxFlowLimit (NVA may see traffic loss due to hitting the maximum number of flows.)](/azure/virtual-network/virtual-machine-network-throughput).
+Learn more about [Virtual machine - NvaMaxFlowLimit (NVA may see traffic loss due to hitting the maximum number of flows.)](../virtual-network/virtual-machine-network-throughput.md).
### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.
Ultra Disk is available in the same region as your database workload. Ultra Disk offers high throughput, high IOPS, and consistently low-latency disk storage for your database workloads. For Oracle databases, you can now use either 4k or 512E sector sizes with Ultra Disk, depending on your Oracle DB version. For SQL Server, using Ultra Disk for your log disk might offer better performance for your database. See the instructions here for migrating your log disk to Ultra Disk.
-Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal).
+Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal).
## Kubernetes
Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of U
An unsupported Kubernetes version was detected. Ensure that your Kubernetes cluster runs a supported version.
-Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
+Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](../aks/supported-kubernetes-versions.md).
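A hedged sketch of remediating the recommendation above with the Azure CLI; cluster and resource group names are placeholders, and the target version should come from the `get-upgrades` output.

```shell
# List the Kubernetes versions this cluster can upgrade to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the cluster; '1.23.5' is a placeholder version taken from the output above.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster \
    --kubernetes-version 1.23.5
```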
## Data Factory
Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (U
A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance.
-Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger).
+Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](../data-factory/how-to-create-event-trigger.md).
## MariaDB
Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB se
Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server experiences a CPU-heavy workload or generates a large number of audit logs over a short period of time. We recommend logging only the events required for your audit purposes, using the following server parameters: audit_log_events, audit_log_exclude_users, and audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
-Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs).
+Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](../mariadb/concepts-audit-logs.md).
## MySQL
Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL conn
Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server experiences a CPU-heavy workload or generates a large number of audit logs over a short period of time. We recommend logging only the events required for your audit purposes, using the following server parameters: audit_log_events, audit_log_exclude_users, and audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
-Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs).
+Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](../mysql/concepts-audit-logs.md).
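The audit-log parameters named above can be set with the Azure CLI. This is a sketch with placeholder server and resource group names; the parameter values shown (`CONNECTION`, `azure_superuser`) are illustrative, not prescriptive.

```shell
# Restrict audit logging to the event classes you actually need.
az mysql server configuration set --resource-group myResourceGroup \
    --server-name mydemoserver --name audit_log_events --value CONNECTION

# Exclude noisy service accounts from the audit log.
az mysql server configuration set --resource-group myResourceGroup \
    --server-name mydemoserver --name audit_log_exclude_users --value azure_superuser
```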
### Improve performance by optimizing MySQL temporary-table sizing
Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by opt
Our internal telemetry indicates that the application connecting to your MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
-Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection).
+Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](../mysql/howto-redirection.md).
## PostgreSQL
Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your Post
Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
-Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
+Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](../postgresql/howto-read-replicas-portal.md).
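A read replica for a single server can be created from the CLI. This is a sketch with assumed names; the replica inherits the source server's configuration.

```shell
# Create a read replica of an existing Azure Database for PostgreSQL server.
az postgres server replica create --name mydemoserver-replica \
    --source-server mydemoserver --resource-group myResourceGroup
```

After creation, point read-heavy parts of your application at the replica's connection string to offload the primary.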
### Increase the PostgreSQL server vCores
Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve
Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
-Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
+Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](../postgresql/howto-optimize-query-stats-collection.md).
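The parameter change recommended above can be applied with the Azure CLI; server and resource group names here are placeholders.

```shell
# Stop pg_stat_statements from tracking statements to reduce overhead.
az postgres server configuration set --resource-group myResourceGroup \
    --server-name mydemoserver --name pg_stat_statements.track --value NONE
```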
### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
-Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
+Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](../postgresql/concepts-query-store.md).
### Increase the storage limit for PostgreSQL Flexible Server
Our internal telemetry shows that the server may be constrained because it is approaching the limits of its currently provisioned storage. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
-Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
+Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](../postgresql/flexible-server/concepts-limits.md).
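A sketch of growing provisioned storage on a flexible server with the Azure CLI (names and the 256 GiB target are placeholders). Note that provisioned storage can be increased but not decreased.

```shell
# Increase the flexible server's provisioned storage (size in GiB).
az postgres flexible-server update --resource-group myResourceGroup \
    --name mydemoserver --storage-size 256
```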
### Optimize logging settings by setting LoggingCollector to -1
Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscale
Our internal telemetry indicates that you have log_statement enabled. For better performance, set it to NONE.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
### Increase the work_mem to avoid excessive disk spilling from sort and hash
Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruW
Our internal telemetry suggests that you can improve storage performance by enabling Intelligent tuning.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](/azure/postgresql/flexible-server/concepts-intelligent-tuning).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](../postgresql/flexible-server/concepts-intelligent-tuning.md).
### Optimize log_duration settings for PostgreSQL on Azure Database
Our internal telemetry indicates that you have log_duration enabled. For better performance, set it to OFF.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
### Optimize log_min_duration settings for PostgreSQL on Azure Database
Our internal telemetry indicates that you have log_min_duration enabled. For better performance, set it to -1.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
Our internal telemetry indicates that you have pg_qs.query_capture_mode enabled. For better performance, set it to NONE.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-query-store-best-practices).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-query-store-best-practices.md).
### Optimize PostgreSQL performance by enabling PGBouncer
Our internal telemetry indicates that you can improve PostgreSQL performance by enabling PgBouncer.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](/azure/postgresql/flexible-server/concepts-pgbouncer).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](../postgresql/flexible-server/concepts-pgbouncer.md).
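The built-in PgBouncer on a flexible server is enabled through a server parameter; this sketch assumes placeholder server and resource group names.

```shell
# Enable the built-in PgBouncer connection pooler on the flexible server.
az postgres flexible-server parameter set --resource-group myResourceGroup \
    --server-name mydemoserver --name pgbouncer.enabled --value true
```

Clients then connect through PgBouncer on port 6432 instead of the regular PostgreSQL port 5432.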
### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
Our internal telemetry indicates that you have log_error_verbosity enabled. For better performance, set it to DEFAULT.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
### Increase the storage limit for Hyperscale (Citus) server group
Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommenda
Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone-resilient HA, predictable performance, maximum control, custom maintenance windows, cost optimization controls, and a simplified developer experience.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](https://aka.ms/sspg-upgrade).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md).
## Desktop Virtualization
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSq
We have determined that your VMs are located in a region different from, or far from, where your users are connecting using Windows Virtual Desktop (WVD). This may lead to prolonged connection response times and will impact the overall user experience on WVD. When creating VMs for your host pools, use a region closer to your users. Close proximity ensures continuing satisfaction with the WVD service and a better overall quality of experience.
-Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
### Change the max session limit for your depth first load balanced host pool to improve VM performance
Depth first load balancing uses the max session limit to determine the maximum number of users who can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions are directed to the same session host, which may cause performance and reliability issues. Therefore, when setting a host pool to use depth first load balancing, also set an appropriate max session limit according to the configuration of your deployment and the capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting.
-Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](/azure/virtual-desktop/configure-host-pool-load-balancing).
+Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](../virtual-desktop/configure-host-pool-load-balancing.md).
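Besides the portal, the setting can be changed from the CLI. This sketch assumes the `desktopvirtualization` CLI extension is installed; the host pool name, resource group, and limit of 8 are placeholders to tune for your VM sizes.

```shell
# Set a depth-first load-balanced host pool's max session limit.
az desktopvirtualization hostpool update --resource-group myResourceGroup \
    --name myHostPool --load-balancer-type DepthFirst --max-session-limit 8
```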
## Cosmos DB
Learn more about [Cosmos DB account - CosmosDBQueryPageSize (Configure your Azur
Your Azure Cosmos DB containers are running ORDER BY queries that incur high Request Unit (RU) charges. We recommend adding composite indexes to your containers' indexing policy to reduce RU consumption and decrease the latency of these queries.
-Learn more about [Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](/azure/cosmos-db/index-policy#composite-indexes).
+Learn more about [Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes).
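For illustration, an indexing policy with a composite index looks like the following. The `/name` and `/age` paths are hypothetical; use the property paths that appear together in your ORDER BY clauses.

```json
{
  "automatic": true,
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "compositeIndexes": [
    [
      { "path": "/name", "order": "ascending" },
      { "path": "/age", "order": "descending" }
    ]
  ]
}
```

This composite index serves queries such as `SELECT * FROM c ORDER BY c.name ASC, c.age DESC` without per-property index lookups.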
### Optimize your Azure Cosmos DB indexing policy to only index what's needed
Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
-Learn more about [Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](/azure/cosmos-db/index-policy).
+Learn more about [Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md).
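A sketch of a selective indexing policy: everything is excluded by default and only the queried paths are indexed. The `/name` and `/address/city` paths are hypothetical examples.

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/name/?" },
    { "path": "/address/city/?" }
  ],
  "excludedPaths": [ { "path": "/*" } ]
}
```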
### Use hierarchical partition keys for optimal data distribution
Learn more about [Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hier
More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache.
-Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](../hdinsight/hbase/apache-hbase-advisor.md).
### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.
You are seeing this advisor recommendation because HDInsight team's system log s
These conditions indicate that your cluster is suffering from high write latencies. This could be due to a heavy workload on your cluster. To improve the performance of your cluster, consider using the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides low write latency and better resiliency for your applications.
-Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](/azure/hdinsight/hbase/apache-hbase-accelerated-writes).
+Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).
### More than 75% of your queries are full scan queries.
More than 75% of the scan queries on your cluster are doing a full region or table scan. Modify your scan queries to avoid full region or table scans.
-Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](../hdinsight/hbase/apache-hbase-advisor.md).
### Check your region counts as you have blocking updates.
Region counts need to be adjusted to avoid updates getting blocked. This might require scaling up the cluster by adding new nodes.
-Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](../hdinsight/hbase/apache-hbase-advisor.md).
### Consider increasing the flusher threads
The flush queue size in your region servers is more than 100, or updates are getting blocked frequently. Tuning of the flush handler is recommended.
-Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](../hdinsight/hbase/apache-hbase-advisor.md).
### Consider increasing your compaction threads for compactions to complete faster
The compaction queue in your region servers is more than 2000, suggesting that more data requires compaction. Slower compactions can impact read performance because there are more files to read. More files without compaction can also increase heap usage related to how files interact with the Azure file system.
-Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](../hdinsight/hbase/apache-hbase-advisor.md).
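Both thread-count recommendations above map to standard HBase settings in `hbase-site.xml`. The values below are illustrative assumptions, not tuned recommendations; adjust them to your workload and RegionServer sizing.

```xml
<!-- Number of memstore flush threads per RegionServer. -->
<property>
  <name>hbase.hstore.flusher.count</name>
  <value>4</value>
</property>
<!-- Threads for small (frequent) compactions. -->
<property>
  <name>hbase.regionserver.thread.compaction.small</name>
  <value>4</value>
</property>
<!-- Threads for large compactions. -->
<property>
  <name>hbase.regionserver.thread.compaction.large</name>
  <value>2</value>
</property>
```

On HDInsight these are typically changed through the Ambari configuration UI, followed by a rolling restart of the RegionServers.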
## Key Vault
Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increas
The new Key Vault client libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes to issues reported by customers and proactively identified through our QA process.<br><br>**PLEASE DISMISS:**<br>If Key Vault is integrated with Azure Storage, Disk, or other Azure services that can use the old Key Vault SDK, and all your current custom applications use .NET SDK 4.0 or above.
-Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
### Update Key Vault SDK Version
New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs
> [!IMPORTANT]
> Please be aware that you can only remediate recommendations for custom applications you have access to. Recommendations can be shown due to integration with other Azure services like Storage and Disk encryption, which are in the process of updating to the new version of our SDK. If you use .NET 4.0 in all your applications, please dismiss.
-Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
## Data Explorer
Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault
This recommendation surfaces all Data Explorer resources that exceed the recommended data capacity (80%). The recommended action to improve performance is to scale to the recommended configuration shown.
-Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
+Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](/azure/data-explorer/azure-advisor#correctly-size-azure-data-explorer-clusters-to-optimize-performance).
### Review table cache policies for Data Explorer tables
This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data.) The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
-Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
+Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](/azure/data-explorer/kusto/management/cachepolicy).
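A quick way to sanity-check this recommendation yourself is to measure what share of recent queries look back beyond the table's hot-cache period. A minimal sketch in Python, with made-up lookback windows:

```python
from datetime import timedelta

def out_of_cache_percentage(query_lookbacks, cache_period):
    """Share of queries whose lookback window reaches past the hot cache."""
    beyond = sum(1 for lb in query_lookbacks if lb > cache_period)
    return 100.0 * beyond / len(query_lookbacks)

# Hypothetical example: five queries against a table with a 31-day cache policy.
lookbacks = [timedelta(days=d) for d in (1, 3, 10, 45, 90)]
print(out_of_cache_percentage(lookbacks, timedelta(days=31)))  # 40.0
```

If the out-of-cache share is high and those long lookbacks are genuinely needed, increasing the cache period is the alternative the recommendation describes.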
### Reduce Data Explorer table cache policy for better performance
Reducing the table cache policy will free up unused data from the resource's cache and improve performance.
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](/azure/data-explorer/kusto/management/cachepolicy).
## Networking
Time to Live (TTL) affects how recent a response a client receives when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](/azure/traffic-manager/traffic-manager-monitoring#endpoint-failover-and-recovery).
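The reasoning behind the 20-second value can be sketched as a back-of-the-envelope failover estimate: a client can keep hitting a failed endpoint until the health probes detect the failure and its cached DNS answer expires. A rough sketch (the probe interval and tolerated-failure count are illustrative assumptions, not Traffic Manager defaults):

```python
def worst_case_failover_seconds(ttl, probe_interval, tolerated_failures):
    """Rough upper bound on how long a client may keep using a failed
    endpoint: the failure must first be detected (probe interval times
    tolerated failed probes, plus one final probe), then cached DNS
    answers must expire (TTL)."""
    detection = probe_interval * (tolerated_failures + 1)
    return detection + ttl

# Lowering TTL from 300s to 20s shrinks the client-visible outage window:
print(worst_case_failover_seconds(ttl=300, probe_interval=30, tolerated_failures=3))  # 420
print(worst_case_failover_seconds(ttl=20, probe_interval=30, tolerated_failures=3))   # 140
```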
### Configure DNS Time to Live to 60 seconds
Time to Live (TTL) affects how recent a response a client receives when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](../traffic-manager/traffic-manager-monitoring.md).
### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs
You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you will experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
-Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](/azure/expressroute/about-upgrade-circuit-bandwidth).
+Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](../expressroute/about-upgrade-circuit-bandwidth.md).
### Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use
Under high traffic load, the VPN gateway may drop packets due to high CPU.
-Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](../virtual-machines/sizes.md).
### Consider increasing the size of your VNet Gateway SKU to address high P2S use
Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consi
Your Application Gateway has been running at high utilization recently, and under heavy load you may experience traffic loss or increased latency. It is important that you scale your Application Gateway according to your traffic, with a bit of a buffer, so that you are prepared for any traffic surges or spikes and minimize the impact on your QoS. The Application Gateway v1 SKU (Standard/WAF) supports manual scaling, and the v2 SKU (Standard_v2/WAF_v2) supports manual scaling and autoscaling. For manual scaling, increase your instance count; if autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as traffic increases.
-Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
+Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](../application-gateway/high-traffic-support.md).
## SQL
We have detected that you are missing table statistics, which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result, which enables it to create a high-quality query plan.
-Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
+Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md).
### Remove data skew to increase query performance
We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
-Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute#how-to-tell-if-your-distribution-column-is-a-good-choice).
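A dedicated SQL pool spreads a hash-distributed table across 60 distributions, so skew can be estimated by comparing the largest distribution against the even-split average. A rough sketch of one such metric, with invented row counts:

```python
def skew_percentage(rows_per_distribution):
    """Deviation of the largest distribution from the even-split ideal;
    a hot distribution forces the whole query to wait on it."""
    avg = sum(rows_per_distribution) / len(rows_per_distribution)
    worst = max(rows_per_distribution)
    return 100.0 * (worst - avg) / worst

# Hypothetical table: 59 distributions hold 1,000 rows, one holds 5,000.
rows = [1_000] * 59 + [5_000]
print(round(skew_percentage(rows), 1))  # 78.7
```

When a metric like this stays well above the 15% threshold, the usual fix is choosing a higher-cardinality, evenly-queried distribution column.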
### Update statistics on table columns
We have detected that you do not have up-to-date table statistics, which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result, which enables it to create a high-quality query plan.
-Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
+Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md).
### Right-size overutilized SQL Databases
Learn more about [SQL database - sqlRightsizePerformance (Right-size overutilize
We have detected that you had a high cache used percentage with a low hit percentage. This indicates high cache eviction, which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
+Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md).
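The signal behind this recommendation can be expressed as a simple predicate: high cache usage combined with a low hit rate suggests the working set no longer fits. A minimal sketch (the 90% thresholds are illustrative assumptions, not the service's actual cut-offs):

```python
def needs_scale_up(cache_used_pct, cache_hit_pct,
                   used_threshold=90.0, hit_threshold=90.0):
    """High cache usage plus a low hit rate suggests the adaptive cache
    is full and evicting data the workload still needs."""
    return cache_used_pct >= used_threshold and cache_hit_pct < hit_threshold

print(needs_scale_up(95.0, 60.0))  # True: cache full, many misses
print(needs_scale_up(40.0, 99.0))  # False: plenty of headroom
```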
### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse
We have detected that you had high tempdb utilization which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-tempdb).
### Convert tables to replicated tables with SQL Data Warehouse
We have detected that you may benefit from using replicated tables. Replicated tables avoid costly data movement operations and significantly increase the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md).
### Split staged files in the storage account to increase load performance
We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
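The 60-way split mirrors the 60 distributions a dedicated SQL pool loads in parallel. A hedged sketch of planning roughly equal chunks (the 4 MiB minimum chunk size is an assumption to avoid over-splitting small files):

```python
import math

def plan_splits(total_bytes, target_splits=60, min_chunk_bytes=4 * 2**20):
    """Plan roughly equal chunk sizes so a 60-way parallel load keeps
    every reader busy; files too small to split stay in one piece."""
    splits = min(target_splits, max(1, total_bytes // min_chunk_bytes))
    return [math.ceil(total_bytes / splits)] * int(splits)

chunks = plan_splits(6 * 2**30)  # a hypothetical 6 GiB compressed extract
print(len(chunks))  # 60
```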
### Increase batch size when loading to maximize load throughput, data compression, and query performance
We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP; a good rule of thumb is a batch size between 100K and 1M rows.
-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](/azure/synapse-analytics/sql/data-loading-best-practices#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
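The batching itself is straightforward; a minimal sketch of grouping rows into 100K-row batches for handing off to a bulk-load utility:

```python
from itertools import islice

def batched(rows, batch_size=100_000):
    """Yield lists of up to batch_size rows; larger batches amortize
    per-call overhead and compress better on the columnstore side."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

sizes = [len(b) for b in batched(range(250_000))]
print(sizes)  # [100000, 100000, 50000]
```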
### Co-locate the storage account within the same region to minimize latency when loading
We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
## Storage
When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
-Learn more about [Storage Account - StorageCallPutBlob (Use \"Put Blob\" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
+Learn more about [Storage Account - StorageCallPutBlob (Use \"Put Blob\" for blobs smaller than 256 MB)](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs).
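The decision rule can be sketched directly: payloads at or under the single-shot limit go out as one "Put Blob" call, while larger blobs need staged "Put Block" calls followed by a final "Put Block List". A minimal sketch:

```python
PUT_BLOB_LIMIT = 256 * 2**20  # single-call limit for REST versions 2016-05-31+

def upload_strategy(blob_size_bytes):
    """Pick a single Put Blob write when the payload fits; otherwise
    fall back to staged Put Block calls plus a final Put Block List."""
    if blob_size_bytes <= PUT_BLOB_LIMIT:
        return "PutBlob"
    return "PutBlock+PutBlockList"

print(upload_strategy(10 * 2**20))  # PutBlob
print(upload_strategy(1 * 2**30))   # PutBlock+PutBlockList
```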
### Upgrade your Storage Client Library to the latest version for better reliability and performance
The latest version of the Storage Client Library/SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimizations in addition to new features that can improve your overall experience using Azure Storage.
-Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](/nuget/consume-packages/install-use-packages-visual-studio).
### Upgrade to Standard SSD Disks for consistent and improved performance
One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
-Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
+Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](../storage/common/storage-account-overview.md).
### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unman
We have observed that there are no snapshots of your file shares. This means you are not protected from accidental file deletion or file corruption. Please enable snapshots to protect your data. One way to do this is through Azure Backup.
-Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](/azure/backup/azure-file-share-backup-overview).
+Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](../backup/azure-file-share-backup-overview.md).
## Synapse
Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group.
-Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
+Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](../synapse-analytics/sql/best-practices-dedicated-sql-pool.md#optimize-clustered-columnstore-tables).
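Segment quality can be approximated as the average fill of compressed rowgroups relative to the roughly 1,048,576-row maximum a rowgroup can hold. A sketch with invented rowgroup sizes:

```python
IDEAL_ROWS_PER_ROWGROUP = 1_048_576  # max rows a compressed rowgroup can hold

def segment_quality(rowgroup_row_counts):
    """Average fill of compressed rowgroups relative to the ideal; low
    values mean many small rowgroups and weaker columnstore compression."""
    avg = sum(rowgroup_row_counts) / len(rowgroup_row_counts)
    return avg / IDEAL_ROWS_PER_ROWGROUP

# Two full rowgroups plus one half-full one.
print(round(segment_quality([1_048_576, 1_048_576, 524_288]), 2))  # 0.83
```

Small tables (under about 60 million rows across 60 distributions) can't fill their rowgroups, which is why the recommendation flags them.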
### Update SynapseManagementClient SDK Version
New SynapseManagementClient is using .NET SDK 4.0 or above.
-Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
+Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](/dotnet/api/microsoft.azure.management.synapse.synapsemanagementclient).
## Web
Your app served more than 1000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
-Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
+Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](../app-service/app-service-configure-premium-tier.md).
### Check outbound connections from your App Service resource
Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](/azure/app-service/app-service-best-practices#socketresources).
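Port exhaustion usually comes from opening a fresh outbound connection (and ephemeral port) per request instead of reusing one. A toy illustration of the pooling idea, not any specific App Service API:

```python
class PooledClient:
    """Toy illustration: reuse one outbound connection per destination
    instead of opening a fresh socket (and ephemeral port) per request."""
    def __init__(self):
        self.open_connections = {}

    def request(self, host):
        # Reuse the existing connection to this host if we have one.
        if host not in self.open_connections:
            self.open_connections[host] = object()  # stand-in for a socket
        return self.open_connections[host]

client = PooledClient()
for _ in range(1_000):
    client.request("api.example.com")
print(len(client.open_connections))  # 1 connection, not 1000
```

In practice this maps to reusing a single `HttpClient`/connection pool across requests rather than constructing one per call.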
## Next steps
-Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
+Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest F
API Management service failed to refresh the hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and the API Management service identity is granted secret read access. Otherwise, the API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
-Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](../api-management/configure-custom-domain.md).
### SSL/TLS renegotiation blocked
An SSL/TLS renegotiation attempt was blocked. Renegotiation happens when a client certificate is requested over an already established connection. When it is blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on the listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
-Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](../api-management/api-management-howto-mutual-certificates-for-clients.md).
## Cache
Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the memory reserved for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting, available in the advanced settings blade.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](/azure/azure-cache-for-redis/cache-configure#memory-policies).
## Compute
Enable backups for your virtual machines and secure your data
-Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your Virtual Machines)](/azure/backup/backup-overview).
+Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your Virtual Machines)](../backup/backup-overview.md).
### Upgrade the standard disks attached to your premium-capable VM to premium disks
We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
-Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](/azure/virtual-machines/disks-types#premium-ssd).
### Enable virtual machine replication to protect your applications from regional outage
Virtual machines that do not have replication enabled to another region are not resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication for all the business-critical virtual machines in the list below so that, in the event of an outage, you can quickly bring up your machines in a remote Azure region.
-Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).
+Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](../site-recovery/azure-to-azure-quickstart.md).
### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost
We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no additional cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale target and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes.
-Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost)](https://aka.ms/md_overview).
+Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost)](../virtual-machines/managed-disks-overview.md).
### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery
Using IP Address based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. It is advised to use Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags to allow connectivity to Azure Site Recovery services for the machines.
-Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags).
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](/azure/site-recovery/azure-to-azure-about-networking#outbound-connectivity-using-service-tags).
### Use Managed Disks to improve data reliability
Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
-Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](../virtual-machines/managed-disks-overview.md).
### Check Point Virtual Machine may lose Network Connectivity.
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Chec
In order for a session host to deploy and register to WVD properly, you need to add a set of URLs to the allowed list in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you will see the minimum list of URLs you need to unblock to have a successful deployment and a functional session host. For specific URL(s) missing from the allowed list, you may also search the Application event log for event 3702.
-Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Windows Virtual Desktop environment)](/azure/virtual-desktop/safe-url-list).
+Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Windows Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
## PostgreSQL
Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. This needs immediate attention. Inactive slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots or start consuming the changes from these slots, so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
-Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](../postgresql/concepts-logical.md).
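The WAL retention cost of an idle slot can be quantified from LSNs: a PostgreSQL LSN such as '16/B374D848' is a 64-bit WAL byte position written as two hex halves, and the gap between the server's current LSN and a slot's restart LSN is WAL the server cannot recycle. A minimal sketch of that arithmetic:

```python
def lsn_to_bytes(lsn):
    """Parse a PostgreSQL LSN like '16/B374D848' into an absolute byte
    position in the WAL stream (high 32 bits / low 32 bits, in hex)."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def retained_wal_bytes(current_lsn, slot_restart_lsn):
    """WAL the server must keep because the slot hasn't consumed it yet;
    this is what pg_wal_lsn_diff() computes server-side."""
    return lsn_to_bytes(current_lsn) - lsn_to_bytes(slot_restart_lsn)

# Example values; on a real server they come from pg_current_wal_lsn()
# and pg_replication_slots.restart_lsn.
print(retained_wal_bytes("16/B374D848", "16/B0000000"))
```

A gap that keeps growing is the WAL buildup this recommendation warns about.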
### Improve PostgreSQL availability by removing inactive logical replication slots
Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. This needs immediate attention. Inactive slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots or start consuming the changes from these slots, so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](../postgresql/flexible-server/concepts-logical.md#monitoring).
## IoT Hub
Some or all of your devices are using an outdated SDK, and we recommend you upgrade to a supported version of the SDK. See the details in the recommendation.
-Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks).
## Cosmos DB
Learn more about [Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent
Your Azure Cosmos DB account is using an old version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+Learn more about [Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
### Upgrade your outdated Azure Cosmos DB SDK to the latest version
Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml).
### Configure your Azure Cosmos DB containers with a partition key
Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Please migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
-Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](/azure/cosmos-db/partitioning-overview#choose-partitionkey).
+Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey).
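Partitioning works by hashing each item's partition key value, so items sharing a value stay together while distinct values spread out. A toy sketch of the idea (SHA-256 modulo the partition count here is purely illustrative, not Cosmos DB's actual hashing scheme):

```python
import hashlib

def physical_partition(partition_key_value, partition_count):
    """Items with the same partition key value always land together;
    hashing the key spreads distinct values across physical partitions."""
    digest = hashlib.sha256(str(partition_key_value).encode()).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# Hypothetical key values spread over 4 partitions.
buckets = {physical_partition(f"user-{i}", 4) for i in range(1_000)}
print(sorted(buckets))  # every partition receives some values
```

This is why a high-cardinality key matters: with few distinct values, some partitions stay empty while others hit their storage quota.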
### Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
Based on their names and configuration, we have detected the Azure Cosmos DB acc
> [!NOTE] > Additional regions will incur extra costs.
-Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](/azure/cosmos-db/high-availability).
+Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md).
### Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account
Learn more about [Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate you
It appears that your key vault's configuration is preventing your Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore.
-Learn more about [Cosmos DB account - CosmosDBKeyVaultWrap (Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](/azure/cosmos-db/how-to-setup-cmk).
+Learn more about [Cosmos DB account - CosmosDBKeyVaultWrap (Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](../cosmos-db/how-to-setup-cmk.md).
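The 24-hour guidance above amounts to a simple timing rule — keep the previous key version enabled for a grace period after rotation. A minimal sketch (the timestamps are hypothetical):

```python
# Sketch of the rotation guidance above: keep the previous key or key version
# enabled for at least 24 hours after rotating, so the service can finish
# switching over. Timestamps below are hypothetical.
from datetime import datetime, timedelta

def safe_to_disable_old_key(rotated_at, now, grace=timedelta(hours=24)):
    """Return True once the grace period since rotation has fully elapsed."""
    return now - rotated_at >= grace

rotated = datetime(2022, 3, 1, 12, 0)
print(safe_to_disable_old_key(rotated, datetime(2022, 3, 1, 20, 0)))  # False: only 8 hours
print(safe_to_disable_old_key(rotated, datetime(2022, 3, 2, 13, 0)))  # True: 25 hours elapsed
```

As the text notes, the Key Vault audit logs showing no further activity from Azure Cosmos DB on the old key version is the stronger signal than the clock alone.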
### Avoid being rate limited from metadata operations
Learn more about [Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the ne
### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
-There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparent to you by the service after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: This is a critical hotfix for the Async Java SDK v2, however it is still highly recommended you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparently on the service side after a large volume of transactions occurs over the lifetime of an Azure Cosmos DB container. Note: this is a critical hotfix for the Async Java SDK v2; however, it is still highly recommended that you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md).
-Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](/azure/cosmos-db/sql/sql-api-sdk-async-java).
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md).
### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. This happens transparently on the service side after a large volume of transactions occurs over the lifetime of an Azure Cosmos DB container.
-Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md).
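The failure mode described in both SDK recommendations — a global LSN exceeding the maximum 32-bit signed integer — can be sketched in a few lines. This is a conceptual illustration of the overflow, not the SDK's internal code:

```python
# Conceptual sketch: why an LSN beyond the 32-bit signed integer maximum
# breaks clients that store it in a Java `int`. Not actual SDK code.
INT32_MAX = 2**31 - 1  # 2147483647, Java's Integer.MAX_VALUE

def to_int32(lsn):
    """Simulate storing a 64-bit LSN in a 32-bit signed int (two's complement wrap)."""
    lsn &= 0xFFFFFFFF
    return lsn - 2**32 if lsn > INT32_MAX else lsn

safe_lsn = INT32_MAX            # still representable
overflowed_lsn = INT32_MAX + 1  # one past the maximum

print(to_int32(safe_lsn))        # 2147483647
print(to_int32(overflowed_lsn))  # -2147483648: the wrap that triggers the errors
```

The fixed SDK versions store the LSN in a wider type, so the value keeps growing past 2^31 - 1 without wrapping.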
## Fluid Relay
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure F
Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
-Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
+Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](../hdinsight/hdinsight-release-notes.md).
### Deprecation of Older Spark Versions in HDInsight Spark cluster Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
-Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
+Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](../hdinsight/spark/migrate-versions.md).
### Enable critical updates to be applied to your HDInsight clusters HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Please take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
-Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
### Drop and recreate your HDInsight clusters to apply critical updates The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters.
-Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
### Drop and recreate your HDInsight clusters to apply critical updates The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Please drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
-Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
### Apply critical updates to your HDInsight clusters The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Please remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
-Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
### Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021
Learn more about [HDInsight cluster - VMDeprecation (Action required: Migrate yo
The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience.
-Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](/azure/azure-arc/servers/manage-agent).
+Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](../azure-arc/servers/manage-agent.md).
## Kubernetes
Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the la
Pod disruption budgets are recommended to improve service high availability.
-Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](https://aka.ms/aks-pdb).
+Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](../aks/operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets).
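A pod disruption budget is declared as a small manifest. This minimal example (the names and labels are placeholders) keeps at least two replicas of a labeled workload available during voluntary disruptions such as node upgrades:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2        # never voluntarily evict below 2 ready pods
  selector:
    matchLabels:
      app: myapp         # placeholder label; match your deployment's pods
```

`maxUnavailable` can be used instead of `minAvailable` when you prefer to cap disruptions relative to the replica count.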
### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes Upgrade to the latest agent version for the best Azure Arc enabled Kubernetes experience, improved stability and new functionality.
-Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
+Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](../azure-arc/kubernetes/agent-upgrade.md).
## Media Services
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
Please be advised that your media account is about to hit its quota limits. Please review current usage of Assets, Content Key Policies and Streaming Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting the quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Please don't create additional Azure Media accounts in an attempt to obtain higher limits.
-Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](../media-services/latest/limits-quotas-constraints-reference.md).
## Networking
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more
The VPN gateway Basic SKU is designed for development or testing scenarios. Please move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
-Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsku).
### Add at least one more endpoint to the profile, preferably in another Azure region Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. It is also recommended that endpoints be in different regions.
-Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
+Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](../traffic-manager/traffic-manager-endpoint-types.md).
### Add an endpoint configured to "All (World)" For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles will avoid traffic black holing and guarantee service remains available.
-Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](https://aka.ms/Rf7vc5).
+Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](../traffic-manager/traffic-manager-manage-endpoints.md).
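The black-holing risk described above can be seen in a small routing sketch. The region codes and endpoint names are hypothetical — Traffic Manager performs this lookup for you:

```python
# Sketch of geographic routing with and without a catch-all "All (World)"
# endpoint. Region codes and endpoint names below are hypothetical.
def route(user_region, endpoints):
    """Return the endpoint for a region, falling back to the catch-all if present."""
    return endpoints.get(user_region) or endpoints.get("WORLD")

without_catch_all = {"GEO-EU": "eu-endpoint", "GEO-NA": "na-endpoint"}
with_catch_all = dict(without_catch_all, WORLD="global-endpoint")

print(route("GEO-AS", without_catch_all))  # None: traffic from an unmapped region is black-holed
print(route("GEO-AS", with_catch_all))     # global-endpoint: the catch-all keeps service available
```

Any region without an explicit mapping falls through to the "All (World)" endpoint, which is exactly the pre-defined failover the recommendation asks for.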
### Add or move one endpoint to another Azure region All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability if all endpoints in one region fail.
-Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
+Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](../traffic-manager/traffic-manager-configure-performance-routing-method.md).
### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency We have detected that your ExpressRoute gateway only has one ExpressRoute circuit associated with it. Connect one or more additional circuits to your gateway to ensure peering location redundancy and resiliency.
-Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](/azure/expressroute/designing-for-high-availability-with-expressroute).
+Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](../expressroute/designing-for-high-availability-with-expressroute.md).
### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit We have detected that your ExpressRoute circuit is not currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute monitor provides end-to-end monitoring capabilities including: Loss, latency, and performance from on-premises to Azure and Azure to on-premises
-Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](/azure/expressroute/how-to-npm).
+Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](../expressroute/how-to-npm.md).
### Avoid hostname override to ensure site integrity Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one used to access the backend can potentially lead to cookies or redirect URLs being broken. Note that this might not be the case in all situations, and certain categories of backends (like REST APIs) are in general less sensitive to this. Please make sure the backend is able to deal with this, or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
-Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](/azure/application-gateway/troubleshoot-app-service-redirection-app-service-url#alternate-solution-use-a-custom-domain-name).
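Why the hostname override breaks cookies can be sketched with a simplified version of the browser's cookie domain-match rule. The domains below are hypothetical:

```python
# Sketch of why overriding the host name can break cookies: a cookie scoped
# to the backend's domain is never sent back by the browser to the frontend
# domain. Domains below are hypothetical; the match rule is simplified.
def cookie_sent_by_browser(cookie_domain, request_host):
    """Simplified domain match: host equals, or is a subdomain of, cookie_domain."""
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

frontend = "www.contoso.com"               # custom domain on Application Gateway
backend = "contoso-app.azurewebsites.net"  # default App Service host name

# The backend sets a cookie for its own (overridden) host name:
print(cookie_sent_by_browser(backend, frontend))   # False: the cookie never returns
# With the custom domain attached to the Web App, the domains align:
print(cookie_sent_by_browser(frontend, frontend))  # True
```

The same mismatch affects absolute redirect URLs the backend generates from its own host name.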
### Use ExpressRoute Global Reach to improve your design for disaster recovery You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments in the event of one circuit losing connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.
-Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](/azure/expressroute/about-upgrade-circuit-bandwidth).
+Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](../expressroute/about-upgrade-circuit-bandwidth.md).
### Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule
Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additi
In active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
-Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
+Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](../vpn-gateway/vpn-gateway-highlyavailable.md).
## Recovery Services
Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Acti
Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it is permanently deleted.
-Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](/azure/backup/backup-azure-security-feature-cloud).
+Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](../backup/backup-azure-security-feature-cloud.md).
### Enable Cross Region Restore for your Recovery Services vault Enable cross-region restore for your geo-redundant vaults.
-Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your recovery Services Vault)](/azure/backup/backup-azure-arm-restore-vms#cross-region-restore).
+Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore).
## Search
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations will stop working when storage quota is exceeded.
-Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](../search/search-limits-quotas-capacity.md).
### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations will stop working when storage quota is exceeded.
-Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](../search/search-limits-quotas-capacity.md).
### You are close to exceeding your available storage quota. Add additional partitions if you need more storage. You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations will no longer work.
-Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
+Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](../search/search-limits-quotas-capacity.md).
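The three search recommendations above share one trigger: storage usage crossing 90% of the tier's quota. A minimal check of that threshold (the 50 MB and 2 GB figures come from the text above; the rest is illustrative):

```python
# Minimal quota-threshold check mirroring the 90% trigger described above.
# The 50 MB (Free) and 2 GB (Basic) figures come from the recommendations;
# treat the rest as illustrative.
TIER_QUOTA_BYTES = {
    "free": 50 * 1024**2,   # 50 MB
    "basic": 2 * 1024**3,   # 2 GB
}

def near_quota(tier, used_bytes, threshold=0.9):
    """Return True when storage usage has crossed the warning threshold."""
    return used_bytes >= threshold * TIER_QUOTA_BYTES[tier]

print(near_quota("free", 48 * 1024**2))  # True: 48 MB of 50 MB is past 90%
print(near_quota("basic", 1 * 1024**3))  # False: 1 GB of 2 GB is only 50%
```

Once the quota itself is exceeded, queries on Standard tiers keep working but indexing operations stop, so acting at the 90% warning avoids a write outage.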
## Storage
Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to pro
We have identified that you are using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that do not have account capacity limit. This migration can be done through the portal in less than 5 minutes.
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](/azure/storage/common/scalability-targets-standard-account#premium-performance-page-blob-storage).
## Web
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Di
Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps, to solve this you could scale out your app.
-Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](/azure/app-service/app-service-best-practices#CPUresources).
### Fix the backup database settings of your App Service resource Your app's backups are consistently failing due to invalid DB configuration, you can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup.).
### Consider scaling up your App Service Plan SKU to avoid memory exhaustion The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
-Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](/azure/app-service/app-service-best-practices#memoryresources).
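The CPU and memory recommendations above use different thresholds and different remedies: sustained CPU above 90% suggests scaling out (more instances), while memory above 85% suggests scaling up (a larger plan SKU). A sketch of that decision (thresholds from the text; the function itself is illustrative):

```python
# Sketch of the two advisor signals described above. The 90% CPU and 85%
# memory thresholds come from the recommendations; the function is illustrative.
def scale_advice(cpu_percent, memory_percent):
    advice = []
    if cpu_percent > 90:
        advice.append("scale out: add instances to the App Service plan")
    if memory_percent > 85:
        advice.append("scale up: move to a plan SKU with more memory")
    return advice or ["no action: utilization within limits"]

print(scale_advice(95, 60))  # CPU-driven: scale out
print(scale_advice(70, 92))  # memory-driven: scale up
```

Scaling out spreads request load across instances; scaling up raises the per-instance memory ceiling, which is why the two signals map to different actions.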
### Scale up your App Service resource to remove the quota limit Your app is part of a shared App Service plan and has met its quota multiple times. After meeting a quota, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
-Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
+Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](../app-service/overview-hosting-plans.md).
### Use deployment slots for your App Service resource You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
+Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](../app-service/deploy-staging-slots.md).
### Fix the backup storage settings of your App Service resource Your app's backups are consistently failing due to invalid storage settings, you can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup.).
### Move your App Service resource to Standard or higher and use deployment slots You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
+Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](../app-service/deploy-staging-slots.md).
### Consider scaling out your App Service Plan to optimize user experience and availability.
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
spec:
``` > [!NOTE]
-> Alternatively you can use [Pod Identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) thought this is in Public Preview. It has a pod (NMI) that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the Azure Instance Metadata Service on each node, redirect them to itself and validates if the pod has access to the identity it's requesting a token for and fetch the token from the Azure AD tenant on behalf of the application.
+> Alternatively you can use [Pod Identity](./use-azure-ad-pod-identity.md), though this is in Public Preview. It has a pod (NMI) that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the Azure Instance Metadata Service on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure AD tenant on behalf of the application.
> ## Secure container access to resources
For Windows Server nodes, regularly perform a node image upgrade operation to sa
[pod-security-contexts]: developer-best-practices-pod-security.md#secure-pod-access-to-resources [aks-ssh]: ssh.md [security-center-aks]: ../defender-for-cloud/defender-for-kubernetes-introduction.md
-[node-image-upgrade]: node-image-upgrade.md
+[node-image-upgrade]: node-image-upgrade.md
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
In this article, you learned how to create an AKS cluster with a Dedicated host,
[aks-support-policies]: support-policies.md [aks-faq]: faq.md [azure-cli-install]: /cli/azure/install-azure-cli
-[dedicated-hosts]: /azure/virtual-machines/dedicated-hosts.md
-[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
+[dedicated-hosts]: ../virtual-machines/dedicated-hosts.md
+[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This example shows how to return a 401 response if the authorization token is in
This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes). -- **Policy sections:** outbound, backend, on-error
+- **Policy sections:** inbound, outbound, backend, on-error
- **Policy scopes:** all scopes ## <a name="set-variable"></a> Set variable
api-management Rp Source Ip Address Change Mar2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar2023.md
Finally, check for any other systems that may impact the communication from the
## More Information
-* [Virtual Network](/azure/virtual-network)
+* [Virtual Network](../../virtual-network/index.yml)
* [API Management VNET Reference](../virtual-network-reference.md) * [Microsoft Q&A](/answers/topics/azure-api-management.html) <!-- Links --> [Configure NSG Rules]: ../api-management-using-with-internal-vnet.md#configure-nsg-rules
-[Virtual Network]: /azure/virtual-network
+[Virtual Network]: ../../virtual-network/index.yml
[Force tunneling traffic]: ../virtual-network-reference.md#force-tunneling-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance [Create, change, or delete a network security group]: /azure/virtual-network/manage-network-security-group
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
ms.assetid: 4cc82439-8791-48a4-9485-de6d8e1d1a08 Previously updated : 01/11/2017 Last updated : 03/15/2022 # How To Control Inbound Traffic to an App Service Environment+
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ ## Overview An App Service Environment can be created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork]. A new virtual network and new subnet can be defined at the time an App Service Environment is created. Instead, an App Service Environment can be created in a pre-existing virtual network and pre-existing subnet. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses). For more information, see [How to Create an ASEv1 from template](app-service-app-service-environment-create-ilb-ase-resourcemanager.md).
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
ms.assetid: 091decb6-b0de-42a1-9f2f-c18d9b2e67df Previously updated : 10/10/2021 Last updated : 03/15/2022 # How To Create an ILB ASEv1 Using Azure Resource Manager Templates
-> [!NOTE]
-> This article is about the App Service Environment v1. There is a newer version of the App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version start with the [Introduction to the App Service Environment](intro.md).
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
> ## Overview
app-service App Service App Service Environment Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md
ms.assetid: 78e6d4f5-da46-4eb5-a632-b5fdc17d2394 Previously updated : 07/11/2017 Last updated : 03/15/2022 # Introduction to App Service Environment v1
-> [!NOTE]
-> This article is about the App Service Environment v1. There is a newer version of the App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version start with the [Introduction to the App Service Environment](overview.md).
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
## Overview
app-service App Service App Service Environment Layered Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md
ms.assetid: 73ce0213-bd3e-4876-b1ed-5ecad4ad5601 Previously updated : 08/30/2016 Last updated : 03/15/2022 # Implementing a Layered Security Architecture with App Service Environments+
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture providing differing levels of network access for each physical application tier. A common desire is to hide API back-ends from general Internet access, and only allow APIs to be called by upstream web apps. [Network security groups (NSGs)][NetworkSecurityGroups] can be used on subnets containing App Service Environments to restrict public access to API applications.
app-service App Service App Service Environment Network Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md
ms.assetid: 13d03a37-1fe2-4e3e-9d57-46dfb330ba52 Previously updated : 10/04/2016 Last updated : 03/15/2022 # Network Architecture Overview of App Service Environments+
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ App Service Environments are always created within a subnet of a [virtual network][virtualnetwork] - apps running in an App Service Environment can communicate with private endpoints located within the same virtual network topology. Since customers may lock down parts of their virtual network infrastructure, it is important to understand the types of network communication flows that occur with an App Service Environment. ## General Network Flow
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
ms.assetid: 34b49178-2595-4d32-9b41-110c96dde6bf Previously updated : 10/14/2016 Last updated : 03/15/2022 # Network configuration details for App Service Environment for Power Apps with Azure ExpressRoute
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection. App Service Environment can be created in these scenarios:
app-service App Service App Service Environment Securely Connecting To Backend Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md
ms.assetid: f82eb283-a6e7-4923-a00b-4b4ccf7c4b5b Previously updated : 10/04/2016 Last updated : 03/15/2022 # Connect securely to back end resources from an App Service environment+
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ Since an App Service Environment is always created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork], outbound connections from an App Service Environment to other backend resources can flow exclusively over the virtual network. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses). For example, there may be a SQL Server running on a cluster of virtual machines with port 1433 locked down. The endpoint may be ACL'd to only allow access from other resources on the same virtual network.
app-service App Service Environment Auto Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md
ms.assetid: c23af2d8-d370-4b1f-9b3e-8782321ddccb Previously updated : 07/11/2017 Last updated : 03/15/2022 # Autoscaling and App Service Environment v1
-> [!NOTE]
-> This article is about the App Service Environment v1. There is a newer version of the App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version start with the [Introduction to the App Service Environment](intro.md).
->
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
Azure App Service environments support *autoscaling*. You can autoscale individual worker pools based on metrics or schedule.
app-service App Service Web Configure An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md
ms.assetid: b5a1da49-4cab-460d-b5d2-edd086ec32f4 Previously updated : 07/11/2017 Last updated : 03/15/2022 # Configuring an App Service Environment v1
-> [!NOTE]
-> This article is about the App Service Environment v1. There is a newer version of the App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version start with the [Introduction to the App Service Environment](intro.md).
->
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
## Overview At a high level, an Azure App Service Environment consists of several major components:
app-service App Service Web Scale A Web App In An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md
ms.assetid: 78eb1e49-4fcd-49e7-b3c7-f1906f0f22e3 Previously updated : 10/17/2016 Last updated : 03/15/2022 # Scaling apps in an App Service Environment v1+
+> [!IMPORTANT]
+> This article is about App Service Environment v1. App Service Environment v1 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ In the Azure App Service there are normally three things you can scale: * pricing plan
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md
Title: Certificates bindings
description: Explain numerous topics related to certificates on an App Service Environment v2. Learn how certificate bindings work on the single-tenanted apps in an ASE. Previously updated : 11/15/2021 Last updated : 03/15/2022 # Certificates and the App Service Environment v2
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ The App Service Environment (ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network (VNet). It can be deployed with an internet accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document. The ASE is a single tenant system. Because it is single tenant, there are some features available only with an ASE that are not available in the multi-tenant App Service.
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md
Title: Create an external ASE
description: Learn how to create an App Service environment with an app in it, or create a standalone (empty) ASE. Previously updated : 06/13/2017 Last updated : 03/15/2022 # Create an External App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
To create an ASE while you create an App Service plan:
![Pricing tier selection][3]
-1. Enter the name for your ASE. This name is used in the addressable name for your apps. If the name of the ASE is _appsvcenvdemo_, the domain name is *.appsvcenvdemo.p.azurewebsites.net*. If you create an app named *mytestapp*, it's addressable at mytestapp.appsvcenvdemo.p.azurewebsites.net. You can't use white space in the name. If you use uppercase characters, the domain name is the total lowercase version of that name.
+1. Enter the name for your ASE. This name is used in the addressable name for your apps. If the name of the ASE is *appsvcenvdemo*, the domain name is *.appsvcenvdemo.p.azurewebsites.net*. If you create an app named *mytestapp*, it's addressable at mytestapp.appsvcenvdemo.p.azurewebsites.net. You can't use white space in the name. If you use uppercase characters, the domain name is the all-lowercase version of that name.
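The naming rule described above can be sketched in a few lines; the helper name and sample values below are hypothetical, for illustration only:

```python
# Hypothetical helper illustrating the naming rule: the ASE name is
# lowercased and an app is addressed at <app>.<asename>.p.azurewebsites.net.
def external_ase_app_url(app_name: str, ase_name: str) -> str:
    if " " in ase_name:
        raise ValueError("ASE names can't contain white space")
    return f"{app_name}.{ase_name.lower()}.p.azurewebsites.net"

print(external_ase_app_url("mytestapp", "AppSvcEnvDemo"))
# mytestapp.appsvcenvdemo.p.azurewebsites.net
```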
![New App Service plan name][4]
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
description: Learn how to create an App Service environment with an internal loa
ms.assetid: 0f4c1fa4-e344-46e7-8d24-a25e247ae138 Previously updated : 09/16/2020 Last updated : 03/15/2022 # Create and use an Internal Load Balancer App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traff
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 01/12/2022 Last updated : 03/15/2022
# Locking down an App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
The App Service Environment (ASE) has many external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network. Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their virtual network.
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
description: Learn how to enable your App Service Environment to work when outbo
ms.assetid: 384cf393-5c63-4ffb-9eb2-bfd990bc7af1 Previously updated : 05/29/2018 Last updated : 03/15/2022 # Configure your App Service Environment with forced tunneling
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
Title: Introduction to ASEv2
description: Learn how Azure App Service Environments v2 help you scale, secure, and optimize your apps in a fully isolated and dedicated environment. Previously updated : 11/15/2021 Last updated : 03/15/2022 # Introduction to App Service Environment v2
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans. There is a newer version of the App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version start with the [Introduction to the App Service Environment](overview.md).
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
## Overview
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Once the new IPs are created, you'll have the new default outbound to the intern
### Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration will not succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration won't succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
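The delegation requirement above can be verified and set with the Azure CLI before running the migration. This is a minimal sketch; `myResourceGroup`, `myVNet`, and `myAseSubnet` are placeholder names:

```shell
# Inspect the current delegations on the App Service Environment's subnet
# (resource names are placeholders).
az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myAseSubnet \
  --query "delegations[].serviceName"

# Set the single delegation that App Service Environment v3 requires.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myAseSubnet \
  --delegations Microsoft.Web/hostingEnvironments
```

These commands require an authenticated Azure CLI session and an existing virtual network; they're shown here as a configuration fragment rather than a runnable script.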
### Migrate to App Service Environment v3
There's no cost to migrate your App Service Environment. You'll stop being charg
If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments. - **What happens to my old App Service Environment?** If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible.
+- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
+  After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, App Service Environment v1/v2 will no longer be available after that date. Migrate to App Service Environment v3 to keep your apps running, or save and back up any resources or data that you need to maintain.
## Next steps
There's no cost to migrate your App Service Environment. You'll stop being charg
> [Migrate App Service Environment v2 to App Service Environment v3](how-to-migrate.md) > [!div class="nextstepaction"]
-> [Migration Alternatives](migration-alternatives.md)
+> [Manually migrate to App Service Environment v3](migration-alternatives.md)
> [!div class="nextstepaction"] > [App Service Environment v3 Networking](networking.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 2/2/2022 Last updated : 3/15/2022 # Migrate to App Service Environment v3
You can distribute traffic between your old and new environment using an [Applic
Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You'll continue to be charged for any resources that haven't been deleted.
+## Frequently asked questions
+
+- **Will I experience downtime during the migration?**
+ Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate or if you can use a different subnet to create your new environment, you won't have downtime. However, if you must use the same subnet, there will be downtime resulting from the time it takes to delete the old environment, create the App Service Environment v3, create the new App Service plans, re-create the apps, and update any resources that need to know about the new IP addresses.
+- **Do I need to change anything about my apps to get them to run on App Service Environment v3?**
+ No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3.
+- **What if my App Service Environment has a custom domain suffix?**
+  App Service Environment v3 doesn't support custom domain suffixes at this time. If you want to continue using this feature, you won't be able to migrate until it's supported.
+- **What if my App Service Environment is zone pinned?**
+ Zone pinning isn't a supported feature on App Service Environment v3. Use [zone redundancy](overview-zone-redundancy.md) instead.
+- **What properties of my App Service Environment will change?**
+  You'll now be on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note that for an internet facing App Service Environment, there was previously a single IP for both inbound and outbound; in App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
+  After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, App Service Environment v1/v2 will no longer be available after that date. Migrate to App Service Environment v3 to keep your apps running, or save and back up any resources or data that you need to maintain.
+ ## Next steps > [!div class="nextstepaction"]
Once your migration and any testing with your new environment is complete, delet
> [!div class="nextstepaction"] > [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md)+
+> [!div class="nextstepaction"]
+> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
Title: Networking considerations
description: Learn about App Service Environment network traffic, and how to set network security groups and user-defined routes. Previously updated : 11/15/2021 Last updated : 03/15/2022 # Networking considerations for App Service Environment
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
+ [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment: - **External:** This type of deployment exposes the hosted apps by using an IP address that is accessible on the internet. For more information, see [Create an external App Service Environment][MakeExternalASE].
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md
description: Learn how to create, publish, and scale apps in an App Service Envi
ms.assetid: a22450c4-9b8b-41d4-9568-c4646f4cf66b Previously updated : 01/26/2022 Last updated : 03/15/2022 # Manage an App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
Front-end resources are the HTTP/HTTPS endpoint for the ASE. With the default fr
## App access
-In an External ASE, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your ASE is named _external-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+In an External ASE, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your ASE is named *external-ase* and you host an app called *contoso* in that ASE, you reach it at these URLs:
- contoso.external-ase.p.azurewebsites.net - contoso.scm.external-ase.p.azurewebsites.net For information about how to create an External ASE, see [Create an App Service Environment][MakeExternalASE].
-In an ILB ASE, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named _ilb-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+In an ILB ASE, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named *ilb-ase* and you host an app called *contoso* in that ASE, you reach it at these URLs:
- contoso.ilb-ase.appserviceenvironment.net - contoso.scm.ilb-ase.appserviceenvironment.net
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md
Title: Availability Zone support for App Service Environment v2
description: Learn how to deploy your App Service Environments so that your apps are zone redundant. Previously updated : 11/15/2021 Last updated : 03/15/2022 # Availability Zone support for App Service Environment v2
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+> [!IMPORTANT]
+> This article is about App Service Environment v2, which is used with Isolated App Service plans. App Service Environment v2 will be retired on 31 August 2024. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
App Service Environment v2 (ASE) can be deployed into Availability Zones (AZ). Customers can deploy an internal load balancer (ILB) ASE into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by the ILB ASE will either be pinned to the specified AZ or deployed in a zone-redundant manner.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
# Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-In this tutorial, you will deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](/azure/postgresql/)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](/azure/app-service/overview#app-service-on-linux)** which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
+In this tutorial, you will deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](./overview.md#app-service-on-linux)**, which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
:::image type="content" border="False" source="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture-240px.png" lightbox="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture.png" alt-text="An architecture diagram showing an App Service with a PostgreSQL database in Azure.":::
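For context, a data-driven app like the one in this tutorial typically reads its database settings from App Service app settings at runtime. The helper below is a minimal sketch that assembles a PostgreSQL connection URL; the setting names (`DBHOST`, `DBNAME`, `DBUSER`, `DBPASS`) are hypothetical examples, not names the tutorial mandates:

```python
import os

def postgres_url(env=os.environ):
    """Assemble a libpq/SQLAlchemy-style connection URL from app settings.

    DBHOST, DBNAME, DBUSER, and DBPASS are example setting names only.
    """
    host = env["DBHOST"]
    name = env["DBNAME"]
    user = env["DBUSER"]
    password = env["DBPASS"]
    # 5432 is the default PostgreSQL port
    return f"postgresql://{user}:{password}@{host}:5432/{name}"
```

In App Service, such values would be supplied as application settings, which the platform exposes to the app as environment variables.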
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
Previously updated : 06/09/2020 Last updated : 03/17/2022
This behavior can occur for one or more of the following reasons:
1. To verify that Application Gateway is healthy and running, go to the **Resource Health** option in the portal and verify that the state is **Healthy**. If you see an **Unhealthy** or **Degraded** state, [contact support](https://azure.microsoft.com/support/options/).
+1. If Internet and private traffic are going through an Azure Firewall hosted in a secured virtual hub (using Azure Virtual WAN Hub):
+
+ a. To ensure the application gateway can send traffic directly to the Internet, configure the following user defined route:
+
+ Address prefix: 0.0.0.0/0<br>
+ Next hop: Internet
+
+ b. To ensure the application gateway can send traffic to the backend pool via an Azure Firewall in the Virtual WAN hub, configure the following user defined route:
+
+ Address Prefix: Backend pool subnet<br>
+ Next hop: Azure Firewall private IP address
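The two user-defined routes above can be summarized as data. In the sketch below, the backend subnet prefix and firewall private IP are hypothetical example values, and `VirtualAppliance` is the next-hop type Azure uses for routes that point at a firewall's private IP:

```python
# Route a: send Internet-bound traffic from the Application Gateway
# subnet directly out to the Internet.
internet_route = {
    "address_prefix": "0.0.0.0/0",
    "next_hop_type": "Internet",
}

# Route b: send backend-pool traffic via the Azure Firewall in the
# Virtual WAN hub. Prefix and IP below are example values only.
firewall_route = {
    "address_prefix": "10.1.1.0/24",        # backend pool subnet (example)
    "next_hop_type": "VirtualAppliance",    # hop via the firewall
    "next_hop_ip_address": "10.0.0.4",      # firewall private IP (example)
}
```

Both routes would be attached to a route table associated with the Application Gateway subnet.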
++ ## Next steps Learn more about [Application Gateway diagnostics and logging](./application-gateway-diagnostics.md).
application-gateway Classic To Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/classic-to-resource-manager.md
Please refer to [Frequently asked questions about classic to Azure Resource Mana
### How do I report an issue?
-Post your issues and questions about migration to our [Microsoft Q&A page](https://aka.ms/AAflal1). We recommend posting all your questions on this forum. If you have a support contract, you're welcome to log a support ticket as well.
+Post your issues and questions about migration to our [Microsoft Q&A page](/answers/topics/azure-virtual-network.html). We recommend posting all your questions on this forum. If you have a support contract, you're welcome to log a support ticket as well.
## Next steps To get started see: [platform-supported migration of IaaS resources from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md)
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
The Form Recognizer Studio provides and orchestrates all the API calls required
1. On the next step in the workflow, choose or create a Form Recognizer resource before you select continue. > [!IMPORTANT]
- > Custom neural models models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](https://aka.ms/fr-neural#l#supported-regions).
+ > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](/azure/applied-ai-services/form-recognizer/concept-custom-neural#l).
:::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
Congratulations you've trained a custom model in the Form Recognizer Studio! You
> [Learn about custom model types](../concept-custom.md) > [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
You need to grant Form Recognizer access to your storage account before it can c
## Learn more about managed identity > [!div class="nextstepaction"]
-> [Access Azure Storage form a web app using managed identities](/azure/app-service/scenario-secure-app-access-storage?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json )
+> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json)
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
# Microsoft Azure Attestation
-Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves, [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves, [Trusted Platform Modules (TPMs)](/windows/security/information-protection/tpm/trusted-platform-module-overview), [Trusted launch for Azure VMs](/azure/virtual-machines/trusted-launch) and [Azure confidential VMs](/azure/confidential-computing/confidential-vm-overview).
+Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves, [Virtualization-based Security](/windows-hardware/design/device-experiences/oem-vbs) (VBS) enclaves, [Trusted Platform Modules (TPMs)](/windows/security/information-protection/tpm/trusted-platform-module-overview), [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md) and [Azure confidential VMs](../confidential-computing/confidential-vm-overview.md).
Attestation is a process for demonstrating that software binaries were properly instantiated on a trusted platform. Remote relying parties can then gain confidence that only such intended software is running on trusted hardware. Azure Attestation is a unified customer-facing service and framework for attestation.
Client applications can be designed to take advantage of TPM attestation by dele
### Azure Confidential VM attestation
-Azure [Confidential VM](/azure/confidential-computing/confidential-vm-overview) (CVM) is based on [AMD processors with SEV-SNP technology](/azure/confidential-computing/virtual-machine-solutions-amd) and aims to improve VM security posture by removing trust in host, hypervisor and Cloud Service Provider (CSP). To achieve this, CVM offers VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](/azure/key-vault/managed-hsm/overview) or [Azure Key Vault](/azure/key-vault/general/basic-concepts). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
+Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-solutions-amd.md) and aims to improve VM security posture by removing trust in host, hypervisor and Cloud Service Provider (CSP). To achieve this, CVM offers VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
## Azure Attestation can run in a TEE
Clusters deployed in two regions will operate independently under normal circums
## Next steps - Learn about [Azure Attestation basic concepts](basic-concepts.md) - [How to author and sign an attestation policy](author-sign-policy.md)-- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-standalone-account.md
When the Automation account is successfully created, several resources are autom
## Manage Automation account keys
-When you create an Automation account, Azure generates two 512-bit automation account access keys for that account. These keys are shared access keys that are used as registration keys for registering [DSC nodes](/azure/automation/automation-dsc-onboarding#use-dsc-metaconfiguration-to-register-hybrid-machines) as well as [Windows](/azure/automation/automation-windows-hrw-install#manual-deployment) and [Linux](/azure/automation/automation-linux-hrw-install#manually-run-powershell-commands) Hybrid runbook workers. These keys are only used while registering DSC nodes and Hybrid workers. Existing machines configured as DSC nodes or hybrid workers won't be affected after rotation of these keys.
+When you create an Automation account, Azure generates two 512-bit automation account access keys for that account. These keys are shared access keys that are used as registration keys for registering [DSC nodes](./automation-dsc-onboarding.md#use-dsc-metaconfiguration-to-register-hybrid-machines) as well as [Windows](./automation-windows-hrw-install.md#manual-deployment) and [Linux](./automation-linux-hrw-install.md#manually-run-powershell-commands) Hybrid runbook workers. These keys are only used while registering DSC nodes and Hybrid workers. Existing machines configured as DSC nodes or hybrid workers won't be affected after rotation of these keys.
### View Automation account keys
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
An Automation Contributor can manage all resources in the Automation account exc
|**Actions** |**Description** | |||
-|[Microsoft.Automation](/azure/role-based-access-control/resource-provider-operations#microsoftautomation)/automationAccounts/* | Create and manage resources of all types.|
+|[Microsoft.Automation](../role-based-access-control/resource-provider-operations.md#microsoftautomation)/automationAccounts/* | Create and manage resources of all types.|
|Microsoft.Authorization/*/read|Read roles and role assignments.| |Microsoft.Resources/deployments/*|Create and manage resource group deployments.| |Microsoft.Resources/subscriptions/resourceGroups/read|Read resource group deployments.|
The following table shows the permissions granted for the role:
|Microsoft.Resources/deployments/* |Create and manage resource group deployments. | |Microsoft.Insights/alertRules/* | Create and manage alert rules. | |Microsoft.Support/* |Create and manage support tickets.|
-|[Microsoft.ResourceHealth](/azure/role-based-access-control/resource-provider-operations#microsoftresourcehealth)/availabilityStatuses/read| Gets the availability statuses for all resources in the specified scope.|
+|[Microsoft.ResourceHealth](../role-based-access-control/resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read| Gets the availability statuses for all resources in the specified scope.|
### Automation Job Operator
The Built-in Reader role for the Automation Account can't use the `API – GET /
To access the `API – GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION`, we recommend that you switch to the built-in roles like Owner, Contributor or Automation Contributor to access the Automation account keys. These roles, by default, will have the *listKeys* permission. As a best practice, we recommend that you create a custom role with limited permissions to access the Automation account keys. For a custom role, you need to add `Microsoft.Automation/automationAccounts/listKeys/action` permission to the role definition.
-[Learn more](/azure/role-based-access-control/custom-roles) about how to create custom role from the Azure portal.
+[Learn more](../role-based-access-control/custom-roles.md) about how to create custom role from the Azure portal.
## Feature setup permissions
The following sections describe the minimum required permissions needed for enab
## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
-You can create [Azure custom roles](/azure/role-based-access-control/custom-roles) in Automation and grant the following permissions to Hybrid Worker Groups and Hybrid Workers:
+You can create [Azure custom roles](../role-based-access-control/custom-roles.md) in Automation and grant the following permissions to Hybrid Worker Groups and Hybrid Workers:
-- [Extension-based Hybrid Runbook Worker](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)-- [Agent-based Windows Hybrid Runbook Worker](/azure/automation/automation-windows-hrw-install#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+- [Extension-based Hybrid Runbook Worker](./extension-based-hybrid-runbook-worker-install.md?tabs=windows#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+- [Agent-based Windows Hybrid Runbook Worker](./automation-windows-hrw-install.md#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+ - [Agent-based Linux Hybrid Runbook Worker](./automation-linux-hrw-install.md#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
## Update Management permissions
When a user assigned to the Automation Operator role on the Runbook scope views
* To find out more about Azure RBAC using PowerShell, see [Add or remove Azure role assignments using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md). * For details of the types of runbooks, see [Azure Automation runbook types](automation-runbook-types.md).
-* To start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
+* To start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
# Best practices for security in Azure Automation This article details the best practices to securely execute the automation jobs.
-[Azure Automation](/azure/automation/overview) provides you the platform to orchestrate frequent, time consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks seamlessly across cloud and hybrid environments.
+[Azure Automation](./overview.md) provides you the platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments.
The platform components of Azure Automation Service are actively secured and hardened. The service goes through robust security and compliance checks. [Azure security benchmark](/security/benchmark/azure/overview) details the best practices and recommendations to help improve the security of workloads, data, and services on Azure. Also see [Azure security baseline for Azure Automation](/security/benchmark/azure/baselines/automation-security-baseline?toc=/azure/automation/TOC.json).
This section guides you in configuring your Automation account securely.
### Permissions
-1. Follow the principle of least privilege to get the work done when granting access to Automation resources. Implement [Automation granular RBAC roles](/azure/automation/automation-role-based-access-control) and avoid assigning broader roles or scopes such as subscription level. When creating the custom roles, only include the permissions users need. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised. For detailed information on role-based access control concepts, see [Azure role-based access control best practices](/azure/role-based-access-control/best-practices).
+1. Follow the principle of least privilege to get the work done when granting access to Automation resources. Implement [Automation granular RBAC roles](./automation-role-based-access-control.md) and avoid assigning broader roles or scopes such as subscription level. When creating the custom roles, only include the permissions users need. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised. For detailed information on role-based access control concepts, see [Azure role-based access control best practices](../role-based-access-control/best-practices.md).
1. Avoid roles that include Actions having a wildcard (_*_) as it implies full access to the Automation resource or a sub-resource, for example _automationaccounts/*/read_. Instead, use specific actions only for the required permission.
-1. Configure [Role based access at a runbook level](/azure/automation/automation-role-based-access-control) if the user doesn't require access to all the runbooks in the Automation account.
+1. Configure [Role based access at a runbook level](./automation-role-based-access-control.md) if the user doesn't require access to all the runbooks in the Automation account.
1. Limit the number of highly privileged roles such as Automation Contributor to reduce the potential for breach by a compromised owner.
-1. Use [Azure AD Privileged Identity Management](/azure/active-directory/roles/security-planning#use-azure-ad-privileged-identity-management) to protect the privileged accounts from malicious cyber-attacks to increase your visibility into their use through reports and alerts.
+1. Use [Azure AD Privileged Identity Management](../active-directory/roles/security-planning.md#use-azure-ad-privileged-identity-management) to protect the privileged accounts from malicious cyber-attacks to increase your visibility into their use through reports and alerts.
### Securing Hybrid Runbook worker role
-1. Install Hybrid workers using the [Hybrid Runbook Worker VM extension](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows), that doesn't have any dependency on the Log Analytics agent. We recommend this platform as it leverages Azure AD based authentication.
- [Hybrid Runbook Worker](/azure/automation/automation-hrw-run-runbooks) feature of Azure Automation allows you to execute runbooks directly on the machine hosting the role in Azure or non-Azure machine to execute Automation jobs in the local environment.
- - Use only high privilege users or [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups) for users responsible for managing operations such as registering or unregistering Hybrid workers and hybrid groups and executing runbooks against Hybrid runbook worker groups.
+1. Install Hybrid workers using the [Hybrid Runbook Worker VM extension](./extension-based-hybrid-runbook-worker-install.md?tabs=windows), which doesn't have any dependency on the Log Analytics agent. We recommend this platform as it leverages Azure AD-based authentication.
+ [Hybrid Runbook Worker](./automation-hrw-run-runbooks.md) feature of Azure Automation allows you to execute runbooks directly on the machine hosting the role in Azure or non-Azure machine to execute Automation jobs in the local environment.
+ - Use only high-privilege users or [Hybrid worker custom roles](./extension-based-hybrid-runbook-worker-install.md?tabs=windows) for users responsible for managing operations such as registering or unregistering Hybrid workers and hybrid groups and executing runbooks against Hybrid runbook worker groups.
 - The same user would also require VM Contributor access on the machine hosting the Hybrid worker role. Since VM Contributor is a high-privilege role, ensure that only a limited set of users has access to manage Hybrid workers, thereby reducing the potential for breach by a compromised owner.
- Follow the [Azure RBAC best practices](/azure/role-based-access-control/best-practices).
+ Follow the [Azure RBAC best practices](../role-based-access-control/best-practices.md).
1. Follow the principle of least privilege and grant only the required permissions to users for runbook execution against a Hybrid worker. Don't provide unrestricted permissions to the machine hosting the hybrid runbook worker role. In case of unrestricted access, a user with VM Contributor rights, or with permissions to run commands against the hybrid worker machine, can use the Automation Account Run As certificate from the hybrid worker machine and could potentially allow a malicious user access as a subscription contributor. This could jeopardize the security of your Azure environment.
- Use [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups) for users responsible to manage Automation runbooks against Hybrid runbook workers and Hybrid runbook worker groups.
+ Use [Hybrid worker custom roles](./extension-based-hybrid-runbook-worker-install.md?tabs=windows) for users responsible to manage Automation runbooks against Hybrid runbook workers and Hybrid runbook worker groups.
-1. [Unregister](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#delete-a-hybrid-runbook-worker) any unused or non-responsive hybrid workers.
+1. [Unregister](./extension-based-hybrid-runbook-worker-install.md?tabs=windows#delete-a-hybrid-runbook-worker) any unused or non-responsive hybrid workers.
### Authentication certificate and identities
-1. For runbook authentication, we recommend that you use [Managed identities](/azure/automation/automation-security-overview#managed-identities) instead of Run As accounts. The Run As accounts are an administrative overhead and we plan to deprecate them. A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure Automation, see [Managed identities for Azure Automation](/azure/automation/automation-security-overview#managed-identities)
+1. For runbook authentication, we recommend that you use [Managed identities](./automation-security-overview.md#managed-identities) instead of Run As accounts. The Run As accounts are an administrative overhead and we plan to deprecate them. A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure Automation, see [Managed identities for Azure Automation](./automation-security-overview.md#managed-identities)
You can authenticate an Automation account using two types of managed identities: - **System-assigned identity** is tied to your application and is deleted if your app is deleted. An app can only have one system-assigned identity. - **User-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
- Follow the [Managed identity best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations#choosing-system-or-user-assigned-managed-identities) for more details.
+ Follow the [Managed identity best practice recommendations](../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#choosing-system-or-user-assigned-managed-identities) for more details.
1. If you use Run As accounts as the authentication mechanism for your runbooks, ensure the following: - Track the service principals in your inventory. Service principals often have elevated permissions. - Delete any unused Run As accounts to minimize your exposed attack surface.
- - [Renew the Run As certificate](/azure/automation/manage-runas-account#cert-renewal) periodically.
- - Follow the RBAC guidelines to limit the permissions assigned to Run As account using this [script](/azure/automation/manage-runas-account#limit-run-as-account-permissions). Do not assign high privilege permissions like Contributor, Owner and so on.
+ - [Renew the Run As certificate](./manage-runas-account.md#cert-renewal) periodically.
+ - Follow the RBAC guidelines to limit the permissions assigned to Run As account using this [script](./manage-runas-account.md#limit-run-as-account-permissions). Do not assign high privilege permissions like Contributor, Owner and so on.
-1. Rotate the [Azure Automation keys](/azure/automation/automation-create-standalone-account?tabs=azureportal#manage-automation-account-keys) periodically. The key regeneration prevents future DSC or hybrid worker node registrations from using previous keys. We recommend to use the [Extension based hybrid workers](/azure/automation/automation-hybrid-runbook-worker) that use Azure AD authentication instead of Automation keys. Azure AD centralizes the control and management of identities and resource credentials.
+1. Rotate the [Azure Automation keys](./automation-create-standalone-account.md?tabs=azureportal#manage-automation-account-keys) periodically. The key regeneration prevents future DSC or hybrid worker node registrations from using previous keys. We recommend using the [extension-based hybrid workers](./automation-hybrid-runbook-worker.md) that use Azure AD authentication instead of Automation keys. Azure AD centralizes the control and management of identities and resource credentials.
### Data security
-1. Secure the assets in Azure Automation including credentials, certificates, connections and encrypted variables. These assets are protected in Azure Automation using multiple levels of encryption. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of Automation assets. These keys must be present in Azure Key Vault for Automation service to be able to access the keys. See [Encryption of secure assets using customer-managed keys](/azure/automation/automation-secure-asset-encryption).
+1. Secure the assets in Azure Automation including credentials, certificates, connections and encrypted variables. These assets are protected in Azure Automation using multiple levels of encryption. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of Automation assets. These keys must be present in Azure Key Vault for Automation service to be able to access the keys. See [Encryption of secure assets using customer-managed keys](./automation-secure-asset-encryption.md).
1. Don't print any credentials or certificate details in the job output. An Automation job operator, who is a low-privilege user, can view this sensitive information.
-1. Maintain a valid [backup of Automation](/azure/automation/automation-managing-data#data-backup) configuration like runbooks and assets ensuring backups are validated and protected to maintain business continuity after an unexpected event.
+1. Maintain a valid [backup of Automation](./automation-managing-data.md#data-backup) configuration, like runbooks and assets, ensuring that backups are validated and protected to maintain business continuity after an unexpected event.
### Network isolation
-1. Use [Azure Private Link](/azure/automation/how-to/private-link-security) to securely connect Hybrid runbook workers to Azure Automation. Azure Private Endpoint is a network interface that connects you privately and securely to a an Azure Automation service powered by Azure Private Link. Private Endpoint uses a private IP address from your Virtual Network (VNet), to effectively bring the Automation service into your VNet.
+1. Use [Azure Private Link](./how-to/private-link-security.md) to securely connect Hybrid runbook workers to Azure Automation. Azure Private Endpoint is a network interface that connects you privately and securely to an Azure Automation service powered by Azure Private Link. Private Endpoint uses a private IP address from your Virtual Network (VNet) to effectively bring the Automation service into your VNet.
If you want to access and manage other services privately through runbooks from an Azure VNet without the need to open an outbound connection to the internet, you can execute runbooks on a Hybrid Worker that is connected to the Azure VNet.

### Policies for Azure Automation
-Review the Azure Policy recommendations for Azure Automation and act as appropriate. See [Azure Automation policies](/azure/automation/policy-reference).
+Review the Azure Policy recommendations for Azure Automation and act as appropriate. See [Azure Automation policies](./policy-reference.md).
## Next steps

* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/automation/automation-role-based-access-control).
-* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](/azure/automation/automation-managing-data).
+* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](./automation-managing-data.md).
* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/automation/automation-secure-asset-encryption).
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookW
- To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/extension-based-hybrid-runbook-worker.md).
-- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](/azure/virtual-machines/extensions/features-windows) and [Azure VM extensions and features for Linux](/azure/virtual-machines/extensions/features-linux).
+- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
-- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions).
+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Automation account and State Configuration availability in Japan West region. Fo
**Type :** New feature
-You can use the new Azure Policy compliance rule to allow creation of jobs, webhooks, and job schedules to run only on Hybrid Worker groups. Read the details in [Use Azure Policy to enforce job execution on Hybrid Runbook Worker](/azure/automation/enforce-job-execution-hybrid-worker?tabs=azure-cli)
+You can use the new Azure Policy compliance rule to allow creation of jobs, webhooks, and job schedules to run only on Hybrid Worker groups. Read the details in [Use Azure Policy to enforce job execution on Hybrid Runbook Worker](./enforce-job-execution-hybrid-worker.md?tabs=azure-cli)
### Update Management availability in East US, France Central, and North Europe regions
Azure Service Management (ASM) REST APIs for Azure Automation will be retired an
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Type:** New change
-To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role would not have access to Automation account keys through the API call - `GET /automationAccounts/agentRegistrationInformation`. Read [here](/azure/automation/automation-role-based-access-control#reader) for more information.
+To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role no longer has access to Automation account keys through the API call `GET /automationAccounts/agentRegistrationInformation`. Read [here](./automation-role-based-access-control.md#reader) for more information.
### Restore deleted Automation Accounts **Type:** New change
-Users can now restore an Automation account deleted within 30 days. Read [here](/azure/automation/delete-account?tabs=azure-portal#restore-a-deleted-automation-account) for more information.
+Users can now restore an Automation account within 30 days of its deletion. Read [here](./delete-account.md?tabs=azure-portal#restore-a-deleted-automation-account) for more information.
## December 2021
Azure Automation now supports [system-assigned managed identities](./automation-
## Next steps
-If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
+If you'd like to contribute to Azure Automation documentation, see the [Docs Contributor Guide](/contribute/).
azure-app-configuration Rest Api Authorization Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-hmac.md
HTTP/1.1 403 Forbidden
```

**Reason:** The access key used to authenticate the request does not provide the required permissions to perform the requested operation.
+
**Solution:** Obtain an access key that provides permission to perform the requested operation and use it to authenticate the request.
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads).
+* A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0 and <= 2.29.0
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A basic understanding of [Kubernetes core concepts](/azure/aks/concepts-clusters-workloads).
+* A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
* [Azure PowerShell version 5.9.0 or later](/powershell/azure/install-az-ps)
Remove-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName Azure
Advance to the next article to learn how to deploy configurations to your connected Kubernetes cluster using GitOps.

> [!div class="nextstepaction"]
-> [Deploy configurations using GitOps](tutorial-use-gitops-connected-cluster.md)
+> [Deploy configurations using GitOps](tutorial-use-gitops-connected-cluster.md)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version |
| - | - | - |
-| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.5.41+](https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html), [4.6.35+](https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html), [4.7.18+](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html) |
+| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.7.18+](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html), [4.9.17+](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.0+](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html) |
| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.19](https://ubuntu.com/kubernetes/docs/1.19/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16) |
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
Your application should connect to `<cachename>.redis.cache.windows.net` on port
A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint.
-For more information, see [Azure services DNS zone configuration](/azure/private-link/private-endpoint-dns).
+For more information, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md).
### Why can't I connect to a private endpoint?
It's only linked to your VNet. Because it's not in your VNet, NSG rules don't ne
## Next steps

- To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md).
-- To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
+- To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Active geo-replication public preview now supports:
### Azure TLS Certificate Change
-Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements. For full details, see [Azure TLS Certificate Changes](/azure/security/fundamentals/tls-certificate-changes).
+Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements. For full details, see [Azure TLS Certificate Changes](../security/fundamentals/tls-certificate-changes.md).
For more information on the effect to Azure Cache for Redis, see [Azure TLS Certificate Change](cache-best-practices-development.md#azure-tls-certificate-change).

## Next steps
-If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
+If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
azure-functions Functions Bindings Event Hubs Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-event-hubs-trigger](../../includes/functions-bindings-event-hubs-trigger.md)]
+
## host.json settings
-The [host.json](functions-host-json.md#eventhub) file contains settings that control Event Hub trigger behavior. See the [host.json settings](functions-bindings-event-hubs.md#hostjson-settings) section for details regarding available settings.
+The [host.json](functions-host-json.md#eventhub) file contains settings that control Event Hubs trigger behavior. See the [host.json settings](functions-bindings-event-hubs.md#hostjson-settings) section for details regarding available settings.
## Next steps
azure-functions Functions Bindings Event Iot Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-iot-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-event-hubs](../../includes/functions-bindings-event-hubs-trigger.md)]
+## Connections
+
+The `connection` property is a reference to environment configuration that specifies the name of an application setting containing the connection string. You can get this connection string by selecting the **Connection Information** button for the [namespace](../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). The connection string must be for an Event Hubs namespace, not the event hub itself.
+
+The connection string must have at least "read" permissions to activate the function.
+
+This connection string should be stored in an application setting with a name matching the value specified by the `connection` property of the binding configuration.
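As an illustrative sketch (the setting name `EventHubConnection` and all values below are hypothetical, not from this article), the binding's `connection` property names an app setting, and that setting holds a namespace-level connection string made of `key=value` pairs:

```python
import os

def read_event_hubs_connection(setting_name: str) -> dict:
    """Resolve the app setting named by the binding's `connection` property
    and split the namespace connection string into its key=value parts."""
    raw = os.environ[setting_name]
    return dict(part.split("=", 1) for part in raw.strip().rstrip(";").split(";"))

# Hypothetical setting, as it might appear under "Values" in local.settings.json:
os.environ["EventHubConnection"] = (
    "Endpoint=sb://mynamespace.servicebus.windows.net/;"
    "SharedAccessKeyName=listen-policy;SharedAccessKey=abc123"
)
conn = read_event_hubs_connection("EventHubConnection")
print(conn["Endpoint"])  # sb://mynamespace.servicebus.windows.net/
```

Note that the endpoint points at the namespace (`sb://<namespace>.servicebus.windows.net/`), not at an individual event hub, matching the requirement above.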
+
+> [!NOTE]
+> Identity-based connections aren't supported by the IoT Hub trigger. If you need to use managed identities end-to-end, you can instead use IoT Hub Routing to send data to an event hub you control. In that way, outbound routing can be authenticated with managed identity, and the event can be read [from that event hub using managed identity](functions-bindings-event-hubs-trigger.md?tabs=extensionv5#identity-based-connections).
+
## host.json properties

The [host.json](functions-host-json.md#eventhub) file contains settings that control Event Hubs trigger behavior. See the [host.json settings](functions-bindings-event-iot.md#hostjson-settings) section for details regarding available settings.
azure-government Documentation Government Overview Nerc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-nerc.md
Both Azure and Azure Government are audited extensively by independent third-par
Microsoft maintains all three of these compliance audits for both Azure and Azure Government and makes the respective audit documents available to registered entities.
-NERC CIP compliance requirements can be addressed during a NERC audit and in line with the [shared responsibility model](../security/fundamentals/shared-responsibility.md) for cloud computing. We believe that Azure and Azure Government cloud services can be used in a manner compliant with NERC CIP standards. Microsoft is prepared to assist you with NERC audits by furnishing Azure or Azure Government audit documentation and control implementation details in support of NERC audit requirements. Moreover, Microsoft has developed a **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)**, which is a technical how-to guidance to help you address NERC CIP compliance requirements for your Azure assets. The document contains pre-filled [Reliability Standard Audit Worksheets](https://www.nerc.com/pa/comp/Pages/Reliability-Standard-Audit-Worksheets-(RSAWs).aspx) (RSAWs) narratives that help explain how Azure controls address NERC CIP requirements. It also contains guidance to help you use Azure services to implement controls that you own. The guide is available for download to existing Azure or Azure Government customers under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+NERC CIP compliance requirements can be addressed during a NERC audit and in line with the [shared responsibility model](../security/fundamentals/shared-responsibility.md) for cloud computing. We believe that Azure and Azure Government cloud services can be used in a manner compliant with NERC CIP standards. Microsoft is prepared to assist you with NERC audits by furnishing Azure or Azure Government audit documentation and control implementation details in support of NERC audit requirements. Moreover, Microsoft has developed a **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)**, which is a technical how-to guidance to help you address NERC CIP compliance requirements for your Azure assets. The document contains pre-filled [Reliability Standard Audit Worksheets](https://www.nerc.com/pa/comp/Pages/Reliability-Standard-Audit-Worksheets-(RSAWs).aspx) (RSAWs) narratives that help explain how Azure controls address NERC CIP requirements. It also contains guidance to help you use Azure services to implement controls that you own. The guide is available for download to existing Azure or Azure Government customers under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
> [!NOTE] >
The FedRAMP High authorization represents the highest bar for FedRAMP compliance
**Both Azure and Azure Government maintain FedRAMP High P-ATOs issued by the JAB** in addition to more than 250 Moderate and High ATOs issued by individual federal agencies for the in-scope services. For more information, see [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp).
-A comparison between the FedRAMP Moderate control baseline and NERC CIP standards requirements reveals that FedRAMP Moderate control baseline encompasses all NERC CIP requirements. Microsoft has developed a **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)** that includes control mappings between the current set of NERC CIP standards requirements and FedRAMP Moderate control baseline as documented in [NIST SP 800-53 Rev 4](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53). The Cloud implementation guide for NERC audits contains pre-filled [Reliability Standard Audit Worksheets](https://www.nerc.com/pa/comp/Pages/Reliability-Standard-Audit-Worksheets-(RSAWs).aspx) (RSAWs) narratives that help explain how Azure controls address NERC CIP requirements. It also contains guidance to help you use Azure services to implement controls that you own. You can download the Cloud implementation guide for NERC audits under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+A comparison between the FedRAMP Moderate control baseline and NERC CIP standards requirements reveals that FedRAMP Moderate control baseline encompasses all NERC CIP requirements. Microsoft has developed a **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)** that includes control mappings between the current set of NERC CIP standards requirements and FedRAMP Moderate control baseline as documented in [NIST SP 800-53 Rev 4](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53). The Cloud implementation guide for NERC audits contains pre-filled [Reliability Standard Audit Worksheets](https://www.nerc.com/pa/comp/Pages/Reliability-Standard-Audit-Worksheets-(RSAWs).aspx) (RSAWs) narratives that help explain how Azure controls address NERC CIP requirements. It also contains guidance to help you use Azure services to implement controls that you own. You can download the Cloud implementation guide for NERC audits under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
There are many valid reasons why a registered entity subject to NERC CIP compliance obligations might want to use an existing FedRAMP P-ATO or ATO when assessing the security posture of a cloud services offering:
Nuclear electric utilities may also be subject to the DoE 10 CFR Part 810 export
Registered entities subject to NERC CIP compliance obligations can use existing audits applicable to cloud services when assessing the security posture of a cloud services offering, including the Cloud Security Alliance STAR program, SOC 2 Type 2 attestation, and FedRAMP authorization. For example, FedRAMP relies on an in-depth audit with mandatory provisions for continuous monitoring. It provides strong assurances to registered entities that audited controls are operating effectively. A comparison between the FedRAMP Moderate control baseline and NERC CIP standards requirements reveals that FedRAMP Moderate control baseline encompasses all NERC CIP standards requirements. FedRAMP doesn't replace NERC CIP standards and it doesn't alter the responsibility that registered entities have for meeting their NERC CIP compliance obligations. Rather, a cloud service provider's existing FedRAMP authorization can deliver assurances that NIST-based control evidence mapped to NERC CIP standards requirements for which the cloud service provider is responsible has already been examined by an accredited FedRAMP auditor.
-If you're a registered entity contemplating a NERC audit, you should review MicrosoftΓÇÖs **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)**, which provides detailed technical how-to guidance to help you address NERC CIP compliance requirements for your Azure assets. It contains control mappings between the current set of NERC CIP standards and FedRAMP Moderate control baseline as documented in NIST SP 800-53 Rev 4. Moreover, a complete set of Reliability Standard Audit Worksheets (RSAWs) narratives with Azure control implementation details is provided to explain how Microsoft addresses NERC CIP standards requirements for controls that are part of cloud service providerΓÇÖs responsibility. Also provided is guidance to help you use Azure services to implement controls that you own. The guide is available for download to existing Azure or Azure Government customers under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+If you're a registered entity contemplating a NERC audit, you should review Microsoft's **[Cloud implementation guide for NERC audits](https://aka.ms/AzureNERCGuide)**, which provides detailed technical how-to guidance to help you address NERC CIP compliance requirements for your Azure assets. It contains control mappings between the current set of NERC CIP standards and FedRAMP Moderate control baseline as documented in NIST SP 800-53 Rev 4. Moreover, a complete set of Reliability Standard Audit Worksheets (RSAWs) narratives with Azure control implementation details is provided to explain how Microsoft addresses NERC CIP standards requirements for controls that are part of the cloud service provider's responsibility. Also provided is guidance to help you use Azure services to implement controls that you own. The guide is available for download to existing Azure or Azure Government customers under a non-disclosure agreement (NDA) from the Service Trust Portal (STP). You must sign in to access this document on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
If you're a registered entity subject to compliance with NERC CIP standards, you can also engage Microsoft for audit assistance, including furnishing Azure or Azure Government audit documentation and control implementation details in support of NERC audit requirements. Contact your Microsoft account team for assistance. You're ultimately responsible for meeting your NERC CIP compliance obligations.
If you're a registered entities subject to compliance with NERC CIP standards, y
- NERC [Critical Infrastructure Protection (CIP) standards](https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx)
- NERC [compliance guidance](https://www.nerc.com/pa/comp/guidance/)
- NERC [Glossary of Terms](https://www.nerc.com/pa/Stand/Glossary%20of%20Terms/Glossary_of_Terms.pdf)
-- NERC [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx)
+- NERC [registered entities](https://www.nerc.com/pa/comp/Pages/Registration.aspx)
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
For more details, read the [Geolocation service documentation](/rest/api/maps/ge
### Render service
-The [Render service V2](/rest/api/maps/renderv2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile). The Get Map Tile V2 API now allows customers to request Azure Maps road tiles, weather tiles, or the map tiles created using Azure Maps Creator. It's recommended that you use the new Get Map Tile V2 API.
+[Render service V2](/rest/api/maps/render-v2) introduces a new version of the [Get Map Tile V2 API](/rest/api/maps/render-v2/get-map-tile) that supports using Azure Maps tiles not only in the Azure Maps SDKs but in other map controls as well. It includes raster and vector tile formats, 256x256 or 512x512 (where applicable) tile sizes, and numerous map types such as road, weather, contour, or map tiles created using Azure Maps Creator. For a complete list, see [TilesetID](/rest/api/maps/render-v2/get-map-tile#tilesetid) in the REST API documentation. It's recommended that you use Render service V2 instead of Render service V1. You are required to display the appropriate copyright attribution on the map anytime you use the Azure Maps Render service V2, either as basemaps or layers, in any third-party map control. For more information, see [How to use the Get Map Attribution API](how-to-show-attribution.md).
:::image type="content" source="./media/about-azure-maps/intro_map.png" border="false" alt-text="Example of a map from the Render service V2":::
-For more details, read the [Render service V2 documentation](/rest/api/maps/renderv2).
-
-To learn more about the Render service V1 that is in GA (General Availability), see the [Render service V1 documentation](/rest/api/maps/render).
-
### Route service

The route services can be used to calculate the estimated arrival times (ETAs) for each requested route. Route APIs consider factors, such as real-time traffic information and historic traffic data, like the typical road speeds on the requested day of the week and time of day. The APIs return the shortest or fastest routes available to multiple destinations at a time in sequence or in optimized order, based on time or distance. The service allows developers to calculate directions across several travel modes, such as car, truck, bicycle, or walking, and electric vehicle. The service also considers inputs, such as departure time, weight restrictions, or hazardous material transport.
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
+
+ Title: Show the correct map copyright attribution information
+
+description: The map copyright attribution information must be displayed in any applications that use the Render V2 API, including web and mobile applications. In this article, you'll learn how to display the correct attribution every time you display or update a tile.
+Last updated: 3/16/2022
+# Show the correct copyright attribution
+
+When using the [Azure Maps Render service V2](/rest/api/maps/render-v2), either as a basemap or layer, you're required to display the appropriate data provider copyright attribution on the map. This information should be displayed in the lower right-hand corner of the map.
++
+The above image is an example of a map from the Render service V2, displaying the road style. It shows the copyright attribution in the lower right-hand corner of the map.
++
+The above image is an example of a map from the Render service V2, displaying the satellite style. Note that there's another data provider listed.
+
+## The Get Map Attribution API
+
+The [Get Map Attribution API](/rest/api/maps/render-v2/get-map-attribution) enables you to request map copyright attribution information so that you can display it on the map within your applications.
+
+### When to use the Get Map Attribution API
+
+The map copyright attribution information must be displayed on the map in any applications that use the Render V2 API, including web and mobile applications.
+
+The attribution is automatically displayed and updated on the map when using any of the Azure Maps SDKs. This includes the [Web SDK](how-to-use-map-control.md), [Android SDK](how-to-use-android-map-control-library.md), and the [iOS SDK](how-to-use-ios-map-control-library.md).
+
+When using map tiles from the Render service in a third-party map, you must display and update the copyright attribution information on the map.
+
+Map content changes whenever an end user selects a different style, zooms in or out, or pans the map. Each of these user actions causes an event to fire. When any of these events fire, you need to call the Get Map Attribution API. Once you have the updated copyright attribution information, you then need to display it in the lower right-hand corner of the map.
+
+Since the data providers can differ depending on the *region* and *zoom* level, the Get Map Attribution API takes these parameters as input and returns the corresponding attribution text.
+
+### How to use the Get Map Attribution API
+
+You'll need the following information to make an `attribution` request:
+
+| Parameter | Type | Description |
+| -- | | -- |
+| api-version | string | Version number of Azure Maps API. Current version is 2.1 |
+| bounds | array | A string that represents the rectangular area of a bounding box. The bounds parameter is defined by the four bounding box coordinates. The first 2 are the WGS84 longitude and latitude defining the southwest corner and the last 2 are the WGS84 longitude and latitude defining the northeast corner. The string is presented in the following format: [SouthwestCorner_Longitude, SouthwestCorner_Latitude, NortheastCorner_Longitude, NortheastCorner_Latitude]. |
+| tilesetId | TilesetID | A tileset is a collection of raster or vector data broken up into a uniform grid of square tiles at preset zoom levels. Every tileset has a tilesetId to use when making requests. The tilesetId for tilesets created using Azure Maps Creator is generated through the [Tileset Create API](/rest/api/maps/v2/tileset/create). There are ready-to-use tilesets supplied by Azure Maps, such as `microsoft.base.road`, `microsoft.base.hybrid` and `microsoft.weather.radar.main`; a complete list can be found in the [Get Map Attribution](/rest/api/maps/render-v2/get-map-attribution#tilesetid) REST API documentation. |
+| zoom | integer | Zoom level for the selected tile. The valid range depends on the tileset; see the [TilesetID](/rest/api/maps/render-v2/get-map-attribution#tilesetid) table for valid values for a specific tileset. For more information, see the [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) article. |
+| subscription-key | string | One of the Azure Maps keys provided from an Azure Map Account. For more information, see the [Authentication with Azure Maps](azure-maps-authentication.md) article. |
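As background for the `zoom` and `bounds` parameters, tile indices in a uniform grid like this follow the standard Web Mercator tile math described in the zoom levels article. A small sketch of that math (generic slippy-map arithmetic, not an Azure Maps API call):

```python
import math

def lat_lon_to_tile(lat: float, lon: float, zoom: int) -> tuple:
    """Convert a WGS84 coordinate to the x/y indices of the containing
    tile at the given zoom level (standard Web Mercator tile grid)."""
    n = 2 ** zoom  # number of tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# A Seattle-area point (inside the bounds used in the request example), at zoom 6:
print(lat_lon_to_tile(47.60, -122.33, 6))
```

Higher zoom levels double the tile count per axis, which is why the valid `zoom` range matters per tileset.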
+
+Run the following GET request to get the corresponding copyright attribution to display on the map:
+
+```http
+https://atlas.microsoft.com/map/attribution?subscription-key={Azure-Maps-Primary-Subscription-key}&api-version=2.1&tilesetId=microsoft.base&zoom=6&bounds=-122.414162,47.579490,-122.247157,47.668372
+```
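For illustration, the request above can be assembled from its pieces in a shell. This is a minimal sketch; the subscription key stays a placeholder, and the tileset, zoom, and Seattle-area bounds are taken from the example request:

```shell
# Placeholder key; substitute one of your Azure Maps account keys.
KEY="{Azure-Maps-Primary-Subscription-key}"
TILESET="microsoft.base"
ZOOM=6
# bounds: SW longitude, SW latitude, NE longitude, NE latitude (WGS84).
BOUNDS="-122.414162,47.579490,-122.247157,47.668372"
URL="https://atlas.microsoft.com/map/attribution?subscription-key=${KEY}&api-version=2.1&tilesetId=${TILESET}&zoom=${ZOOM}&bounds=${BOUNDS}"
echo "$URL"
```

Passing the resulting URL to `curl` (with a real key substituted) returns the attribution text for that tileset, zoom level, and bounding box.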
+
+## Additional information
+
+* For more information, see the [Azure Maps Render service V2](/rest/api/maps/render-v2) documentation.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The decision to migrate to AMA will be based on the different features and servi
Azure Monitor agent currently supports the following core functionality: -- **Collect guest logs and metrics** from any machine in Azure, in other clouds, or on-premises. [Azure Arc-enabled servers](/azure/azure-arc/servers/overview) are required for machines outside of Azure.
+- **Collect guest logs and metrics** from any machine in Azure, in other clouds, or on-premises. [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) are required for machines outside of Azure.
- **Centrally manage data collection configuration** using [data collection rules](/azure/azure-monitor/agents/data-collection-rule-overview), and management configuration using Azure Resource Manager (ARM) templates or policies. - **Use Windows event filtering or multi-homing** for Windows or Linux logs. - **Improved extension management.** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the current Log Analytics agents.
Azure Monitor agent currently supports the following core functionality:
> [!NOTE] > Windows and Linux machines that reside on cloud platforms other than Azure, or are on-premises machines, must be Azure Arc-enabled so that the AMA can send logs to the Log Analytics workspace. For more information, see: >
-> - [What are Azure Arc-enabled servers?](/azure/azure-arc/servers/overview)
-> - [Overview of Azure Arc-enabled servers agent](/azure/azure-arc/servers/agent-overview)
-> - [Plan and deploy Azure Arc-enabled servers at scale](/azure/azure-arc/servers/plan-at-scale-deployment)
+> - [What are Azure Arc-enabled servers?](../../azure-arc/servers/overview.md)
+> - [Overview of Azure Arc-enabled servers agent](../../azure-arc/servers/agent-overview.md)
+> - [Plan and deploy Azure Arc-enabled servers at scale](../../azure-arc/servers/plan-at-scale-deployment.md)
## Plan your migration
Your migration plan to the Azure Monitor agent should include the following consi
|Consideration |Description | |||
-|**Environment requirements** | Verify that your environment is currently supported by the AMA. For more information, see [Supported operating systems](/azure/azure-monitor/agents/agents-overview#supported-operating-systems). |
-|**Current and new feature requirements** | While the AMA provides [several new features](#current-capabilities), such as filtering, scoping, and multi-homing, it is not yet at parity with the legacy Log Analytics agent.As you plan your migration, make sure that the features your organization requires are already supported by the AMA. You may decide to continue using the Log Analytics agent for now, and migrate at a later date. See [Supported services and features](/azure/azure-monitor/agents/azure-monitor-agent-overview#supported-services-and-features) for a current status of features that are supported and that may be in preview. |
+|**Environment requirements** | Verify that your environment is currently supported by the AMA. For more information, see [Supported operating systems](./agents-overview.md#supported-operating-systems). |
+|**Current and new feature requirements** | While the AMA provides [several new features](#current-capabilities), such as filtering, scoping, and multi-homing, it is not yet at parity with the legacy Log Analytics agent. As you plan your migration, make sure that the features your organization requires are already supported by the AMA. You may decide to continue using the Log Analytics agent for now, and migrate at a later date. See [Supported services and features](./azure-monitor-agent-overview.md#supported-services-and-features) for the current status of features that are supported and that may be in preview. |
## Gap analysis between agents
For more information, see:
- [Overview of the Azure Monitor agents](agents-overview.md) - [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
The following table lists the proxy and firewall configuration information requi
For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor).
+> [!IMPORTANT]
+> If your firewall performs CNAME inspection, you need to configure it to allow all domains in the CNAME.
+ If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use runbooks or management features in your environment, it must have access to the port number and the URLs described in [Configure your network for the Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md#network-planning). ### Proxy configuration
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
There are many ways to explore Application Insights telemetry. For more informat
Review [frequently asked questions](../faq.yml). ### Microsoft Q&A questions forum
-Post questions to the Microsoft Q&A [answers forum](https://docs.microsoft.com/answers/topics/24223/azure-monitor.html).
+Post questions to the Microsoft Q&A [answers forum](/answers/topics/24223/azure-monitor.html).
### Stack Overflow
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
If you currently use the [Application Insights REST API](https://dev.application
| Azure China | REST API | `api.applicationinsights.azure.cn` | | Azure Government | REST API | `api.applicationinsights.us`|
-> [!NOTE]
-> Codeless agent/extension based monitoring for Azure App Services is **currently not supported** in these regions. As soon as this functionality becomes available this article will be updated.
- ## Next steps - To learn more about the custom modifications for Azure Government, consult the detailed guidance for [Azure monitoring and management](../../azure-government/compare-azure-government-global-azure.md#application-insights).
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Billing isn't impacted.
### Microsoft Q&A
-Post questions to the [answers forum](https://docs.microsoft.com/answers/topics/24223/azure-monitor.html).
+Post questions to the [answers forum](/answers/topics/24223/azure-monitor.html).
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Like other telemetry, **performanceCounters** also has a column `cloud_RoleInsta
## Performance counters for applications running in Azure Web Apps and Windows Containers on Azure App Service
-Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. Applications deployed to Azure App Service can utilize a [Windows container](/azure/app-service/quickstart-custom-container?tabs=dotnet&pivots=container-windows) or be hosted in a sandbox environment. If the application is deployed in a Windows Container all standard performance counters are available in the container image.
+Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. Applications deployed to Azure App Service can utilize a [Windows container](../../app-service/quickstart-custom-container.md?pivots=container-windows&tabs=dotnet) or be hosted in a sandbox environment. If the application is deployed in a Windows container, all standard performance counters are available in the container image.
The sandbox environment does not allow direct access to system performance counters. However, a limited subset of counters are exposed as environment variables as described [here](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Only a subset of counters are available in this environment, and the full list can be found [here](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/WEB/Src/PerformanceCollector/PerformanceCollector/Implementation/WebAppPerformanceCollector/CounterFactory.cs).
Like other metrics, you can [set an alert](../alerts/alerts-log.md) to warn you
## <a name="next"></a>Next steps * [Dependency tracking](./asp-net-dependencies.md)
-* [Exception tracking](./asp-net-exceptions.md)
+* [Exception tracking](./asp-net-exceptions.md)
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-appservice.md
Below you can find scenarios where Snapshot Collector is not supported:
|Scenario | Side Effects | Recommendation | ||--|-|
-|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advance option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation.](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net-core?#troubleshooting) | If you are using the advance option "Interop", use the codeless Snapshot Collector injection (enabled thru the Azure Portal UX) |
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost; therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](./azure-web-apps-net-core.md#troubleshooting). | If you are using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal UX) |
## Next steps
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
Below you can find scenarios where Snapshot Collector is not supported:
|Scenario | Side Effects | Recommendation | ||--|-|
-|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advance option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights feature "Interop", see the [documentation.](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net-core?#troubleshooting) | If you are using the advance option "Interop", use the codeless Snapshot Collector injection (enabled thru the Azure Portal UX) |
+|When using the Snapshot Collector SDK in your application directly (.csproj) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) will be lost; therefore, no Snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.`<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](./azure-web-apps-net-core.md#troubleshooting). | If you are using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal UX) |
## Make sure you're using the appropriate Snapshot Debugger Endpoint
If you still don't see an exception with that snapshot ID, then the exception re
If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
-The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
PS G:\works\kusto_onboard\test_arm_template> new-azurermresourcegroupdeployment
} ```
-For more information on Azure Resource Manager templates, see [Resource Manager template overview](/azure/azure-resource-manager/templates/overview)
+For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md)
## Common questions
Learn more about Autoscale by referring to the following:
- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md) - [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md) - [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)-- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
+- [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
description: "Learn how to migrate from using the legacy OMS solution to monitor
# Transition from the Container Monitoring Solution to using Container Insights
-With both the underlying platform and agent deprecations, on March 1, 2025 the [Container Monitoring Solution](./containers.md) will be retired. If you use the Container Monitoring Solution to ingest data to your Log Analytics workspace, make sure to transition to using [Container Insights](./container-insights-overview.md) prior to that date.
+With both the underlying platform and agent deprecations, on March 31, 2025 the [Container Monitoring Solution](./containers.md) will be retired. If you use the Container Monitoring Solution to ingest data to your Log Analytics workspace, make sure to transition to using [Container Insights](./container-insights-overview.md) prior to that date.
## Steps to complete the transition
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
New-AzRoleAssignment -RoleDefinitionName Reader `
```
-To query the Azure Monitor API, the client application should use the previously created service principal to authenticate. The following example PowerShell script shows one approach, using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) to obtain the authentication token.
+To query the Azure Monitor API, the client application should use the previously created service principal to authenticate. The following example PowerShell script shows one approach, using the [Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md) to obtain the authentication token.
```powershell $ClientID = "{client_id}"
If you receive a 429, 503, or 504 error, please retry the API in one minute.
* Review the [Overview of Monitoring](../overview.md). * View the [Supported metrics with Azure Monitor](./metrics-supported.md). * Review the [Microsoft Azure Monitor REST API Reference](/rest/api/monitor/).
-* Review the [Azure Management Library](/previous-versions/azure/reference/mt417623(v=azure.100)).
+* Review the [Azure Management Library](/previous-versions/azure/reference/mt417623(v=azure.100)).
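For comparison, the token acquisition that MSAL performs in the script above corresponds to a standard OAuth 2.0 client-credentials request against the Azure AD token endpoint. A minimal shell sketch, with placeholders throughout and the `curl` command echoed rather than executed:

```shell
TENANT_ID="{tenant_id}"          # placeholder
CLIENT_ID="{client_id}"          # placeholder
CLIENT_SECRET="{client_secret}"  # placeholder
# Client-credentials grant against the Azure AD v2.0 token endpoint,
# requesting a token for Azure Resource Manager.
TOKEN_URL="https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token"
echo curl -s -X POST "$TOKEN_URL" \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -d "scope=https://management.azure.com/.default"
```

Removing `echo` sends the request; the `access_token` field of the JSON response is then passed as a bearer token to the Azure Monitor API.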
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
You can delete [Custom Log](custom-logs-overview.md), [Search Results](search-jo
To delete a table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-data-export-delete) command: ```azurecli
-az monitor log-analytics workspace table delete –subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name MySearchTable_SRCH
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name MySearchTable_SRCH
``` ## Export data from selected tables
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
For example:
- To set Basic Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name ContainerLog --plan Basic
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Basic
``` - To set Analytics Logs: ```azurecli
- az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name ContainerLog --plan Analytics
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLog --plan Analytics
```
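If several tables need to move to the same plan, the commands above can be wrapped in a loop. This is a minimal sketch using the names from the examples; `AppTraces` is an illustrative second table, and the commands are echoed rather than executed:

```shell
# Names taken from the examples above; substitute your own.
SUB=ContosoSID; RG=ContosoRG; WS=ContosoWorkspace
# Tables to move to the Basic Logs plan (illustrative names).
for TABLE in ContainerLog AppTraces; do
  echo az monitor log-analytics workspace table update \
    --subscription "$SUB" --resource-group "$RG" --workspace-name "$WS" \
    --name "$TABLE" --plan Basic
done
```

Remove the `echo` to apply the change; use `--plan Analytics` to switch the tables back.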
To check the configuration of a table, run the [az monitor log-analytics workspa
For example: ```azurecli
-az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name Syslog --output table \
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --output table
```
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Last updated 01/27/2022
# Query Basic Logs in Azure Monitor (Preview)
-Basic Logs reduce the cost of high-volume verbose logs you don't need for analytics and alerts. Basic Logs have reduced charges for ingestion and limitations on log queries and other Azure Monitor features. This article describes how to query data from tables configured for Basic Logs in the Azure portal and using the Log Analytics REST API.
+Basic Logs tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store using a limited set of log queries. This article explains how to query data from Basic Logs tables.
+
+For more information, see [Azure log data plans](log-analytics-workspace-overview.md#log-data-plans-preview) and [Configure a table for Basic Logs](basic-logs-configure.md).
+ > [!NOTE] > Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
-## Limits
+## Limitations
Queries with Basic Logs are subject to the following limitations: ### KQL language limits Log queries against Basic Logs are optimized for simple data retrieval using a subset of KQL language, including the following operators:
You can use all functions and binary operators within these operators.
Specify the time range in the query header in Log Analytics or in the API call. You can't specify the time range in the query body using a **where** statement. ### Query context
-Queries with Basic Logs must use a workspace for the scope. You can't run queries using another resource for the scope. For more details, see [Log query scope and time range in Azure Monitor Log Analytics](scope.md).
+Queries with Basic Logs must use a workspace for the scope. You can't run queries using another resource for the scope. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](scope.md).
### Concurrent queries You can run two concurrent queries per user. ### Purge
-You cannot [purge personal data](personal-data-mgmt.md#how-to-export-and-delete-private-data) from Basic Logs tables.
+You can't [purge personal data](personal-data-mgmt.md#how-to-export-and-delete-private-data) from Basic Logs tables.
-## Run a query from the Azure portal
+## Run a query on a Basic Logs table
Creating a query using Basic Logs is the same as any other query in Log Analytics. See [Get started with Azure Monitor Log Analytics](./log-analytics-tutorial.md) if you aren't familiar with this process.
-Open Log Analytics in the Azure portal and open the **Tables** tab. When browsing the list of tables, Basic Logs tables are identified with a unique icon:
+# [Portal](#tab/portal-1)
+
+In the Azure portal, select **Monitor** > **Logs** > **Tables**.
+
+In the list of tables, you can identify Basic Logs tables by their unique icon:
![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png)
-You can also hover over a table name for the table information view. This will specify that the table is configured as Basic Logs:
+You can also hover over a table name for the table information view, which will specify that the table is configured as Basic Logs:
![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png) - When you add a table to the query, Log Analytics will identify a Basic Logs table and align the authoring experience accordingly. The following example shows when you attempt to use an operator that isn't supported by Basic Logs. ![Screenshot of Query on Basic Logs limitations.](./media/basic-logs-query/query-validator.png)
-## Run a query from REST API
+# [API](#tab/api-1)
+ Use **/search** from the [Log Analytics API](api/overview.md) to run a query with Basic Logs using a REST API. This is similar to the [/query](api/request-format.md) API with the following differences: - The query is subject to the language limitations described above. - The time span must be specified in the header of the request and not in the query statement.
-### Sample Request
+**Sample Request**
+ ```http https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D ```
https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
} ```
-## Costs
+
+## Pricing model
The charge for a query on Basic Logs is based on the amount of data the query scans, not just the amount of data the query returns. For example, a query that scans three days of data in a table that ingests 100 GB each day, would be charged for 300 GB. Calculation is based on chunks of up to one day of data. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
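The example charge works out as days scanned times daily ingestion, which can be checked with simple arithmetic:

```shell
# Values from the example above: a 3-day query over a table ingesting 100 GB/day.
DAYS_SCANNED=3
DAILY_INGEST_GB=100
echo "$(( DAYS_SCANNED * DAILY_INGEST_GB )) GB scanned"   # prints "300 GB scanned"
```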
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
To set the retention and archive duration for a table, run the [az monitor log-a
This example sets table's interactive retention to 30 days, and the total retention to two years. This means the archive duration is 23 months: ```azurecli
-az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name AzureMetrics --retention-time 30 --total-retention-time 730
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name AzureMetrics --retention-time 30 --total-retention-time 730
``` To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command with the `--retention-time` and `--total-retention-time` parameters set to `-1`.
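In the 30-day/730-day example above, the archive duration follows directly from the two parameters: total retention minus interactive retention, expressed here in approximate 30-day months:

```shell
TOTAL_RETENTION_DAYS=730   # value passed as --total-retention-time
INTERACTIVE_DAYS=30        # value passed as --retention-time
ARCHIVE_DAYS=$(( TOTAL_RETENTION_DAYS - INTERACTIVE_DAYS ))
# 700 days is roughly 23 months (using 30-day months).
echo "$(( ARCHIVE_DAYS / 30 )) months archived"
```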
To reapply the workspace's default interactive retention value to the table and
For example: ```azurecli
-az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name Syslog --retention-time -1 --total-retention-time -1
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --retention-time -1 --total-retention-time -1
```
To get the retention policy of a particular table, run the [az monitor log-analy
For example: ```azurecli
-az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name SecurityEvent
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name SecurityEvent
```
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
A **where** statement is added to the query with the value that you selected. Th
### Time range
-All tables in a Log Analytics workspace have a column called **TimeGenerated**, which is the time that the record was created. All queries have a time range that limits the results to records that have a **TimeGenerated** value within that range. You can set the time range in the query or by using the selector at the top of the screen.
-By default, the query returns records from the last 24 hours. You should see a message here that says we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that. Select the **Time range** dropdown list, and change the value to **12 hours**. Select **Run** again to return the results.
+All queries return records generated within a set time range. By default, the query returns records generated in the last 24 hours.
+
+You can set a different time range using the [where operator](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor#filter-by-boolean-expression-where-1) in the query, or using the **Time range** dropdown list at the top of the screen.
+
+Let's change the time range of the query by selecting **Last 12 hours** from the **Time range** dropdown. Select **Run** to return the results.
+
+> [!NOTE]
+> Changing the time range using the **Time range** dropdown does not change the query in the query editor.
:::image type="content" source="media/log-analytics-tutorial/query-results-max.png" alt-text="Screenshot that shows the time range." lightbox="media/log-analytics-tutorial/query-results-max.png":::
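Outside the portal, the same 12-hour window can be applied in the query body with a `where` filter on **TimeGenerated**. A minimal sketch using the Azure CLI; the workspace GUID is a placeholder, the table and filter are illustrative, and the command is echoed rather than run:

```shell
WORKSPACE="00000000-0000-0000-0000-000000000000"  # placeholder workspace GUID
# Restrict results to records generated in the last 12 hours.
QUERY='AzureActivity | where TimeGenerated > ago(12h) | take 10'
echo az monitor log-analytics query --workspace "$WORKSPACE" --analytics-query "$QUERY"
```

Remove the `echo` to run the query against a real workspace.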
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
To restore data from a table, run the [az monitor log-analytics workspace table
For example: ```azurecli
-az monitor log-analytics workspace table restore create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name Heartbeat_RST --restore-source-table Heartbeat --start-restore-time "2022-01-01T00:00:00.000Z" --end-restore-time "2022-01-08T00:00:00.000Z" --no-wait
+az monitor log-analytics workspace table restore create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Heartbeat_RST --restore-source-table Heartbeat --start-restore-time "2022-01-01T00:00:00.000Z" --end-restore-time "2022-01-08T00:00:00.000Z" --no-wait
```
To delete a restore table, run the [az monitor log-analytics workspace table del
For example: ```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name Heartbeat_RST
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Heartbeat_RST
```
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
To run a search job, run the [az monitor log-analytics workspace table search-jo
For example: ```azurecli
-az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 \
- --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait
+az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait
```
To check the status and details of a search job table, run the [az monitor log-a
For example: ```azurecli
-az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name HeartbeatByIp_SRCH --output table \
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH --output table
```
To delete a search table, run the [az monitor log-analytics workspace table dele
For example: ```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
- --name HeartbeatByIp_SRCH
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH
```
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Start by gathering information that you'll need from your workspace.
:::image type="content" source="media/tutorial-custom-logs-api/workspace-resource-id.png" lightbox="media/tutorial-custom-logs-api/workspace-resource-id.png" alt-text="Screenshot showing workspace resource ID."::: ## Configure application
-Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this will follow the [Client Credential Grant Flow scheme](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow) for this tutorial.
+Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this will follow the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) for this tutorial.
1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations** and then **New registration**.
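+As a rough sketch of the Client Credential Grant Flow described above, you can sign in as the registered application and obtain a bearer token with the Azure CLI. The IDs and secret shown are placeholders for your own app registration's values, and the token resource URI is an assumption based on the Azure Monitor endpoint; a minimal sketch, not the tutorial's own script:
+
+```azurecli
+# Sign in as the registered application (service principal).
+# <app-id>, <client-secret>, and <tenant-id> are placeholders for your registration.
+az login --service-principal --username <app-id> --password <client-secret> --tenant <tenant-id>
+
+# Request a bearer token scoped to the Azure Monitor endpoint (assumed resource URI).
+az account get-access-token --resource https://monitor.azure.com
+```
+
+The token returned by the second command would then be sent in the `Authorization: Bearer` header of API calls.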
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
In this tutorial, you'll use a PowerShell script to send sample Apache access lo
## Configure application
-Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this will follow the [Client Credential Grant Flow scheme](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow) for this tutorial.
+Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this will follow the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) for this tutorial.
1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations** and then **New registration**.
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
In this tutorial, you learn to:
To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .-- [Permissions to create Data Collection Rule objects](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
## Overview of tutorial
There is currently a known issue affecting dynamic columns. A temporary workarou
- [Read more about ingestion-time transformations](ingestion-time-transformations.md) - [See which tables support ingestion-time transformations](tables-feature-support.md)-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health | | [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
-| [Azure Data Explorer insights](/azure/azure-monitor/insights/data-explorer) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
-| [Azure HDInsight (preview)](/azure/hdinsight/log-analytics-migration#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status|
- | [Azure IoT Edge](/azure/iot-edge/how-to-explore-curated-visualizations/) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
- | [Azure Key Vault Insights (preview)](/azure/azure-monitor/insights/key-vault-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
- | [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. |
- | [Azure Monitor Log Analytics Workspace](/azure/azure-monitor/logs/log-analytics-workspace-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
- | [Azure Service Bus Insights](/azure/service-bus-messaging/service-bus-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights](/azure/azure-monitor/insights/sql-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+| [Azure Data Explorer insights](./insights/data-explorer.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
+| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status|
+ | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+ | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
+ | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. |
+ | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
+ | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
+ | [Azure SQL insights](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Network Insights](./insights/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resource. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resource that are hosting your website, by simply searching for your website name. | | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
**Updated articles** -- [Azure Monitor agent overview](/azure/azure-monitor/agents/azure-monitor-agent-overview.md)-- [Manage the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-manage.md)
+- [Azure Monitor agent overview](./agents/azure-monitor-agent-overview.md)
+- [Manage the Azure Monitor agent](./agents/azure-monitor-agent-manage.md)
### Alerts **Updated articles** -- [How to trigger complex actions with Azure Monitor alerts](/azure/azure-monitor/alerts/action-groups-logic-app.md)
+- [How to trigger complex actions with Azure Monitor alerts](./alerts/action-groups-logic-app.md)
### Application Insights
This article lists significant changes to Azure Monitor documentation.
**Updated articles** -- [Application Monitoring for Azure App Service and Java](/azure/azure-monitor/app/azure-web-apps-java.md)-- [Application Monitoring for Azure App Service and Node.js](/azure/azure-monitor/app/azure-web-apps-nodejs.md)-- [Enable Snapshot Debugger for .NET apps in Azure App Service](/azure/azure-monitor/app/snapshot-debugger-appservice.md)-- [Profile live Azure App Service apps with Application Insights](/azure/azure-monitor/app/profiler.md)
+- [Application Monitoring for Azure App Service and Java](./app/azure-web-apps-java.md)
+- [Application Monitoring for Azure App Service and Node.js](./app/azure-web-apps-nodejs.md)
+- [Enable Snapshot Debugger for .NET apps in Azure App Service](./app/snapshot-debugger-appservice.md)
+- [Profile live Azure App Service apps with Application Insights](./app/profiler.md)
- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitor/app/change-analysis-visualizations.md) ### Autoscale
This article lists significant changes to Azure Monitor documentation.
**Updated articles** -- [Azure Activity log](/azure/azure-monitor/essentials/activity-log.md)
+- [Azure Activity log](./essentials/activity-log.md)
### Logs
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
This setting is configured in the **Active Directory Connections** under **NetAp
![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png) -
- * <a name="encrypted-smb-connection"></a>**Encrypted SMB connection to domain controller**
-
- Select this checkbox to enable SMB encryption for communication between the Azure NetApp Files service and the domain controller (DC). When you enable this functionality, SMB3 protocol will be used for encrypted DC connections, because encryption is supported only by SMB3. SMB, Kerberos, and LDAP enabled volume creation will fail if the DC doesn't support the SMB3 protocol.
-
- ![Snapshot that shows the option for encrypted SMB connection to domain controller.](../media/azure-netapp-files/encrypted-smb-domain-controller.png)
- * **LDAP Signing** Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 02/23/2022 Last updated : 03/17/2022
Azure NetApp Files volume replication is supported between various [Azure region
| Geography | Regional Pair A | Regional Pair B | |: |: |: | | Australia/Southeast Asia | Australia East | Southeast Asia |
+| France/Europe | France Central | West Europe |
| Germany/UK | Germany West Central | UK South | | Germany/Europe | Germany West Central | West Europe | | Germany/France | Germany West Central | France Central |
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
na Previously updated : 03/15/2022 Last updated : 03/17/2022 # Troubleshoot volume errors for Azure NetApp Files This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
-## General errors for volume creation or management
-
-| Error conditions | Resolutions |
-|-|-|
-| Error during SMB, LDAP, or Kerberos volume creation: <br> `Failed to create the Active Directory machine account "PAKA-5755". Reason: SecD Error: no server available Details: Error: Machine account creation procedure failed [ 34] Loaded the preliminary configuration. [ 80] Created a machine account in the domain [ 81] Successfully connected to ip 10.193.169.25, port 445 using TCP [ 83] Unable to connect to LSA service on win-2bovaekb44b.harikrb.com (Error: RESULT_ERROR_SPINCLIENT_SOCKET_RECEIVE_ERROR) [ 83] No servers available for MS_LSA, vserver: 251, domain: http://contoso.com/. **[ 83] FAILURE: Unable to make a connection (LSA:CONTOSO.COM), ** result: 6940 [ 85] Could not find Windows SID 'S-1-5-21-192389270-1514950320-2551433173-512' [ 133] Deleted existing account 'CN=PAKA-5755,CN=Computers,DC=contoso,DC=com' .` | SMB3 is disabled on the domain controller. <br> Enable SMB3 on the domain controller and then try creating the volume. See [How to detect, enable and disable SMBv1, SMBv2, and SMBv3 in Windows](/windows-server/storage/file-server/troubleshoot/detect-enable-and-disable-smbv1-v2-v3) for details about enabling SMB3. |
- ## Errors for SMB and dual-protocol volumes | Error conditions | Resolutions |
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 03/15/2022 Last updated : 03/17/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
## March 2022
-* [Encrypted SMB connection to domain controller](create-active-directory-connections.md#encrypted-smb-connection)
-
- You can now enable SMB encryption for communication between the Azure NetApp Files service and the Active Directory Domain Services domain controller (DC). When you enable this functionality, SMB3 protocol will be used for encrypted DC connections.
- * Features that are now generally available (GA) The following features are now GA. You no longer need to register the features before using them.
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/learn-bicep.md
Last updated 12/03/2021
Ready to see how Bicep can help simplify and accelerate your deployments to Azure? Check out the many hands-on courses on Microsoft Learn. > [!TIP]
-> Want to learn Bicep live from subject matter experts? [Learn Live with our experts every Tuesday (Pacific time) beginning March 8, 2022.](https://aka.ms/learnlive-iac-and-bicep)
+> Want to learn Bicep live from subject matter experts? [Learn Live with our experts every Tuesday (Pacific time) beginning March 8, 2022.](/events/learntv/learnlive-iac-and-bicep/)
## Get started
After that, you might be interested in adding your Bicep code to a deployment pi
## Next steps * For a short introduction to Bicep, see [Bicep quickstart](quickstart-create-bicep-use-visual-studio-code.md).
-* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
+* For suggestions about how to improve your Bicep files, see [Best practices for Bicep](best-practices.md).
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
description: Describes the recommended workflow when migrating Azure resources a
Previously updated : 09/09/2021 Last updated : 03/16/2022 # Migrate to Bicep
When you have existing JSON Azure Resource Manager templates (ARM templates) and
:::image type="content" source="./media/migrate/five-phases.png" alt-text="Diagram of the five phases for migrating Azure resources to Bicep: convert, migrate, refactor, test, and deploy." border="false":::
-The first step in the process is to capture your Azure resources as a JSON file, if one doesn't already exist. You then decompile the JSON file to an initial Bicep file, which you improve upon by refactoring. When you have a working file, you test and deploy using a process that minimizes the risk of breaking changes to your Azure environment.
+The first step in the process is to capture an initial representation of your Azure resources. If required, you then decompile the JSON file to an initial Bicep file, which you improve upon by refactoring. When you have a working file, you test and deploy using a process that minimizes the risk of breaking changes to your Azure environment.
:::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false":::
In the _convert_ phase of migrating your resources to Bicep, the goal is to capt
The convert phase consists of two steps, which you complete in sequence:
-1. **Capture a JSON representation of your Azure resources.** If you have an existing JSON template that you're converting to Bicep, the first step is easy - you already have your source template. If you're converting Azure resources deployed through the portal or another tool, you need to *export* the resource definitions and then convert them to Bicep. You can use the Azure portal, Azure CLI, and Azure PowerShell cmdlets to export single resources, multiple resources, and entire resource groups.
+1. **Capture a representation of your Azure resources.** If you have an existing JSON template that you're converting to Bicep, the first step is easy - you already have your source template. If you're converting Azure resources that were deployed by using the portal or another tool, you need to capture the resource definitions. You can capture a JSON representation of your resources using the Azure portal, Azure CLI, or Azure PowerShell cmdlets to *export* single resources, multiple resources, and entire resource groups. You can use the **Import Resource** command within Visual Studio Code to import a Bicep representation of your Azure resource.
-1. **Convert the JSON representation to Bicep using the _decompile_ command.** [The Bicep tooling includes the `decompile` command to convert templates.](decompile.md) You can invoke the `decompile` command from either the Azure CLI, or from the Bicep CLI. The decompilation process is a best-effort process and doesn't guarantee a full mapping from JSON to Bicep. You may need to revise the generated Bicep file to meet your template best practices before using the file to deploy resources.
+1. **If required, convert the JSON representation to Bicep using the _decompile_ command.** [The Bicep tooling includes the `decompile` command to convert templates.](decompile.md) You can invoke the `decompile` command from either the Azure CLI, or from the Bicep CLI. The decompilation process is a best-effort process and doesn't guarantee a full mapping from JSON to Bicep. You may need to revise the generated Bicep file to meet your template best practices before using the file to deploy resources.
+
+> [!NOTE]
+> You can import a resource by opening the Visual Studio Code command palette. Use <kbd>Ctrl+Shift+P</kbd> on Windows and Linux and <kbd>⌘+Shift+P</kbd> on macOS.
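+The two convert steps above can be sketched with the Azure CLI. The resource group name and file names here are hypothetical examples, and `az group export` is a best-effort capture that may not include every property:
+
+```azurecli
+# Step 1: capture a JSON representation of an existing resource group's resources.
+# "demoResourceGroup" is a placeholder name.
+az group export --name demoResourceGroup > main.json
+
+# Step 2: best-effort conversion of the exported JSON template to Bicep.
+# Produces main.bicep alongside the source file; review the output before deploying.
+az bicep decompile --file main.json
+```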
## Phase 2: Migrate
The deploy phase consists of eight steps, which you complete in any order:
In the _test_ phase of migrating your resources to Bicep, the goal is to verify the integrity of your migrated templates and to perform a test deployment.
-The test phase consists of two steps:, which you complete in sequence:
+The test phase consists of two steps, which you complete in sequence:
1. **Run the ARM template deployment what-if operation.** To help you verify your converted templates before deployment, you can use the [Azure Resource Manager template deployment what-if operation](../templates/deploy-what-if.md). It compares the current state of your environment with the desired state that is defined in the template. The tool outputs the list of changes that will occur *without* applying the changes to your environment. You can use what-if with both incremental and complete mode deployments. Even if you plan to deploy your template using incremental mode, it's a good idea to run your what-if operation in complete mode.
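+A what-if run of the kind described above might look like the following. The resource group and file names are placeholders; `--mode Complete` follows the suggestion to preview complete-mode behavior (including resources that would be deleted) even if you later deploy incrementally:
+
+```azurecli
+# Preview the changes the converted template would make, without applying them.
+az deployment group what-if --resource-group demoResourceGroup --template-file main.bicep --mode Complete
+```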
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 03/14/2022 Last updated : 03/17/2022 # Azure subscription and service limits, quotas, and constraints
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
You apply tags to your Azure resources, resource groups, and subscriptions to lo
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+Resource tags support all cost-accruing services. You can use Azure Policy to ensure that cost-accruing services are provisioned with a tag by using one of the many different [tag policies](/azure/azure-resource-manager/management/tag-policies).
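+As a hedged example of tagging a cost-accruing scope from the CLI, `az tag create` applies a tag set to a resource by ID. The subscription ID and group name below are placeholders; note that this command replaces the entire existing tag set on the target:
+
+```azurecli
+# Apply (replace) the tag set on an existing resource group.
+# <subscription-id> and demoGroup are placeholder values.
+az tag create --resource-id /subscriptions/<subscription-id>/resourceGroups/demoGroup --tags Dept=Finance Environment=Production
+```
+
+To add or change individual tags without replacing the whole set, `az tag update --operation Merge` can be used instead.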
+ > [!WARNING] > Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, tag taxonomies, deployment histories, exported templates, and monitoring logs.
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
-description: Describes common errors for Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files.
+description: Describes common deployment errors for Azure resources that are deployed with Azure Resource Manager templates (ARM templates) or Bicep files.
tags: top-support-issue Previously updated : 02/23/2022 Last updated : 03/17/2022
This article describes common Azure deployment errors, and provides information about solutions. Azure resources can be deployed with Azure Resource Manager templates (ARM templates) or Bicep files. If you can't find the error code for your deployment error, see [Find error code](find-error-code.md).
-If your error code isn't listed, submit a GitHub issue. On the right side of the page, select **Feedback**. At the bottom of the page, under **Feedback** select **This page**.
+If your error code isn't listed, submit a GitHub issue. On the right side of the page, select **Feedback**. At the bottom of the page, under **Feedback** select **This page**. Provide your documentation feedback but **don't include confidential information** because GitHub issues are public.
## Error codes
If your error code isn't listed, submit a GitHub issue. On the right side of the
| PrivateIPAddressInReservedRange | The specified IP address includes an address range required by Azure. Change IP address to avoid reserved range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | PrivateIPAddressNotInSubnet | The specified IP address is outside of the subnet range. Change IP address to fall within subnet range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | | PropertyChangeNotAllowed | Some properties can't be changed on a deployed resource. When updating a resource, limit your changes to permitted properties. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
+| PublicIPCountLimitReached | You've reached the limit for the number of running public IPs. Shut down unneeded resources or contact Azure support to request an increase. For example, in Azure Databricks, see [Unexpected cluster termination](/azure/databricks/kb/clusters/termination-reasons) and [IP address limit prevents cluster creation](/azure/databricks/kb/clusters/azure-ip-limit). | [Public IP address limits](../management/azure-subscription-service-limits.md#publicip-address) |
| RegionDoesNotAllowProvisioning | Select a different region or submit a quota support request for **Region access**. | | | RequestDisallowedByPolicy | Your subscription includes a resource policy that prevents an action you're trying to do during deployment. Find the policy that blocks the action. If possible, change your deployment to meet the limitations from the policy. | [Resolve policies](error-policy-requestdisallowedbypolicy.md) | | ReservedResourceName | Provide a resource name that doesn't include a reserved name. | [Reserved resource names](error-reserved-resource-name.md) |
If your error code isn't listed, submit a GitHub issue. On the right side of the
## Next steps -- [Find error codes](find-error-code.md)-- [Enable debug logging](enable-debug-logging.md)-- [Create troubleshooting template](create-troubleshooting-template.md)
+- For information about validation or deployment errors, see [Find error codes](find-error-code.md).
+- To get more details to troubleshoot a deployment, see [Enable debug logging](enable-debug-logging.md).
+- To isolate the cause of a deployment error, see [Create a troubleshooting template](create-troubleshooting-template.md).
azure-sql Automation Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automation-manage.md
Azure Automation also has the ability to communicate with SQL servers directly,
The runbook and module galleries for [Azure Automation](../../automation/automation-runbook-gallery.md) offer a variety of runbooks from Microsoft and the community that you can import into Azure Automation. To use one, download a runbook from the gallery, or you can directly import runbooks from the gallery, or from your Automation account in the Azure portal. >[!NOTE]
-> The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see [Automation region DNS records](/azure/automation/how-to/automation-region-dns-records).
+> The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see [Automation region DNS records](../../automation/how-to/automation-region-dns-records.md).
## Next steps Now that you've learned the basics of Azure Automation and how it can be used to manage Azure SQL Database, follow these links to learn more about Azure Automation. - [Azure Automation Overview](../../automation/automation-intro.md)-- [My first runbook](../../automation/learn/powershell-runbook-managed-identity.md)
+- [My first runbook](../../automation/learn/powershell-runbook-managed-identity.md)
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/data-discovery-and-classification-overview.md
These are the required actions to modify the data classification of a database:
Learn more about role-based permissions in [Azure RBAC](../../role-based-access-control/overview.md). > [!NOTE]
-> The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more information on the `Microsoft.Synapse/workspaces/sqlPools` provider operations, see [Microsoft.Synapse](/azure/role-based-access-control/resource-provider-operations#microsoftsynapse).
+> The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more information on the `Microsoft.Synapse/workspaces/sqlPools` provider operations, see [Microsoft.Synapse](../../role-based-access-control/resource-provider-operations.md#microsoftsynapse).
## Manage classifications
You can use the following SQL drivers to retrieve classification metadata:
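Classification metadata can also be read directly with T-SQL; the following is a minimal sketch using the `sys.sensitivity_classifications` catalog view (the joins and column aliases are illustrative):

```sql
-- Read data classification metadata directly with T-SQL.
-- sys.sensitivity_classifications stores one row per classified column.
SELECT
    SCHEMA_NAME(o.schema_id) AS schema_name,
    o.name                   AS table_name,
    c.name                   AS column_name,
    sc.information_type,
    sc.label
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o
    ON sc.major_id = o.object_id
JOIN sys.columns AS c
    ON sc.major_id = c.object_id AND sc.minor_id = c.column_id;
```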
- Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data. - For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).-- To classify your Azure SQL Databases and Azure Synapse Analytics with Azure Purview labels using T-SQL commands, see [Classify your Azure SQL data using Azure Purview labels](../../sql-database/scripts/sql-database-import-purview-labels.md).
+- To classify your Azure SQL Databases and Azure Synapse Analytics with Azure Purview labels using T-SQL commands, see [Classify your Azure SQL data using Azure Purview labels](../../sql-database/scripts/sql-database-import-purview-labels.md).
azure-sql Ledger Append Only Ledger Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/ledger-append-only-ledger-tables.md
For every append-only ledger table, the system automatically generates a view, c
| | | | | ledger_transaction_id | bigint | The ID of the transaction that created or deleted a row version. | | ledger_sequence_number | bigint | The sequence number of a row-level operation within the transaction on the table. |
-| ledger_operation_type_id | tinyint | Contains `0` (**INSERT**) or `1` (**DELETE**). Inserting a row into the ledger table produces a new row in the ledger view that contains `0` in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains `1` in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains `1` (**DELETE**), and the other row contains `1` (**INSERT**) in this column. A DELETE shouldn't occur on an append-only ledger table. |
+| ledger_operation_type | tinyint | Contains `1` (**INSERT**) or `2` (**DELETE**). Inserting a row into the ledger table produces a new row in the ledger view that contains `1` in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains `2` in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains `2` (**DELETE**), and the other row contains `1` (**INSERT**) in this column. A DELETE shouldn't occur on an append-only ledger table. |
| ledger_operation_type_desc | nvarchar(128) | Contains `INSERT` or `DELETE`. For more information, see the preceding row. | ## Next steps
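The ledger view behavior described above can be exercised with a small sketch (the table name, columns, and data are illustrative; the generated view name follows the default `<table>_Ledger` pattern):

```sql
-- Create an append-only ledger table; the system generates the
-- ledger view automatically.
CREATE TABLE [AccessEvents]
(
    [EventId] INT NOT NULL,
    [Actor]   NVARCHAR(128) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
GO

INSERT INTO [AccessEvents] VALUES (1, N'alice');
GO

-- The insert surfaces in the ledger view with ledger_operation_type = 1
-- (INSERT); UPDATE and DELETE statements are blocked on the table itself.
SELECT * FROM [AccessEvents_Ledger];
```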
azure-sql Ledger Updatable Ledger Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/ledger-updatable-ledger-tables.md
The ledger view's schema mirrors the columns defined in the updatable ledger and
| | | | | ledger_transaction_id | bigint | The ID of the transaction that created or deleted a row version. | | ledger_sequence_number | bigint | The sequence number of a row-level operation within the transaction on the table. |
-| ledger_operation_type_id | tinyint | Contains `0` (**INSERT**) or `1` (**DELETE**). Inserting a row into the ledger table produces a new row in the ledger view that contains `0` in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains `1` in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains `1` (**DELETE**), and the other row contains `1` (**INSERT**) in this column. |
+| ledger_operation_type | tinyint | Contains `1` (**INSERT**) or `2` (**DELETE**). Inserting a row into the ledger table produces a new row in the ledger view that contains `1` in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains `2` in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains `2` (**DELETE**), and the other row contains `1` (**INSERT**) in this column. |
| ledger_operation_type_desc | nvarchar(128) | Contains `INSERT` or `DELETE`. For more information, see the preceding row. | ## Next steps
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Federation Root Database can be used in the SQL Data Sync Service without any li
The Dynamics 365 bring your own database feature lets administrators export data entities from the application into their own Microsoft Azure SQL database. Data Sync can be used to sync this data into other databases if data is exported using **incremental push** (full push is not supported) and **enable triggers in target database** is set to **yes**.
+### How do I set up Data Sync in a failover group to support disaster recovery?
+
+- To ensure that data sync operations in the failover region are on par with the primary region, after failover you must manually re-create the sync group in the failover region with the same settings as the primary region.
+ ## Next steps ### Update the schema of a synced database
azure-sql Transparent Data Encryption Byok Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-key-rotation.md
This article describes key rotation for a [server](logical-servers.md) using a T
## Important considerations when rotating the TDE Protector - When the TDE protector is changed/rotated, old backups of the database, including backed-up log files, are not updated to use the latest TDE protector. To restore a backup encrypted with a TDE protector from Key Vault, make sure that the key material is available to the target server. Therefore, we recommend that you keep all the old versions of the TDE protector in Azure Key Vault (AKV), so database backups can be restored. - Even when switching from customer managed key (CMK) to service-managed key, keep all previously used keys in AKV. This ensures database backups, including backed-up log files, can be restored with the TDE protectors stored in AKV. -- Apart from old backups, transaction log files might also require access to the older TDE Protector. To determine if there are any remaining logs that still require the older key, after performing key rotation, use the [sys.dm_db_log_info](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-info-transact-sql) dynamic management view (DMV). This DMV returns information on the virtual log file (VLF) of the transantion log along with its encryption key thumbprint of the VLF.
+- Apart from old backups, transaction log files might also require access to the older TDE Protector. To determine if there are any remaining logs that still require the older key, after performing key rotation, use the [sys.dm_db_log_info](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-info-transact-sql) dynamic management view (DMV). This DMV returns information on the virtual log files (VLFs) of the transaction log, along with the encryption key thumbprint of each VLF.
- Older keys need to be kept in AKV and available to the server based on the backup retention period configured as part of the backup retention policies on the database. This helps ensure any Long Term Retention (LTR) backups on the server can still be restored using the older keys.
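The post-rotation VLF check described above can be sketched as follows; run it in the context of the database in question (the column list reflects `sys.dm_db_log_info`):

```sql
-- List virtual log files (VLFs) with the thumbprint of the key that
-- encrypted each one; VLFs still carrying the old thumbprint indicate
-- the older TDE protector is still required for log restore.
SELECT
    file_id,
    vlf_sequence_number,
    vlf_active,
    vlf_encryptor_thumbprint
FROM sys.dm_db_log_info(DB_ID());
```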
The following examples use [az sql server tde-key set](/powershell/module/az.sql
- In case of a security risk, learn how to remove a potentially compromised TDE protector: [Remove a potentially compromised key](transparent-data-encryption-byok-remove-tde-protector.md). -- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault using PowerShell](transparent-data-encryption-byok-configure.md).
+- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault using PowerShell](transparent-data-encryption-byok-configure.md).
azure-sql Data Virtualization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/data-virtualization-overview.md
adls://<container>@<storage_account>.dfs.core.windows.net/<path>/<file_name>.par
-If you're new to data virtualization and want to quickly test functionality, start by querying publicly available data sets available in [Azure Open Datasets](/azure/open-datasets/dataset-catalog), like the [Bing COVID-19 dataset](/azure/open-datasets/dataset-bing-covid-19?tabs=azure-storage) allowing anonymous access.
+If you're new to data virtualization and want to quickly test functionality, start by querying publicly available data sets available in [Azure Open Datasets](../../open-datasets/dataset-catalog.md), like the [Bing COVID-19 dataset](../../open-datasets/dataset-bing-covid-19.md?tabs=azure-storage) allowing anonymous access.
Use the following endpoints to query the Bing COVID-19 data sets:
FROM OPENROWSET(
The `OPENROWSET` command also allows querying multiple files or folders by using wildcards in the BULK path.
-The following example uses the [NYC yellow taxi trip records open data set](/azure/open-datasets/dataset-taxi-yellow):
+The following example uses the [NYC yellow taxi trip records open data set](../../open-datasets/dataset-taxi-yellow.md):
```sql --Query all files with .parquet extension in folders matching name pattern:
Issues with query execution are typically caused by managed instance not being a
- To learn more about syntax options available with OPENROWSET, see [OPENROWSET T-SQL](/sql/t-sql/functions/openrowset-transact-sql). - For more information about creating external table in SQL Managed Instance, see [CREATE EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql).-- To learn more about creating external file format, see [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql)
+- To learn more about creating external file format, see [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql)
azure-sql Doc Changes Updates Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-known-issues.md
ms.devlang: Previously updated : 01/25/2022 Last updated : 03/17/2022 # Known issues with Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article lists the currently known issues with [Azure SQL Managed Instance](
||||| |[Querying external table fails with 'not supported' error message](#querying-external-table-fails-with-not-supported-error-message)|Jan 2022|Has Workaround|| |[When using SQL Server authentication, usernames with '@' are not supported](#when-using-sql-server-authentication-usernames-with--are-not-supported)|Oct 2021|||
-|[Misleading error message on Azure portal suggesting recreation of the Service Principal](#misleading-error-message-on-azure-portal-suggesting-recreation-of-the-service-principal)|Sep 2021|||
+|[Misleading error message on Azure portal suggesting recreation of the Service Principal](#misleading-error-message-on-azure-portal-suggesting-recreation-of-the-service-principal)|Sep 2021||Oct 2021|
|[Changing the connection type does not affect connections through the failover group endpoint](#changing-the-connection-type-does-not-affect-connections-through-the-failover-group-endpoint)|Jan 2021|Has Workaround|| |[Procedure sp_send_dbmail may transiently fail when @query parameter is used](#procedure-sp_send_dbmail-may-transiently-fail-when--parameter-is-used)|Jan 2021|Has Workaround|| |[Distributed transactions can be executed after removing managed instance from Server Trust Group](#distributed-transactions-can-be-executed-after-removing-managed-instance-from-server-trust-group)|Oct 2020|Has Workaround||
This article lists the currently known issues with [Azure SQL Managed Instance](
|Database mail feature with external (non-Azure) mail servers using secure connection||Resolved|Oct 2019| |Contained databases not supported in SQL Managed Instance||Resolved|Aug 2019| -
-## Resolved
-
-### Restoring manual backup without CHECKSUM might fail
-
-In certain circumstances manual backup of databases that was made on a managed instance without CHECKSUM might fail to be restored. In such cases, retry restoring the backup until you're successful.
-
-**Workaround**: Take manual backups of databases on managed instances with CHECKSUM enabled.
-
-### Agent becomes unresponsive upon modifying, disabling, or enabling existing jobs
-
-In certain circumstances, modifying, disabling, or enabling an existing job can cause the agent to become unresponsive. The issue is automatically mitigated upon detection, resulting in a restart of the agent process.
-
-### Permissions on resource group not applied to SQL Managed Instance
-
-When the SQL Managed Instance Contributor Azure role is applied to a resource group (RG), it's not applied to SQL Managed Instance and has no effect.
-
-**Workaround**: Set up a SQL Managed Instance Contributor role for users at the subscription level.
-
-### SQL Agent jobs can be interrupted by Agent process restart
-
-**(Resolved in March 2020)** SQL Agent creates a new session each time a job is started, gradually increasing memory consumption. To avoid hitting the internal memory limit, which would block execution of scheduled jobs, Agent process will be restarted once its memory consumption reaches threshold. It may result in interrupting execution of jobs running at the moment of restart.
-
-### @query parameter not supported in sp_send_db_mail
-
-The `@query` parameter in the [sp_send_db_mail](/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) procedure doesn't work.
- ## Has workaround ### Querying external table fails with not supported error message
using (var scope = new TransactionScope())
### When using SQL Server authentication, usernames with '@' are not supported
-Usernames that contain the '@' symbol in the middle (e.g. 'abc@xy') are not able to login using SQL Server authentication.
-
-### Misleading error message on Azure portal suggesting recreation of the Service Principal
-
-_Active Directory admin_ blade of Azure portal for Azure SQL Managed Instance may be showing the following error message even though Service Principal already exists:
-
-"Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal"
-
-You can neglect this error message if Service Principal for the managed instance already exists, and/or Azure Active Directory authentication on the managed instance works.
-
-To check whether Service Principal exists, navigate to the _Enterprise applications_ page on the Azure portal, choose _Managed Identities_ from the _Application type_ dropdown list, select _Apply_ and type the name of the managed instance in the search box. If the instance name shows up in the result list, Service Principal already exists and no further actions are needed.
-
-If you already followed the instructions from the error message and clicked the link from the error message, Service Principal of the managed instance has been recreated. In that case, please assign Azure AD read permissions to the newly created Service Principal in order for Azure AD authentication to work properly. This can be done via Azure PowerShell by following [instructions](../database/authentication-aad-configure.md?tabs=azure-powershell#powershell).
+Usernames that contain the '@' symbol in the middle (e.g. 'abc@xy') are not able to log in using SQL Server authentication.
### Azure AD logins and users are not supported in SSDT
The `tempdb` database is always split into 12 data files, and the file structure
Error logs that are available in SQL Managed Instance aren't persisted, and their size isn't included in the maximum storage limit. Error logs might be automatically erased if failover occurs. There might be gaps in the error log history because SQL Managed Instance was moved several times on several virtual machines.
+## Resolved
+
+### Restoring manual backup without CHECKSUM might fail
+
+In certain circumstances, a manual backup of a database made on a managed instance without CHECKSUM might fail to be restored. In such cases, retry restoring the backup until you're successful.
+
+**Workaround**: Take manual backups of databases on managed instances with CHECKSUM enabled.
+
+### Agent becomes unresponsive upon modifying, disabling, or enabling existing jobs
+
+In certain circumstances, modifying, disabling, or enabling an existing job can cause the agent to become unresponsive. The issue is automatically mitigated upon detection, resulting in a restart of the agent process.
+
+### Permissions on resource group not applied to SQL Managed Instance
+
+When the SQL Managed Instance Contributor Azure role is applied to a resource group (RG), it's not applied to SQL Managed Instance and has no effect.
+
+**Workaround**: Set up a SQL Managed Instance Contributor role for users at the subscription level.
+
+### SQL Agent jobs can be interrupted by Agent process restart
+
+**(Resolved in March 2020)** SQL Agent creates a new session each time a job is started, gradually increasing memory consumption. To avoid hitting the internal memory limit, which would block execution of scheduled jobs, the Agent process is restarted once its memory consumption reaches a threshold. This may interrupt execution of jobs running at the moment of restart.
+
+### @query parameter not supported in sp_send_db_mail
+
+The `@query` parameter in the [sp_send_db_mail](/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) procedure doesn't work.
+
+### Misleading error message on Azure portal suggesting recreation of the Service Principal
+
+The _Active Directory admin_ blade of the Azure portal for Azure SQL Managed Instance may show the following error message even though the Service Principal already exists:
+
+"Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal"
+
+You can ignore this error message if the Service Principal for the managed instance already exists, and/or Azure Active Directory authentication on the managed instance works.
+
+To check whether Service Principal exists, navigate to the _Enterprise applications_ page on the Azure portal, choose _Managed Identities_ from the _Application type_ dropdown list, select _Apply_ and type the name of the managed instance in the search box. If the instance name shows up in the result list, Service Principal already exists and no further actions are needed.
+
+If you already followed the instructions from the error message and clicked the link from the error message, Service Principal of the managed instance has been recreated. In that case, please assign Azure AD read permissions to the newly created Service Principal in order for Azure AD authentication to work properly. This can be done via Azure PowerShell by following [instructions](../database/authentication-aad-configure.md?tabs=azure-powershell#powershell).
+ ## Contribute to content To contribute to the Azure SQL documentation, see the [Docs contributor guide](/contribute/).
azure-sql Link Feature Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature-best-practices.md
ms.devlang: --++ Last updated 03/11/2022
This article outlines best practices when using the link feature for Azure SQL M
The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of workload and the network speed. If there's a prolonged network connection outage and heavy workload on the primary instance, the log file may consume all available storage space. - To minimize the risk of running out of space on your primary instance due to log file growth, make sure to **take database log backups regularly**. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using a SQL Server Agent job. You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with the name of your database, the name and path of the backup file, and the description.
-To back up your transaction log, use the following sample Transact-SQL (T-SQL) script:
+To back up your transaction log, use the following sample Transact-SQL (T-SQL) script on SQL Server:
```sql-
+-- Execute on SQL Server
USE [<DatabaseName>] --Set current database inside job step or script --Check that you are executing the script on the primary instance
NAME = N'<Description>', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1
END ``` -
-Use the following Transact-SQL (T-SQL) command to check the log spaced used by your database:
+Use the following Transact-SQL (T-SQL) command to check the log space used by your database on SQL Server:
```sql
+-- Execute on SQL Server
DBCC SQLPERF(LOGSPACE); ```
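As a per-database alternative to the command above, a sketch using the `sys.dm_db_log_space_usage` DMV:

```sql
-- Log space usage for the current database only.
SELECT
    total_log_size_in_bytes / 1048576.0 AS total_log_mb,
    used_log_space_in_bytes / 1048576.0 AS used_log_mb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```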
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
Previously updated : 02/04/2022 Last updated : 03/17/2022 # Link feature for Azure SQL Managed Instance (preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
To use the link feature, you'll need:
- Azure SQL Managed Instance provisioned on any service tier. > [!NOTE]
-> SQL Managed Instance link feature is available in regions: Australia Central, Australia Central 2, Australia Southeast, Brazil South, Brazil Southeast, France Central, France South, South India, Central India, West India, Japan West, Japan East, Jio India West, Jio India Central, Korea Central, Korea South, North Central US, North Europe, Norway West, Norway East, South Africa North, South Africa West, South Central US, Southeast Asia, Sweden Central, Switzerland North, Switzerland West, UK South, UK West, West Central US, West Europe, West US, West US 2, West US 3. We are working on enabling link feature in all regions.
+> SQL Managed Instance link feature is now available in all Azure regions.
## Overview
Data replicated through the link feature from SQL Server to Azure SQL Managed In
![Managed Instance link main scenario](./media/managed-instance-link/mi-link-main-scenario.png) - ### Use Azure services Use the link feature to leverage Azure services using SQL Server data without migrating to the cloud. Examples include reporting, analytics, backups, machine learning, and other jobs that send data to Azure.
Secure connectivity, such as VPN or Express Route is used between an on-premises
Up to 100 links can be established from the same or different SQL Server sources to a single SQL Managed Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this time. Likewise, a single SQL Server can establish multiple parallel database replication links with several managed instances in different Azure regions, in a one-to-one relationship between a database and a managed instance. The feature requires CU13 or higher to be installed on SQL Server 2019.
+## Use the link feature
+
+To help with the initial environment setup, we have prepared the following online guide on how to set up your SQL Server environment to use the link feature for Managed Instance:
+
+* [Prepare environment for the link](managed-instance-link-preparation.md)
+
+Once you have ensured the prerequisites have been met, you can create the link using the automated wizard in SSMS, or you can choose to set up the link manually using scripts. Create the link using one of the following instructions:
+
+* [Replicate database with link feature in SSMS](managed-instance-link-use-ssms-to-replicate-database.md), or alternatively
+* [Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-replicate-database.md)
+
+Once the link has been created, follow the best practices for maintaining the link described on this page:
+
+* [Best practices with link feature for Azure SQL Managed Instance](link-feature-best-practices.md)
+
+When you are ready to migrate a database to Azure with minimal downtime, you can do so using the automated wizard in SSMS, or manually with scripts. Migrate a database to Azure using one of the following instructions:
+
+* [Failover database with link feature in SSMS](managed-instance-link-use-ssms-to-failover-database.md), or alternatively
+* [Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-failover-database.md)
+ ## Limitations This section describes the product's functional limitations.
Managed Instance link has a set of general limitations, and those are listed in
- If distributed transactions are used with a database replicated from SQL Server, in a migration scenario the DTC capabilities won't be transferred on cutover to the cloud. The migrated database won't be able to participate in distributed transactions with SQL Server, as Managed Instance doesn't support distributed transactions with SQL Server at this time. For reference, Managed Instance today supports distributed transactions only between other Managed Instances; see [this article](../database/elastic-transactions-overview.md#transactions-for-sql-managed-instance). - Managed Instance link can replicate a database of any size if it fits into the chosen storage size of the target Managed Instance.
-### Additional limitations
+### Preview limitations
Some Managed Instance link features and capabilities are limited **at this time**. Details can be found in the following list. - SQL Server 2019, Enterprise Edition or Developer Edition, CU15 (or higher) on Windows or Linux host OS is supported. - Private endpoint (VPN/VNET) is supported to connect Distributed Availability Groups to Managed Instance. Public endpoint can't be used to connect to Managed Instance. - Managed Instance Link authentication between SQL Server instance and Managed Instance is certificate-based, available only through exchange of certificates. Windows authentication between instances isn't supported. - Replication of user databases from SQL Server to Managed Instance is one-way. User databases from Managed Instance can't be replicated back to SQL Server.-- Auto failover groups replication to secondary Managed Instance can't be used in parallel while operating the Managed Instance Link with SQL Server.-- Replicated databases aren't part of auto-backup process on SQL Managed Instance.
+- [Auto failover groups](auto-failover-group-sql-mi.md) replication to secondary Managed Instance can't be used in parallel while operating the Managed Instance link with SQL Server.
+- Replicated R/O databases aren't part of auto-backup process on SQL Managed Instance.
## Next steps
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
ms.devlang:
-+ Last updated 03/10/2022
To use the Managed Instance link feature, you need the following prerequisites:
- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019?filetype=EXE), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6). - An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one. -- ## Prepare your SQL Server instance To prepare your SQL Server instance, you need to validate:
You'll need to restart SQL Server for these changes to take effect.
### Install CU15 (or higher)
-The link feature for SQL Managed Instance was introduced in CU15 of SQL Server 2019.
+The link feature for SQL Managed Instance was introduced in CU15 of SQL Server 2019.
-To check your SQL Server version, run the following Transact-SQL (T-SQL) script:
+To check your SQL Server version, run the following Transact-SQL (T-SQL) script on SQL Server:
```sql
+-- Execute on SQL Server
-- Shows the version and CU of the SQL Server SELECT @@VERSION ```
If your SQL Server version is lower than CU15 (15.0.4198.2), either install the
### Create database master key in the master database
-Create database master key in the master database by running the following T-SQL script.
+Create a database master key in the master database by running the following T-SQL script on SQL Server.
```sql
+-- Execute on SQL Server
-- Create MASTER KEY USE MASTER CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>' ```
-To check if you have database master key, use the following T-SQL script.
+To check if you have a database master key, use the following T-SQL script on SQL Server.
```sql
+-- Execute on SQL Server
SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%' ```
SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%'
The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
-To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script:
+To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script on SQL Server:
```sql
+-- Execute on SQL Server
-- Is HADR enabled on this SQL Server? declare @IsHadrEnabled sql_variant = (select SERVERPROPERTY('IsHadrEnabled')) select
To enable these trace flags at startup, follow these steps:
1. Select **OK** to close the **Properties** window.

To learn more, review [enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql).

### Restart SQL Server and validate configuration
After the restart, use Transact-SQL to validate the configuration of your SQL Se
To validate your configuration, run the following Transact-SQL (T-SQL) script:

```sql
+-- Execute on SQL Server
-- Shows the version and CU of SQL Server
SELECT @@VERSION
DBCC TRACESTATUS
```
The following screenshot is an example of the expected outcome for a SQL Server that's been properly configured:

:::image type="content" source="./media/managed-instance-link-preparation/ssms-results-expected-outcome.png" alt-text="Screenshot showing expected outcome in S S M S.":::

### User database recovery mode and backup
-All databases that are to be replicated via SQL Managed Instance link must be in full recovery mode and have at least one backup.
+All databases that are to be replicated via instance link must be in full recovery mode and have at least one backup. Execute the following on SQL Server:
```sql
+-- Execute on SQL Server
-- Set full recovery mode for all databases you want to replicate.
ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
GO
## Configure network connectivity
-For the Managed Instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server resides - whether it's on-premises or on a virtual machine (VM).
+For the instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server resides - whether it's on-premises or on a virtual machine (VM).
### SQL Server on Azure VM
If your SQL Server on Azure VM is in a different VNet to your managed instance,
>[!NOTE]
> Global VNet peering is enabled by default on managed instances provisioned after November 2020. [Raise a support ticket](../database/quota-increase-request.md) to enable Global VNet peering on older instances.

### SQL Server outside of Azure

If your SQL Server is hosted outside of Azure, establish a VPN connection between your SQL Server and your SQL Managed Instance with either option:
The following table describes port actions for each environment:
|SQL Server (outside of Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of the SQL Managed Instance. If necessary, do the same on the Windows firewall as well. |
|SQL Managed Instance |[Create an NSG rule](../../virtual-network/manage-network-security-group.md#create-a-security-rule) in the Azure portal to allow inbound and outbound traffic from the IP address of the SQL Server on port 5022 to the virtual network hosting the SQL Managed Instance. |
-Use the following PowerShell script on the host SQL Server to open ports in the Windows Firewall:
+Use the following PowerShell script on the Windows host of the SQL Server to open ports in the Windows Firewall:
```powershell
New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction inbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction outbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
```

## Test bidirectional network connectivity

Bidirectional network connectivity between SQL Server and SQL Managed Instance is necessary for the Managed Instance link feature to work. After opening your ports on the SQL Server side, and configuring an NSG rule on the SQL Managed Instance side, test connectivity.

### Test connection from SQL Server to SQL Managed Instance
-To check if SQL Server can reach your SQL Managed Instance, use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
+To check if SQL Server can reach your SQL Managed Instance, use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name (FQDN) of the Azure SQL Managed Instance. You can copy this information from the managed instance overview page in the Azure portal.
```powershell
tnc <ManagedInstanceFQDN> -port 5022
```
A successful test shows `TcpTestSucceeded : True`:

:::image type="content" source="./media/managed-instance-link-preparation/powershell-output-tnc-command.png" alt-text="Screenshot showing output of T N C command in PowerShell.":::

If the response is unsuccessful, verify the following network settings:
- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of the SQL Managed Instance.
- There's an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.

#### Test connection from SQL Managed Instance to SQL Server
-To check that the SQL Managed Instance can reach your SQL Server, create a test endpoint, and then use the SQL Agent to execute a PowerShell script with the `tnc` command pinging SQL Server on port 5022.
-
+To check that the SQL Managed Instance can reach your SQL Server, create a test endpoint on SQL Server, and then use the SQL Agent on the managed instance to execute a PowerShell script that runs the `tnc` command against SQL Server on port 5022.
-
-Connect to the SQL Managed Instance and run the following Transact-SQL (T-SQL) script to create test endpoint:
+Connect to SQL Server and run the following Transact-SQL (T-SQL) script to create a test endpoint:
```sql
-Create certificate needed for the test endpoint
+-- Execute on SQL Server
+-- Create certificate needed for the test endpoint on SQL Server
USE MASTER
CREATE CERTIFICATE TEST_CERT
WITH SUBJECT = N'Certificate for SQL Server', EXPIRY_DATE = N'3/30/2051'
GO
-Create test endpoint
+-- Create test endpoint on SQL Server
USE MASTER
CREATE ENDPOINT TEST_ENDPOINT
STATE=STARTED
)
```
-Next, create a new SQL Agent job called `NetHelper`, using the public IP address or DNS name that can be resolved from the SQL Managed Instance for `SQL_SERVER_ADDRESS`.
+To verify that the SQL Server endpoint is receiving connections on port 5022, execute the following PowerShell command on the host OS of your SQL Server:
+
+```powershell
+tnc localhost -port 5022
+```
+
+A successful test shows `TcpTestSucceeded : True`. You can then proceed to create a SQL Agent job on the managed instance that tests the SQL Server test endpoint on port 5022.
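All of these `tnc` probes boil down to a plain TCP connection attempt on port 5022. For illustration only, here is the same probe sketched in Python (`tcp_port_open` is a hypothetical helper, not part of the article's tooling):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough equivalent of PowerShell's `tnc <host> -port <port>`:
    try to open a TCP connection and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# For example, the localhost endpoint check above would be:
# tcp_port_open("localhost", 5022)
```

Any failure mode (refused, filtered, or timed out) surfaces as an `OSError`, which mirrors `TcpTestSucceeded : False`.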
-To create the SQL Agent Job, run the following Transact-SQL (T-SQL) script:
+Next, create a new SQL Agent job on the managed instance called `NetHelper`, using the public IP address or DNS name that can be resolved from the SQL Managed Instance for `SQL_SERVER_ADDRESS`.
+To create the SQL Agent job, run the following Transact-SQL (T-SQL) script on the managed instance:
```sql
+-- Execute on Managed Instance
-- SQL_SERVER_ADDRESS should be public IP address, or DNS name that can be resolved from the Managed Instance host machine.
DECLARE @SQLServerIpAddress NVARCHAR(MAX) = '<SQL_SERVER_ADDRESS>'
DECLARE @tncCommand NVARCHAR(MAX) = 'tnc ' + @SQLServerIpAddress + ' -port 5022 -InformationLevel Quiet'
EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
```
-Execute the SQL Agent job by running the following T-SQL command:
+Execute the SQL Agent job by running the following T-SQL command on the managed instance:
```sql
+-- Execute on Managed Instance
EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
```
-Execute the following query to show the log of the SQL Agent job:
+Execute the following query on the managed instance to show the log of the SQL Agent job:
```sql
+-- Execute on Managed Instance
SELECT sj.name JobName, sjs.step_id, sjs.step_name, sjsl.log, sjsl.date_modified
FROM
If the connection is successful, the log will show `True`. If the connection is
:::image type="content" source="./media/managed-instance-link-preparation/ssms-output-tnchelper.png" alt-text="Screenshot showing expected output of NetHelper S Q L Agent job.":::
-Finally, drop the test endpoint and certificate with the following Transact-SQL (T-SQL) commands:
+Finally, drop the test endpoint and certificate on SQL Server with the following Transact-SQL (T-SQL) commands:
```sql
+-- Execute on SQL Server
DROP ENDPOINT TEST_ENDPOINT
GO
DROP CERTIFICATE TEST_CERT
After installation completes, open SSMS and connect to your supported SQL Server
## Next steps
-After your environment has been prepared, you're ready to start [replicating your database](managed-instance-link-use-ssms-to-replicate-database.md). To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
+After your environment has been prepared, you're ready to start [replicating your database](managed-instance-link-use-ssms-to-replicate-database.md). To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
ms.devlang:
-+ Last updated 03/15/2022
Use the following PowerShell script to call REST API that changes the replicatio
Replace `<YourSubscriptionID>` with your subscription ID and replace `<ManagedInstanceName>` with the name of your managed instance. Replace `<DAGName>` with the name of the Distributed Availability Group link for which you'd like to get the status.

```powershell
+# Execute in Azure Cloud Shell
# ====================================================================================
# POWERSHELL SCRIPT TO SWITCH REPLICATION MODE SYNC-ASYNC ON MANAGED INSTANCE
# USER CONFIGURABLE VALUES
Invoke-WebRequest -Method PATCH -Headers $headers -Uri $uriFull -ContentType "ap
## Switch replication mode on SQL Server
-Use the following T-SQL script to change the replication mode of Distributed Availability Group on SQL Server from async to sync. Replace `<DAGName>` with the name of Distributed Availability Group, and replace `<AGName>` with the name of Availability Group created on SQL Server. In addition, replace `<ManagedInstanceName>` with the name of your SQL Managed Instance.
-With this step, the migration of the database from SQL Server to SQL Managed Instance is completed.
+Use the following T-SQL script on SQL Server to change the replication mode of Distributed Availability Group on SQL Server from async to sync. Replace `<DAGName>` with the name of Distributed Availability Group, and replace `<AGName>` with the name of Availability Group created on SQL Server. In addition, replace `<ManagedInstanceName>` with the name of your SQL Managed Instance.
```sql
+-- Execute on SQL Server
-- Sets the Distributed Availability Group to synchronous commit.
-- ManagedInstanceName example 'sqlmi1'
USE master
AVAILABILITY GROUP ON
To validate the change of the link replication mode, query the following DMV. The expected results, shown below, indicate the SYNCHRONOUS_COMMIT state.

```sql
+-- Execute on SQL Server
-- Verifies the state of the distributed availability group
SELECT ag.name, ag.is_distributed, ar.replica_server_name,
To complete the migration, we need to ensure that the replication has completed.
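Here, "replication has completed" means that the hardened LSN reported on the managed instance has caught up with the last hardened LSN reported on SQL Server. A minimal sketch of that comparison, assuming the two LSN values from the queries that follow are copied out as numeric strings (function name and sample values are hypothetical):

```python
def replication_caught_up(sql_server_lsn: str, managed_instance_lsn: str) -> bool:
    """Compare LSNs numerically: the managed instance has caught up once its
    hardened LSN is at least the last hardened LSN on SQL Server."""
    return int(managed_instance_lsn) >= int(sql_server_lsn)

print(replication_caught_up("94000000013600001", "94000000013600001"))  # True
print(replication_caught_up("94000000013600001", "94000000013000001"))  # False
```

Comparing as integers avoids the pitfall of string comparison when the LSN values differ in length.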
Use the following T-SQL query on SQL Server to read the LSN number of the last recorded transaction log. Replace `<DatabaseName>` with your database name and look for the last hardened LSN number, as shown below.

```sql
+-- Execute on SQL Server
-- Obtain last hardened LSN for a database on SQL Server.
SELECT ag.name AS [Replication group],
Use the following T-SQL query on SQL Managed Instance to read the LSN number of
The query shown below works on General Purpose SQL Managed Instance. For Business Critical Managed Instance, you need to uncomment `and drs.is_primary_replica = 1` at the end of the script. On Business Critical, this filter makes sure that only primary replica details are read.

```sql
+-- Execute on Managed Instance
-- Obtain LSN for a database on SQL Managed Instance.
SELECT db.name AS [Database name],
SQL Managed Instance link database failover and migration to Azure is accomplish
Use the following API to initiate database failover to Azure. Replace `<YourSubscriptionID>` with your actual Azure subscription ID. Replace `<RG>` with the resource group where your SQL Managed Instance is deployed and replace `<ManagedInstanceName>` with the name of your SQL Managed Instance. In addition, replace `<DAGName>` with the name of the Distributed Availability Group made on SQL Server.

```PowerShell
+# Execute in Azure Cloud Shell
# ====================================================================================
# POWERSHELL SCRIPT TO FAILOVER AND MIGRATE DATABASE WITH SQL MANAGED INSTANCE LINK
# USER CONFIGURABLE VALUES
Invoke-WebRequest -Method DELETE -Headers $headers -Uri $uriFull -ContentType "a
## Cleanup Availability Group and Distributed Availability Group on SQL Server
-After breaking the link and migrating database to Azure SQL Managed Instance, consider cleaning up Availability Group and Distributed Availability Group on SQL Server if they aren't used otherwise.
+After breaking the link and migrating the database to Azure SQL Managed Instance, consider cleaning up the Availability Group and Distributed Availability Group on SQL Server if they're no longer used there.
Replace `<DAGName>` with the name of the Distributed Availability Group on SQL Server and replace `<AGName>` with the Availability Group name on the SQL Server.

```sql
+-- Execute on SQL Server
DROP AVAILABILITY GROUP <DAGName>
GO
DROP AVAILABILITY GROUP <AGName>
For more information on the link feature, see the following resources:
- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
- [Use SQL Managed Instance link with scripts to replicate database](./managed-instance-link-use-scripts-to-replicate-database.md).
- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
-- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
ms.devlang:
-+ Last updated 03/15/2022
The following section describes the steps to complete these actions.
First, create a master key on SQL Server and generate an authentication certificate.

```sql
+-- Execute on SQL Server
-- Create MASTER KEY encryption password
-- Keep the password confidential and in a secure place.
USE MASTER
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
GO
-Create the SQL Server certificate for SQL Managed Instance link
+-- Create the SQL Server certificate for the instance link
USE MASTER GO
EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
GO
```
-Then, use the following T-SQL query to verify the certificate has been created.
+Then, use the following T-SQL query on SQL Server to verify the certificate has been created.
```sql
+-- Execute on SQL Server
USE MASTER
GO
SELECT * FROM sys.certificates
```
In the query results you'll find the certificate and will see that it has been encrypted with the master key.
-Now you can get the public key of the generated certificate.
+Now you can get the public key of the generated certificate on SQL Server.
```sql
+-- Execute on SQL Server
-- Show the public key of the generated SQL Server certificate
USE MASTER
GO
Next step should be executed in PowerShell, with installed Az.Sql module, versio
Execute the following PowerShell script in Azure Cloud Shell (fill out necessary user information, copy, paste into Azure Cloud Shell and execute). Replace `<SubscriptionID>` with your Azure Subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<PublicKeyEncoded>` below with the public portion of the SQL Server certificate in binary format generated in the previous step. That will be a long string value starting with 0x, that you've obtained from SQL Server.
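Before pasting the value, it can help to sanity-check that what you copied is a complete `0x`-prefixed hex blob (no truncation, whole bytes). A throwaway sketch — the validation rules here are an assumption for illustration, not an official format check:

```python
import re

def looks_like_cert_blob(value: str) -> bool:
    """Heuristic check on a copied public key blob: starts with 0x,
    contains only hex digits, and has an even digit count (whole bytes)."""
    return bool(re.fullmatch(r"0x(?:[0-9A-Fa-f]{2})+", value))

print(looks_like_cert_blob("0x3082010A02820101"))  # True
print(looks_like_cert_blob("0x3082010A0282010"))   # False: odd digit count, likely truncated
```

A failed check usually means the clipboard copy cut off part of the string.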
-
```powershell
+# Execute in Azure Cloud Shell
# ===============================================================================
# POWERSHELL SCRIPT TO IMPORT SQL SERVER CERTIFICATE TO MANAGED INSTANCE
# USER CONFIGURABLE VALUES
$ManagedInstanceName = "<YourManagedInstanceName>"
# Insert the cert public key blob you got from the SQL Server
$PublicKeyEncoded = "<PublicKeyEncoded>"

# ===============================================================================
# INVOKING THE API CALL -- REST OF THE SCRIPT IS NOT USER CONFIGURABLE
# ===============================================================================
if ((Get-AzContext ) -eq $null)
}
Select-AzSubscription -SubscriptionName $SubscriptionID

# Build URI for the API call.
#
$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
$bodyFull = @"
echo $bodyFull

# Get auth token and build the HTTP request header.
#
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$authToken = $token.AccessToken
$headers = @{}
$headers.Add("Authorization", "Bearer "+"$authToken")

# Invoke API call
#
Invoke-WebRequest -Method POST -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
Certificate for securing the endpoint for SQL Managed Instance link is automatic
Use SSMS to connect to the SQL Managed Instance and execute the stored procedure [sp_get_endpoint_certificate](/sql/relational-databases/system-stored-procedures/sp-get-endpoint-certificate-transact-sql) to get the certificate public key.

```sql
-Execute stored procedure on SQL Managed Instance to get public key of the instance certificate.
+-- Execute on Managed Instance
EXEC sp_get_endpoint_certificate @endpoint_type = 4
```
-Copy the entire public key from Managed Instance starting with "0x" shown in the previous step and use it in the below query by replacing `<InstanceCertificate>` with the key value. No quotations need to be used.
+Copy the entire public key from Managed Instance starting with "0x" shown in the previous step and use it in the below query on SQL Server by replacing `<InstanceCertificate>` with the key value. No quotations need to be used.
> [!IMPORTANT]
> Name of the certificate must be SQL Managed Instance FQDN.

```sql
+-- Execute on SQL Server
USE MASTER
CREATE CERTIFICATE [<SQLManagedInstanceFQDN>]
FROM BINARY = <InstanceCertificate>
```
Finally, verify all created certificates by viewing the following DMV.

```sql
+-- Execute on SQL Server
SELECT * FROM sys.certificates
```

## Mirroring endpoint on SQL Server
-If you don't have existing Availability Group nor mirroring endpoint, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have existing Availability Group or mirroring endpoint, go straight to the next section "Altering existing database mirroring endpoint"
+If you don't have an existing Availability Group or mirroring endpoint on SQL Server, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have an existing Availability Group or mirroring endpoint, go straight to the next section, "Altering existing database mirroring endpoint".
To verify that you don't have an existing database mirroring endpoint created, use the following script.

```sql
+-- Execute on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT * FROM sys.database_mirroring_endpoints WHERE type_desc = 'DATABASE_MIRRORING'
```
-In case that the above query doesn't show there exists a previous database mirroring endpoint, execute the following script to create a new database mirroring endpoint on the port 5022 and secure it with a certificate.
+If the above query doesn't show an existing database mirroring endpoint, execute the following script on SQL Server to create a new database mirroring endpoint on port 5022 and secure it with the certificate.
```sql
+-- Execute on SQL Server
-- Create connection endpoint listener on SQL Server
USE MASTER
CREATE ENDPOINT database_mirroring_endpoint
GO
Validate that the mirroring endpoint was created by executing the following on SQL Server.

```sql
+-- Execute on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT name, type_desc, state_desc, role_desc,
New mirroring endpoint was created with CERTIFICATE authentication, and AES encr
> [!NOTE]
> Skip this step if you've just created a new mirroring endpoint. Use this step only if using existing Availability Groups with existing database mirroring endpoint.

In case existing Availability Groups are used for SQL Managed Instance link, or in case there's an existing database mirroring endpoint, first validate it satisfies the following mandatory conditions for SQL Managed Instance Link:
- Type must be "DATABASE_MIRRORING".
- Connection authentication must be "CERTIFICATE".
- Encryption must be enabled.
- Encryption algorithm must be "AES".
-Execute the following query to view details for an existing database mirroring endpoint.
+Execute the following query on SQL Server to view details for an existing database mirroring endpoint.
```sql
+-- Execute on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT name, type_desc, state_desc, role_desc, connection_auth_desc,
On SQL Server, one database mirroring endpoint is used for both Availability Gro
Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to use both algorithms. See details of possible options at documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
-The script below is provided as an example of how to alter your existing database mirroring endpoint. Depending on your existing specific configuration, you perhaps might need to customize it further for your scenario. Replace `<YourExistingEndpointName>` with your existing endpoint name. Replace `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on the SQL Server.
+The script below is provided as an example of how to alter your existing database mirroring endpoint on SQL Server. Depending on your specific configuration, you might need to customize it further for your scenario. Replace `<YourExistingEndpointName>` with your existing endpoint name. Replace `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on the SQL Server.
```sql
+-- Execute on SQL Server
-- Alter the existing database mirroring endpoint to use CERTIFICATE for authentication and AES for encryption
USE MASTER
ALTER ENDPOINT <YourExistingEndpointName>
GO
```
-After running the ALTER endpoint query and setting the dual authentication mode to Windows and Certificate, use again this query to show the database mirroring endpoint details.
+After running the ALTER endpoint query and setting the dual authentication mode to Windows and Certificate, use this query again on SQL Server to show the database mirroring endpoint details.
```sql
+-- Execute on SQL Server
-- View database mirroring endpoints on SQL Server
SELECT name, type_desc, state_desc, role_desc, connection_auth_desc,
If you don't have existing AG the next step is to create an AG on SQL Server. If
- Failover mode MANUAL
- Seeding mode AUTOMATIC
-Use the following script to create a new AG on SQL Server. Replace `<SQLServerName>` with the name of your SQL Server. Find out your SQL Server name with executing the following T-SQL:
+Use the following script to create a new Availability Group on SQL Server. Replace `<SQLServerName>` with the name of your SQL Server. Find out your SQL Server name by executing the following T-SQL:
```sql
+-- Execute on SQL Server
SELECT @@SERVERNAME AS SQLServerName
```

Replace `<AGName>` with the name of your availability group. For multiple databases you'll need to create multiple Availability Groups. Managed Instance link requires one database per AG. In this respect, consider naming each AG so that its name reflects the corresponding database - for example `AG_<db_name>`. Replace `<DatabaseName>` with the name of the database you wish to replicate. Replace `<SQLServerIP>` with SQL Server's IP address. Alternatively, a resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual network.

```sql
+-- Execute on SQL Server
-- Create primary AG on SQL Server
USE MASTER
CREATE AVAILABILITY GROUP [<AGName>]
GO
Use the following script to list all available Availability Groups and Distributed Availability Groups on the SQL Server. Availability Group state needs to be connected, and Distributed Availability Group state disconnected at this point. Distributed Availability Group state will move to `connected` only when it has been joined with SQL Managed Instance. This will be explained in one of the next steps.

```sql
+-- Execute on SQL Server
-- This will show that Availability Group and Distributed Availability Group have been created on SQL Server.
SELECT name, is_distributed, cluster_type_desc,
Invoking direct API call to Azure can be accomplished with various API clients.
Log in to the Azure portal and execute the below PowerShell scripts in Azure Cloud Shell. Make the following replacements with the actual values in the script: Replace `<SubscriptionID>` with your Azure Subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<AGName>` with the name of the Availability Group created on SQL Server. Replace `<DAGName>` with the name of the Distributed Availability Group created on SQL Server. Replace `<DatabaseName>` with the database replicated in the Availability Group on SQL Server. Replace `<SQLServerAddress>` with the address of the SQL Server. This can be a DNS name, a public IP, or even a private IP address, as long as the address provided can be resolved from the backend node hosting the SQL Managed Instance.

```powershell
+# Execute in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
# USER CONFIGURABLE VALUES
The result of this operation will be the time stamp of the successful execution
To verify that the connection has been made between SQL Managed Instance and SQL Server, execute the following query on SQL Server. Have in mind that the connection will not be instantaneous upon executing the API call. It can take up to a minute for the DMV to start showing a successful connection. Keep refreshing the DMV until the connection is shown as CONNECTED for the SQL Managed Instance replica.

```sql
+-- Execute on SQL Server
SELECT
    r.replica_server_name AS [Replica],
    r.endpoint_url AS [Endpoint],
For more information on the link feature, see the following:
- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
- [Use SQL Managed Instance link with scripts to migrate database](./managed-instance-link-use-scripts-to-failover-database.md).
- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
-- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
ms.devlang:
-+ Last updated 03/10/2022

# Failover database with link feature in SSMS - Azure SQL Managed Instance
And then reviewing the database on the SQL Managed Instance:
For more information about Managed Instance link feature, see the following resources:
-To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
+To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
ms.devlang:
-+ Last updated 03/10/2022

# Replicate database with link feature in SSMS - Azure SQL Managed Instance
Connect to your SQL Managed Instance and use **Object Explorer** to view your r
## Next steps
-To break the link and failover your database to the SQL Managed Instance, see [failover database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature in Azure SQL Managed Instance](link-feature.md).
+To break the link and failover your database to the SQL Managed Instance, see [failover database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Winauth Azuread Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-overview.md
For example, a customer can enable a mobile analyst, using proven tools that rel
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:

- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
-- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Run Trace Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-run-trace-managed-instance.md
To use Windows Authentication to connect to and run a trace against a managed in
- To create or modify extended events sessions, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER ANY EVENT SESSION on the managed instance.
- To create or modify traces in SQL Server Profiler, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER TRACE on the managed instance.
-If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview.md) option, including:
+If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](../database/authentication-aad-overview.md) option, including:
- 'Azure Active Directory - Password'
- 'Azure Active Directory - Universal with MFA'
Learn more about Windows Authentication for Azure AD principals with Azure SQL M
- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md) - [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md) - [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)-- [Extended Events](/sql/relational-databases/extended-events/extended-events)
+- [Extended Events](/sql/relational-databases/extended-events/extended-events)
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
To implement the incoming trust-based authentication flow, first ensure that the
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
## Configure the Group Policy Object (GPO)
-1. Identify your [Azure AD tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+1. Identify your [Azure AD tenant ID](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
1. Deploy the following Group Policy setting to client machines using the incoming trust-based flow:
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
There is no AD to Azure AD set up required for enabling software running on Azur
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
Following this, a system administrator configures authentication flows. Two auth
### Synchronize AD with Azure AD
-Customers should first implement [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect.md) to integrate on-premises directories with Azure AD.
+Customers should first implement [Azure AD Connect](../../active-directory/hybrid/whatis-azure-ad-connect.md) to integrate on-premises directories with Azure AD.
### Select which authentication flow(s) you will implement
The following prerequisites are required to implement the modern interactive aut
|Prerequisite |Description |
|||
|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
-|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
|Azure AD tenant. | |
|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
The following prerequisites are required to implement the incoming trust-based a
|Prerequisite |Description |
|||
|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
-|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](../../active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
|Azure tenant. | |
|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
Learn more about implementing Windows Authentication for Azure AD principals on
- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md) - [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md) - [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)-- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
+- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
+
+ Title: Deploy Arc for Azure VMware Solution (Preview)
+description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud.
+ Last updated : 01/31/2022
+# Deploy Arc for Azure VMware Solution (Preview)
+
+In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once you've set up the components needed for this public preview, you'll be ready to execute operations in Azure VMware Solution vCenter from the Azure portal. Operations are related to Create, Read, Update, and Delete (CRUD) virtual machines (VMs) in an Arc-enabled Azure VMware Solution private cloud. Users can also enable guest management and install Azure extensions once the private cloud is Arc-enabled.
+
+Before you begin checking off the prerequisites, verify the following actions have been done:
+
+- You deployed an Azure VMware Solution private cloud.
+- You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network.
+- There should be an isolated NSX-T segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T segment doesn't exist, one will be created.
+
+## Prerequisites
+
+The following items are needed to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution (Preview).
+
+- A jump box virtual machine (VM) with network access to the Azure VMware Solution vCenter.
+ - From the jump-box VM, verify you have access to [vCenter and NSX-T portals](/azure/azure-vmware/tutorial-configure-networking).
+- Verify that your Azure subscription has been enabled or you have connectivity to Azure end points, mentioned in the [Appendices](#appendices).
+- Resource group in the subscription where you have owner or contributor role.
+- A minimum of three free, non-overlapping IP addresses.
+- Verify that your vCenter Server version is 6.7 or higher.
+- A resource pool with a minimum free capacity of 16 GB of RAM and 4 vCPUs.
+- A datastore with minimum 100 GB of free disk space that is available through the resource pool.
+- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+
+> [!NOTE]
+> Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail.
+
+At this point, you should have already deployed an Azure VMware Solution private cloud. You need a connection from your on-premises environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
+
+For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](/azure/azure-vmware/tutorial-network-checklist)
+
+### Registration to Arc for Azure VMware Solution feature set
+
+The following **Register features** are for provider registration using Azure CLI.
+
+```azurecli
+az provider register --namespace Microsoft.ConnectedVMwarevSphere
+az provider register --namespace Microsoft.ExtendedLocation
+az provider register --namespace Microsoft.KubernetesConfiguration
+az provider register --namespace Microsoft.ResourceConnector
+az provider register --namespace Microsoft.AVS
+```
+
+Alternatively, users can sign in to their subscription, navigate to the **Resource providers** tab, and register themselves on the resource providers mentioned previously.
+
+For feature registration, users will need to sign into their **Subscription**, navigate to the **Preview features** tab, and search for 'Azure Arc for Azure VMware Solution'. Once registered, no other permissions are required for users to access Arc.
+
+Users need to ensure they've registered themselves to **Microsoft.AVS/earlyAccess**. After registering, use the following command to verify registration.
+
+```azurecli
+az feature show --name AzureArcForAVS --namespace Microsoft.AVS
+```
+
+## Onboard process to deploy Azure Arc
+
+Use the following steps to guide you through the process to onboard in Arc for Azure VMware Solution (Preview).
+
+1. Sign in to the jumpbox VM and extract the contents of the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases). The extracted file contains the scripts to install the preview software.
+1. Open the 'config_avs.json' file and populate all the variables.
+
+ **Config JSON**
+ ```json
+ {
+ "subscriptionId": "",
+ "resourceGroup": "",
+ "applianceControlPlaneIpAddress": "",
+ "privateCloud": "",
+ "isStatic": true,
+ "staticIpNetworkDetails": {
+ "networkForApplianceVM": "",
+ "networkCIDRForApplianceVM": "",
+ "k8sNodeIPPoolStart": "",
+ "k8sNodeIPPoolEnd": "",
+ "gatewayIPAddress": ""
+ }
+ }
+ ```
+
+ - Populate the `subscriptionId`, `resourceGroup`, and `privateCloud` names respectively.
+ - `isStatic` is always true.
+ - `networkForApplianceVM` is the name for the segment for Arc appliance VM. One will be created if it doesn't already exist.
+ - `networkCIDRForApplianceVM` is the IP CIDR of the segment for the Arc appliance VM. It must be unique and must not overlap with the Azure VMware Solution management IP CIDR or other networks.
+ - `gatewayIPAddress` is the gateway for the segment for the Arc appliance VM.
+ - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the k8s node pool IP range.
+ - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`.
+
+ **Json example**
+ ```json
+ {
+ "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "resourceGroup": "test-rg ",
+ "applianceControlPlaneIpAddress": "10.14.10.10",
+ "privateCloud": "test-pc",
+ "isStatic": true,
+ "staticIpNetworkDetails": {
+ "networkForApplianceVM": "arc-segment",
+ "networkCIDRForApplianceVM": "10.14.10.1/24",
+ "k8sNodeIPPoolStart": "10.14.10.20",
+ "k8sNodeIPPoolEnd": "10.14.10.30",
+ "gatewayIPAddress": "10.14.10.1"
+ }
+ }
+ ```
+
+1. Run the installation scripts. We've provided you with the option to set up this preview from a Windows or Linux-based jump box/VM.
+
+ Run the following commands to execute the installation script.
+
+ # [Windows based jump box/VM](#tab/windows)
+ The script isn't signed, so you need to bypass the PowerShell execution policy. Run the following command.
+
+ ```powershell
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy ByPass; .\run.ps1 -Operation onboard -FilePath {config-json-path}
+ ```
+ # [Linux based jump box/VM](#tab/linux)
+ Add execution permission for the script and run the following commands.
+
+ ```bash
+ chmod +x run.sh
+ sudo bash run.sh onboard {config-json-path}
+ ```
++
+4. You'll notice that more Azure resources have been created in your resource group.
+ - Resource bridge
+ - Custom location
+ - VMware vCenter
+
+> [!IMPORTANT]
+> You can't create the resources in a separate resource group. Make sure you create them in the same resource group as the Azure VMware Solution private cloud.
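
The static IP values in `config_avs.json` must be mutually consistent: the control plane IP, the k8s node pool, and the gateway must all fall inside `networkCIDRForApplianceVM`, and the control plane IP must sit outside the node pool range. A pre-flight check along these lines can catch typos before you run the installer. This is a hypothetical helper sketched with Python's standard `ipaddress` module, not part of the onboarding scripts:

```python
import ipaddress
import json

def validate_ip_plan(config: dict) -> list:
    """Return a list of problems found in the static IP plan of a config_avs.json dict."""
    details = config["staticIpNetworkDetails"]
    network = ipaddress.ip_network(details["networkCIDRForApplianceVM"], strict=False)
    control_plane = ipaddress.ip_address(config["applianceControlPlaneIpAddress"])
    pool_start = ipaddress.ip_address(details["k8sNodeIPPoolStart"])
    pool_end = ipaddress.ip_address(details["k8sNodeIPPoolEnd"])
    gateway = ipaddress.ip_address(details["gatewayIPAddress"])

    problems = []
    # Every address must belong to the appliance segment's CIDR.
    for name, ip in [("applianceControlPlaneIpAddress", control_plane),
                     ("k8sNodeIPPoolStart", pool_start),
                     ("k8sNodeIPPoolEnd", pool_end),
                     ("gatewayIPAddress", gateway)]:
        if ip not in network:
            problems.append(f"{name} {ip} is outside {network}")
    if pool_start > pool_end:
        problems.append("k8sNodeIPPoolStart is after k8sNodeIPPoolEnd")
    # The Kubernetes API server IP must not be taken from the node pool.
    if pool_start <= control_plane <= pool_end:
        problems.append("control plane IP must not be inside the k8s node pool")
    return problems

# Values from the JSON example in this article.
config = json.loads("""
{
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "resourceGroup": "test-rg",
  "applianceControlPlaneIpAddress": "10.14.10.10",
  "privateCloud": "test-pc",
  "isStatic": true,
  "staticIpNetworkDetails": {
    "networkForApplianceVM": "arc-segment",
    "networkCIDRForApplianceVM": "10.14.10.1/24",
    "k8sNodeIPPoolStart": "10.14.10.20",
    "k8sNodeIPPoolEnd": "10.14.10.30",
    "gatewayIPAddress": "10.14.10.1"
  }
}
""")
print(validate_ip_plan(config))  # → [] (an empty list means the plan is consistent)
```

An empty result for the article's example confirms the three roles (control plane, node pool, gateway) don't collide; any returned string points at the field to fix before onboarding.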
+
+## Discover and project your VMware infrastructure resources to Azure
+
+When the Arc appliance is successfully deployed on your private cloud, you can do the following actions.
+
+- View the status from within the private cloud under **Operations > Azure Arc**, located in the left navigation.
+- View the VMware infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.
+- Discover your VMware infrastructure resources and project them to Azure using the same browser experience, **Private cloud > Arc vCenter resources > Virtual Machines**.
+- Similar to VMs, customers can enable networks, templates, resource pools, and datastores in Azure.
+
+After you've enabled VMs to be managed from Azure, you can install guest management and do the following actions.
+
+- Enable customers to install and use extensions.
+ - To enable guest management, customers will be required to use admin credentials.
+ - VMware Tools must already be running on the VM.
+> [!NOTE]
+> Azure VMware Solution vCenter will be available in global search but will NOT be available in the list of vCenters for Arc for VMware.
+
+- Customers can view the list of VM extensions available in public preview.
+ - Change tracking
+ - Log analytics
+ - Update management
+ - Azure policy guest configuration
+
+ **Azure VMware Solution private cloud with Azure Arc**
+
+When the script has run successfully, you can check the status to see if Azure Arc has been configured. To verify that your private cloud is Arc-enabled, do the following actions:
+- In the left navigation, locate **Operations**.
+- Choose **Azure Arc (preview)**. Azure Arc state will show as **Configured**.
+
+ :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png" alt-text="Image showing navigation to Azure Arc state to verify it's configured."lightbox="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png":::
+
+**Arc enabled VMware resources**
+
+After the private cloud is Arc-enabled, vCenter resources should appear under **Virtual machines**.
+- From the left navigation, under **Azure Arc VMware resources (preview)**, locate **Virtual machines**.
+- Choose **Virtual machines** to view the vCenter resources.
+
+### Manage access to VMware resources through Azure Role-Based Access Control
+
+After your Azure VMware Solution vCenter resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter resources used to configure VMs.
+
+This section will demonstrate how to use custom roles to manage granular access to VMware resources through Azure.
+
+#### Arc-enabled VMware vSphere custom roles
+
+We provide three custom roles to meet your role-based access control (RBAC) needs. These roles can be applied to a whole subscription, resource group, or a single resource.
+
+- Azure Arc VMware Administrator role
+- Azure Arc VMware Private Cloud User role
+- Azure Arc VMware VM Contributor role
+
+The first role is for an Administrator. The other two roles apply to anyone who needs to deploy or manage a VM.
+
+**Azure Arc Azure VMware Solution Administrator role**
+
+This custom role gives the user permission to conduct all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. This role should be assigned to users or groups who are administrators that manage Azure Arc-enabled Azure VMware Solution deployment.
+
+**Azure Arc Azure VMware Solution Private Cloud User role**
+
+This custom role gives the user permission to use the Arc-enabled Azure VMware Solution vSphere resources that have been made accessible through Azure. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+
+We recommend assigning this role at the individual resource pool (host or cluster), virtual network, or template that you want the user to deploy VMs with.
+
+**Azure Arc Azure VMware Solution VM Contributor role**
+
+This custom role gives the user permission to perform all VMware VM operations. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+
+We recommend assigning this role at the subscription level or resource group you want the user to deploy VMs with.
+
+**Assign custom roles to users or groups**
+
+1. Navigate to the Azure portal.
+1. Locate the subscription, resource group, or the resource at the scope you want to provide for the custom role.
+1. Find the Arc-enabled Azure VMware Solution vCenter resources.
+ 1. Navigate to the resource group and select the **Show hidden types** checkbox.
+ 1. Search for "Azure VMware Solution".
+1. Select **Access control (IAM)** in the table of contents located on the left navigation.
+1. Select **Add role assignment** from the **Grant access to this resource**.
+ :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png" alt-text="Image showing navigation to access control IAM and add role assignment."lightbox="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png":::
+1. Select the custom role you want to assign, Azure Arc VMware Solution: **Administrator**, **Private Cloud User**, or **VM Contributor**.
+1. Search for **AAD user** or **group name** that you want to assign this role to.
+1. Select the **AAD user** or **group name**. Repeat this step for each user or group you want to give permission to.
+1. Repeat the above steps for each scope and role.
++
+## Create Arc-enabled Azure VMware Solution virtual machine
+
+This section shows users how to create a virtual machine (VM) on VMware vCenter using Azure Arc. Before you begin, check the following prerequisite list to ensure you're set up and ready to create an Arc-enabled Azure VMware Solution VM.
+
+### Prerequisites
+
+- An Azure subscription and resource group where you have the Arc VMware VM **Contributor** role.
+- A resource pool on which you have the Arc VMware private cloud **User** role.
+- A virtual machine template on which you have the Arc private cloud **User** role.
+- (Optional) A virtual network on which you have the Arc private cloud **User** role.
+
+### Create VM flow
+
+- Open the [Azure portal](https://ms.portal.azure.com/)
+- On the **Home** page, search for **virtual machines**. Once you've navigated to **Virtual machines**, select the **+ Create** drop down and select **Azure VMware Solution virtual machine**.
+ :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png" alt-text="Image showing the location of the plus Create drop down menu and Azure VMware Solution virtual machine selection option."lightbox="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png":::
+
+Near the top of the **Virtual machines** page, you'll find five tabs labeled: **Basics**, **Disks**, **Networking**, **Tags**, and **Review + create**. Follow the steps or options provided in each tab to create your Azure VMware Solution virtual machine.
+
+**Basics**
+1. In **Project details**, select the **Subscription** and **Resource group** where you want to deploy your VM.
+1. In **Instance details**, provide the **virtual machine name**.
+1. Select a **Custom location** that your administrator has shared with you.
+1. Select the **Resource pool/cluster/host** where the VM should be deployed.
+1. For **Template details**, pick a **Template** based on the VM you plan to create.
+ - Alternatively, you can check the **Override template defaults** box, which allows you to override the CPU and memory specifications set in the template.
+ - If you chose a Windows template, you can provide a **Username** and **Password** for the **Administrator account**.
+1. For **Extension setup**, the box is checked by default to **Enable guest management**. If you don't want guest management enabled, uncheck the box.
+1. The connectivity method defaults to **Public endpoint**. Create a **Username**, **Password**, and **Confirm password**.
+
+**Disks**
+ - You can opt to change the disks configured in the template, add more disks, or update existing disks. These disks will be created on the default datastore per the VMware vCenter storage policies.
+
+**Networking**
+ - You can change the network interfaces configured in the template, add network interface cards (NICs), or update existing NICs. You can also change the network that a NIC is attached to, provided you have permissions to the network resource.
+ - A network configuration is automatically created for you. You can choose to keep it or override it and add a new network interface instead.
+ - To override the network configuration, find and select **+ Add network interface** and add a new network interface.
+
+**Tags**
+ - In this section, you can add tags to the VM resource.
+
+**Review + create**
+ - Review the data and properties you've set up for your VM. When everything is set up how you want it, select **Create**. The VM should be created in a few minutes.
+
+## Enable guest management and extension installation
+
+Guest management must be enabled on the VMware virtual machine (VM) before you can install an extension. Use the following prerequisite steps to enable guest management.
+
+**Prerequisite**
+
+1. Navigate to [Azure portal](https://ms.portal.azure.com/).
+1. Locate the VMware VM you want to check for guest management and install extensions on, select the name of the VM.
+1. Select **Configuration** from the left navigation for a VMware VM.
+1. Verify **Enable guest management** has been checked.
+
+>[!NOTE]
+> The following conditions are necessary to enable guest management on a VM.
+
+- The machine must be running a [Supported operating system](/azure/azure-arc/servers/agent-overview).
+- The machine needs to connect through the firewall to communicate over the Internet. Make sure the [URLs](/azure/azure-arc/servers/agent-overview) listed aren't blocked.
+- The machine can't be behind a proxy; proxies aren't supported yet.
+- If you're using a Linux VM, the account must not prompt for a password when running sudo commands.
+
+   Configure passwordless sudo by following these steps:
+
+   1. Sign in to the Linux VM.
+   1. Open a terminal and run the following command: `sudo visudo`.
+   1. Add the line `username ALL=(ALL) NOPASSWD:ALL` at the end of the file.
+   1. Replace `username` with the appropriate username.
+
+If your VM template already has these changes incorporated, you won't need to do the steps for the VM created from that template.
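
If you bake the passwordless-sudo rule into templates or automation rather than editing with `visudo` interactively, a malformed username silently breaks the rule. A small sketch of rendering and sanity-checking the `username ALL=(ALL) NOPASSWD:ALL` line described above (a hypothetical helper, not part of the Arc tooling; the username-format regex is an assumption based on common Linux naming rules):

```python
import re

SUDOERS_TEMPLATE = "{username} ALL=(ALL) NOPASSWD:ALL"
# Conventional Linux usernames: start with a lowercase letter or underscore,
# then lowercase letters, digits, underscores, or hyphens.
VALID_USERNAME = re.compile(r"^[a-z_][a-z0-9_-]*\$?$")

def sudoers_line(username: str) -> str:
    """Render the passwordless-sudo rule for one account, refusing malformed names."""
    if not VALID_USERNAME.match(username):
        raise ValueError(f"not a valid Linux username: {username!r}")
    return SUDOERS_TEMPLATE.format(username=username)

print(sudoers_line("azureuser"))  # → azureuser ALL=(ALL) NOPASSWD:ALL
```

On the VM itself, such a rendered line would still be written through `visudo` (or validated with `visudo -c`) so that a syntax error can't lock out sudo entirely.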
+
+**Extension installation steps**
+
+1. Go to Azure portal.
+1. Find the Arc-enabled Azure VMware Solution VM that you want to install an extension on and select the VM name.
+1. Navigate to **Extensions** in the left navigation, select **Add**.
+1. Select the extension you want to install.
+ 1. Based on the extension, you'll need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension.
+1. When you're done, select **Review + create**.
+
+When the extension installation steps are completed, they trigger deployment and install the selected extension on the VM.
+
+## Change Arc appliance credential
+
+Use the following guide to change your Arc appliance credential once you've changed your SDDC credentials.
+
+Use the **`Set Credential`** command to update the provider credentials for appliance resource. When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store.
+
+1. Log into the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**.
+1. Run the following command for Windows-based jumpbox VM.
+
+ `./.temp/.env/Scripts/activate`
+1. Run the following command.
+
+ `az arcappliance setcredential vmware --kubeconfig kubeconfig`
+
+1. Run the onboard command again. See step 3 in [Onboard process to deploy Azure Arc](#onboard-process-to-deploy-azure-arc).
+
+> [!NOTE]
+> Customers need to ensure the kubeconfig and SSH keys remain available, as they'll be required for log collection, appliance upgrade, and credential rotation scenarios.
+
+**Parameters**
+
+Required parameters
+
+`--kubeconfig # kubeconfig of the appliance resource`
+
+**Examples**
+
+The following command invokes the set credential for the specified appliance resource.
+
+`az arcappliance setcredential <provider> --kubeconfig <kubeconfig>`
+
+## Manual appliance upgrade
+
+Use the following steps to perform a manual upgrade for Arc appliance virtual machine (VM).
+
+1. Log into vCenter.
+1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding.
+ 1. Power off the VM.
+ 1. Delete the VM.
+1. Delete the downloaded template corresponding to the VM.
+1. Get the `config_avs.json` file used for the previous onboarding and add the following configuration item:
+ 1. `"register":false`
+1. Download the latest version of the Azure VMware Solution onboarding script.
+1. Run the new onboarding script with the previous `config_avs.json` from the jump box VM, without changing other config items.
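
Step 5 above edits the config file by hand; the same change can be scripted if you refresh several environments. A minimal sketch of adding `"register": false` while preserving every other key, assuming Python 3 (the file name matches the article; the sample keys are illustrative):

```python
import json
import tempfile
from pathlib import Path

def mark_already_registered(path: Path) -> None:
    """Add "register": false to an existing config_avs.json, keeping all other keys."""
    config = json.loads(path.read_text())
    config["register"] = False
    path.write_text(json.dumps(config, indent=4))

# Round trip against a throwaway copy of the config (illustrative values).
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "config_avs.json"
    path.write_text(json.dumps({"subscriptionId": "", "resourceGroup": "test-rg"}))
    mark_already_registered(path)
    updated = json.loads(path.read_text())

print(updated["register"])  # → False
```

Parsing and re-serializing the JSON, rather than appending text, keeps the file valid for the onboarding script regardless of the existing key order.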
+
+## Off board from Azure Arc-enabled Azure VMware Solution
+
+This section demonstrates how to remove your VMware virtual machines (VMs) from Azure management services.
+
+If you've enabled guest management on your Arc-enabled Azure VMware Solution VMs and onboarded them to Azure management services by installing VM extensions on them, you'll need to uninstall the extensions to prevent continued billing. For example, if you installed an MMA extension to collect and send logs to an Azure Log Analytics workspace, you'll need to uninstall that extension. You'll also need to uninstall the Azure Connected Machine agent to avoid any problems installing the agent in future.
+
+Use the following steps to uninstall extensions from the portal.
+
+>[!NOTE]
+>**Steps 2-5** must be performed for all the VMs that have VM extensions installed.
+
+1. Log into your Azure VMware Solution private cloud.
+1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "Arc-enabled VMware resources".
+1. Search and select the virtual machine where you have **Guest management** enabled.
+1. Select **Extensions**.
+1. Select the extensions and select **Uninstall**.
+
+To avoid problems onboarding the same VM to **Guest management**, we recommend you do the following steps to cleanly disable guest management capabilities.
+
+>[!NOTE]
+>**Steps 2-3** must be performed for **all VMs** that have **Guest management** enabled.
+
+1. Sign into the virtual machine using administrator or root credentials and run the following command in the shell.
+ 1. `azcmagent disconnect --force-local-only`.
+1. Uninstall the Azure Connected Machine agent from the machine.
+1. Set the **identity** on the VM resource to **none**.
+
+## Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
+
+When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter resource in Azure, you'll need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps:
+
+1. Go to the Azure portal.
+1. Choose **Virtual machines** from Arc-enabled VMware resources in the private cloud.
+1. Select all the VMs that have an **Azure Enabled** value of **Yes**.
+1. Select **Remove from Azure**. This step starts a deployment that removes these resources from Azure. The resources will remain in your vCenter.
+ 1. Repeat steps 2, 3 and 4 for **Resource pools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**.
+1. When the deletion completes, select **Overview**.
+ 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section.
+1. Select **Remove from Azure** to remove the vCenter resource from Azure.
+1. Go to vCenter resource in Azure and delete it.
+1. Go to the Custom location resource and select **Delete**.
+1. Go to the Azure Arc Resource bridge resources and select **Delete**.
+
+At this point, all of your Arc-enabled VMware vSphere resources have been removed from Azure.
+
+## Delete Arc resources from vCenter
+
+For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer.
+
+## Preview FAQ
+
+**How do you onboard a customer?**
+
+Fill in the [Customer Enrollment form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0SUP-7nYapHr1Tk0MFNflVUNEJQNzFONVhVOUlVTVk3V1hNTjJPVDM5WS4u) and we'll be in touch.
+
+**How does support work?**
+
+The standard support process for Azure VMware Solution is used to support customers.
+
+**Does Arc for Azure VMware Solution support private endpoint?**
+
+Private endpoint support is planned for general availability, but it isn't currently supported in the preview.
+
+**Is enabling internet the only option to enable Arc for Azure VMware Solution?**
+
+Yes.
+
+**Is DHCP support available?**
+
+DHCP support isn't available to customers at this time. Only static IP addresses are supported.
+
+>[!NOTE]
+> This is for Azure VMware Solution 2.0 only. It's not available for Azure VMware Solution by CloudSimple.
+
+## Debugging tips for known issues
+
+Use the following tips as a self-help guide.
+
+**What happens if I face an error related to Azure CLI?**
+
+- On a Windows jumpbox, if you have the 32-bit Azure CLI installed, verify that your current version of the Azure CLI has been uninstalled. You can verify this from the Control Panel.
+- To confirm it's uninstalled, run `az version` and check whether the command is still found.
+- If you installed the Azure CLI using both the MSI and pip, the `az` entries installed by MSI and pip conflict on PATH. In this case, it's recommended that you uninstall the current Azure CLI version.
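One way to spot the MSI/pip conflict described above is to list every `az` entry point on PATH. A sketch, assuming a POSIX-style shell on the jumpbox; the `locate_az` helper is hypothetical, not part of the Azure CLI:

```shell
# Hypothetical helper: walk PATH and list every executable named `az`.
# Read-only; it only inspects the file system.
locate_az() {
  local IFS=':'
  local dir
  for dir in $PATH; do
    [ -x "$dir/az" ] && echo "$dir/az"
  done
  return 0
}

matches=$(locate_az | sort -u)
if [ -z "$matches" ]; then
  echo "az not found on PATH"
elif [ "$(printf '%s\n' "$matches" | wc -l)" -gt 1 ]; then
  echo "Multiple az installs found (possible MSI/pip conflict):"
  printf '%s\n' "$matches"
else
  echo "Single az install: $matches"
fi
```

More than one line of output means two installs are shadowing each other on PATH, which matches the MSI/pip conflict symptom above.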
+
+**My script stopped because it timed out. What should I do?**
+
+- Retry the `create` script. A prompt asks you to select **Y** to rerun it.
+- It could be a cluster extension issue that results in the extension being stuck in a pending state.
+- Verify that you have the correct script version.
+- Verify that the VMware pod is running correctly on the system and is in the Running state.
+
+**Basic troubleshooting steps if the script run was unsuccessful.**
+
+- Follow the directions provided in the [Prerequisites](#prerequisites) section of this article to verify that the feature and resource providers are registered.
+
+**What happens if the Arc for VMware section shows no data?**
+
+- If the Azure Arc VMware resources in the Azure UI show no data, verify that your subscription is selected in the global default subscription filter.
+
+**I see the error:** "`ApplianceClusterNotRunning` Appliance Cluster: `<resource-bridge-id>` expected states to be Succeeded found: Succeeded and expected status to be Running and found: Connected".
+
+- Run the script again.
+
+**I'm unable to install extensions on my virtual machine.**
+
+- Check that **guest management** has been successfully installed.
+- **VMtools** should be installed on the VM.
+
+**I'm facing network-related issues during onboarding.**
+
+- Look for an IP conflict. You need IP addresses with no conflicts, or addresses from a free pool.
+- Verify the internet is enabled for the network segment.
+
+**Where can I find more information related to Azure Arc resource bridge?**
+
+- For more information, see [Azure Arc resource bridge (preview) overview](/azure/azure-arc/resource-bridge/overview).
+
+## Appendices
+
+Appendix 1 shows the proxy URLs required by the Azure Arc-enabled private cloud. The wildcard URLs are prefixed with actual values when the script runs, and you can ping them from the jumpbox VM to verify connectivity.
+| **Azure Arc Service** | **URL** |
+| :-- | :-- |
+| Microsoft container registry | `https://mcr.microsoft.com` |
+| Azure Arc Identity service | `https://*.his.arc.azure.com` |
+| Azure Arc configuration service | `https://*.dp.kubernetesconfiguration.azure.com` |
+| Cluster connect | `https://*.servicebus.windows.net` |
+| Guest Notification service | `https://guestnotificationservice.azure.com` |
+| Resource bridge (appliance) data plane service | `https://*.dp.prod.appliances.azure.com` |
+| Resource bridge (appliance) container image download | `https://ecpacr.azurecr.io` |
+| Resource bridge (appliance) image download | `https://*.blob.core.windows.net https://*.dl.delivery.mp.microsoft.com https://*.do.dsp.mp.microsoft.com` |
+| Azure Resource Manager | `https://management.azure.com` |
+| Azure Active Directory | `https://login.microsoftonline.com` |
+**Additional URL resources**
+
+- [Google Container Registry](https://gcr.io/)
+- [Red Hat Quay.io](https://quay.io/)
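As a quick connectivity sanity check from the jumpbox, the table's endpoints can be expanded and probed. A hedged sketch: the `eastus` prefix substituted for the wildcards is a placeholder, not a documented value, and this dry run only prints the `curl` probes rather than sending anything:

```shell
# Dry-run sketch: expand the wildcard URLs from the table above into concrete
# hostnames and print a curl probe for each. Nothing is executed.
SAMPLE_PREFIX="eastus"   # placeholder for the wildcard portion
urls="
https://mcr.microsoft.com
https://*.his.arc.azure.com
https://*.dp.kubernetesconfiguration.azure.com
https://*.servicebus.windows.net
https://guestnotificationservice.azure.com
https://*.dp.prod.appliances.azure.com
https://ecpacr.azurecr.io
https://management.azure.com
https://login.microsoftonline.com
"
for url in $urls; do
  probe=${url/\*/$SAMPLE_PREFIX}          # replace the wildcard, if any
  echo "curl --silent --head --max-time 10 $probe"
done
```

Pipe the output to `sh` only from a jumpbox that has the proxy configured; a non-empty HTTP response for each probe indicates the URL is reachable.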
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md
Use this library to:
[Source code](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/src) | [Package](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) |
-[API reference documentation](https://docs.microsoft.com/dotnet/api/overview/azure/messaging.webpubsub-readme-pre) |
+[API reference documentation](/dotnet/api/overview/azure/messaging.webpubsub-readme-pre) |
[Product documentation](./index.yml) | [Samples][samples_ref]
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
You can use this library in your app server side to manage the WebSocket client
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
-[API reference documentation](https://docs.microsoft.com/javascript/api/overview/azure/webpubsub) |
+[API reference documentation](/javascript/api/overview/azure/webpubsub) |
[Product documentation](./index.yml) | [Samples][samples_ref]
When a WebSocket connection connects, the Web PubSub service transforms the conn
[Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub-express) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub-express) |
-[API reference documentation](https://docs.microsoft.com/javascript/api/overview/azure/web-pubsub-express-readme?view=azure-node-latest) |
+[API reference documentation](/javascript/api/overview/azure/web-pubsub-express-readme?view=azure-node-latest) |
[Product documentation](./index.yml) | [Samples][samples_ref]
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion doesn't move or store customer data out of the region it's deploye
### Can I use Azure Bastion with Azure Private DNS Zones?
-Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network isn't linked to a private DNS zone with the following in the name:
+Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following exact names:
+* blob.core.windows.net
+* vault.azure.com
* core.windows.net
* azure.com
-* vault.azure.net
-If you are using a Private endpoint integrated Azure Private DNS Zone, the [recommended DNS zone name](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion *is not* supported with these setups.
+You may use a private DNS zone ending with one of the names listed above (for example, dummy.blob.core.windows.net) as long as it isn't one of the recommended DNS zone names for an Azure service listed [here](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds.
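The exact-name check above can be scripted. A hypothetical sketch; the `check_zones` helper and the sample zone names are illustrative, and with a signed-in Azure CLI session you could feed it real zone names instead:

```shell
# Hypothetical helper: flag private DNS zone names that exactly match the
# Bastion-conflicting names listed above. Zone names are passed as arguments.
check_zones() {
  local conflicts="blob.core.windows.net vault.azure.com core.windows.net azure.com"
  local zone bad
  for zone in "$@"; do
    for bad in $conflicts; do
      if [ "$zone" = "$bad" ]; then
        echo "CONFLICT: $zone"
      fi
    done
  done
  return 0
}

# With a signed-in Azure CLI session you could feed it real zone names, e.g.:
#   check_zones $(az network private-dns zone list --query "[].name" -o tsv)
# Sample run with made-up zone names:
check_zones dummy.blob.core.windows.net core.windows.net   # prints: CONFLICT: core.windows.net
```

Note that `dummy.blob.core.windows.net` passes, matching the rule above that only exact name matches conflict.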
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Bastion so that I can securely connect to my Azure virtual machines. Previously updated : 10/12/2021 Last updated : 03/17/2022 # What is Azure Bastion?
-Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address, agent, or special client software.
+Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software.
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH.
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtua
## <a name="key"></a>Key benefits
-* **RDP and SSH directly in Azure portal:** You can get to the RDP and SSH session directly in the Azure portal using a single click seamless experience.
-* **Remote Session over TLS and firewall traversal for RDP/SSH:** Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. You get your RDP/SSH session over TLS on port 443, enabling you to traverse corporate firewalls securely.
-* **No Public IP required on the Azure VM:** Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using private IP on your VM. You don't need a public IP on your virtual machine.
-* **No hassle of managing Network Security Groups (NSGs)**: Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines. For more information about NSGs, see [Network Security Groups](../virtual-network/network-security-groups-overview.md#security-rules).
-* **Protection against port scanning:** Because you do not need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.
-* **Protect against zero-day exploits. Hardening in one place only:** Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network. The Azure platform protects against zero-day exploits by keeping the Azure Bastion hardened and always up to date for you.
+|Benefit |Description|
+|--|--|
+|RDP and SSH through the Azure portal|You can get to the RDP and SSH session directly in the Azure portal using a single-click seamless experience.|
+|Remote Session over TLS and firewall traversal for RDP/SSH|Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. Your RDP/SSH session is over TLS on port 443. This enables the traffic to traverse firewalls more securely.|
+|No Public IP address required on the Azure VM| Azure Bastion opens the RDP/SSH connection to your Azure VM by using the private IP address on your VM. You don't need a public IP address on your virtual machine.|
+|No hassle of managing Network Security Groups (NSGs)| You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines. For more information about NSGs, see [Network Security Groups](../virtual-network/network-security-groups-overview.md#security-rules).|
+|No need to manage a separate bastion host on a VM |Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity.|
+|Protection against port scanning|Your VMs are protected against port scanning by rogue and malicious users because you don't need to expose the VMs to the internet.|
+|Hardening in one place only|Azure Bastion sits at the perimeter of your virtual network, so you don't need to worry about hardening each of the VMs in your virtual network.|
+|Protection against zero-day exploits |The Azure platform protects against zero-day exploits by keeping the Azure Bastion hardened and always up to date for you.|
## <a name="sku"></a>SKUs
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
To install the Azure IoT Extension, run the following command:
```
az extension add --name azure-iot
```
-To learn more, see [Azure CLI for Azure IoT](https://docs.microsoft.com/cli/azure/iot/product?view=azure-cli-latest).
+To learn more, see [Azure CLI for Azure IoT](/cli/azure/iot/product?view=azure-cli-latest).
### Create a new product test
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/ReleaseNotes.md
The Azure Face service is updated on an ongoing basis. Use this article to stay
## February 2022

### New Quality Attribute in Detection_01 and Detection_03
-* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, weΓÇÖre introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concepts/face-detection.md) and see how to use it with [QuickStart](https://docs.microsoft.com/azure/cognitive-services/face/quickstarts/client-libraries?tabs=visual-studio&pivots=programming-language-csharp).
+* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, weΓÇÖre introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concepts/face-detection.md) and see how to use it with [QuickStart](./quickstarts/client-libraries.md?pivots=programming-language-csharp&tabs=visual-studio).
## July 2021
cognitive-services Orchestration Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/how-to/orchestration-projects.md
ms. Previously updated : 01/06/2022 Last updated : 03/08/2022 # Combine LUIS and question answering capabilities
As an example, if your chat bot receives the text "How do I get to the Human Res
Orchestration helps you connect more than one project and service together. Each connection in the orchestration is represented by a type and relevant data. The intent needs to have a name, a project type (LUIS, question answering, or conversational language understanding), and a project you want to connect to by name.
-You can use conversational language understanding to create a new orchestration project, See the [conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md#create-an-orchestration-workflow-project).
+You can use conversational language understanding to create a new orchestration project. See the [conversational language understanding documentation](../../language-service/orchestration-workflow/how-to/create-project.md).
## Set up orchestration between Cognitive Services features

To use an orchestration project to connect LUIS, question answering, and conversational language understanding, you need:

* A language resource in [Language Studio](https://language.azure.com/) or the Azure portal.
-* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/conversational-language-understanding/how-to/create-project.md#import-a-project).
+* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#export-and-import-a-project).
>[!Note] >LUIS can be used with Orchestration projects in West Europe only, and requires the authoring resource to be a Language resource. You can either import the application in the West Europe Language resource or change the authoring resource from the portal.
You need to follow the following steps to change LUIS authoring resource to a La
## Next steps
-[Conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md#create-an-orchestration-workflow-project).
+* [Conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following table lists the prebuilt neural voices supported in each language.
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaomoNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoqiuNeural` | Optimized for narrating | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoruiNeural` | Senior voice, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice,optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoshuangNeural` | Child voice, optimized for child story and chat; multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles)|
| Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxiaoNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoxuanNeural` | General, multiple role-play and styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Female | `zh-CN-XiaoyanNeural` | Optimized for customer service |
The following table lists the prebuilt neural voices supported in each language.
| French (Canada) | `fr-CA` | Female | `fr-CA-SylvieNeural` | General | | French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` | General | | French (Canada) | `fr-CA` | Male | `fr-CA-JeanNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) <sup>Public preview</sup> |
| French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General | | French (Switzerland) | `fr-CH` | Female | `fr-CH-ArianeNeural` | General | | French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
> [!IMPORTANT]
> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
> If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will be rejected.
+> Two styles for `fr-FR-DeniseNeural` are now available for public preview: `cheerful` and `sad`, in three regions: East US, West Europe, and Southeast Asia.
+ ### Prebuilt neural voices in preview The following neural voices are in public preview.
The following neural voices are in public preview.
| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` <sup>New</sup> | General | > [!IMPORTANT]
-> Voices in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
-
-The `en-US-JennyNeuralMultilingual` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for a supported languages list.
+> Voices/Styles in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
-For more information about regional availability, see [regions](regions.md#prebuilt-neural-voices).
+> The `en-US-JennyNeuralMultilingual` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for a supported languages list.
-To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
+> For more information about regional availability, see [regions](regions.md#prebuilt-neural-voices).
-> [!IMPORTANT]
-> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
+> To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
-You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
+> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
### Voice styles and roles
Use the following table to determine supported styles and roles for each neural
|en-US-GuyNeural|`newscast`||| |en-US-JennyNeural|`assistant`, `chat`,`customerservice`, `newscast`||| |en-US-SaraNeural|`angry`, `cheerful`, `sad`|||
+|fr-FR-DeniseNeural |`cheerful` <sup>Public preview</sup>, `sad` <sup>Public preview</sup>|||
|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`||| |pt-BR-FranciscaNeural|`calm`||| |zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
Use the following table to determine supported styles and roles for each neural
|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported|| |zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
+> [!IMPORTANT]
+> Voices/Styles in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
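A minimal SSML sketch of requesting one of the preview styles, assuming the `mstts:express-as` element described in the Speech Synthesis Markup Language article; the sample sentence is illustrative:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="fr-FR">
  <voice name="fr-FR-DeniseNeural">
    <mstts:express-as style="cheerful">
      Bonjour ! Quelle belle journée.
    </mstts:express-as>
  </voice>
</speak>
```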
+ ### Custom Neural Voice Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/managed-identity.md
The following headers are included with each Document Translator API request:
**Tutorial** > [!div class="nextstepaction"]
-> [Access Azure Storage from a web app using managed identities](/azure/app-service/scenario-secure-app-access-storage?toc=/azure/cognitive-services/translator/toc.json&bc=/azure/cognitive-services/translator/breadcrumb/toc.json)
+> [Access Azure Storage from a web app using managed identities](../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
Previously updated : 11/02/2021 Last updated : 03/03/2022 # Backwards compatibility with LUIS applications
-You can reuse some of the content of your existing LUIS applications in Conversational Language Understanding. When working with Conversational Language Understanding projects, you can:
+You can reuse some of the content of your existing LUIS applications in [Conversational Language Understanding](../overview.md). When working with Conversational Language Understanding projects, you can:
* Create CLU conversation projects from LUIS application JSON files.
-* Create LUIS applications that can be connected to orchestration workflow projects.
+* Create LUIS applications that can be connected to [orchestration workflow](../../orchestration-workflow/overview.md) projects.
> [!NOTE] > This guide assumes you have created a Language resource. If you're getting started with the service, see the [quickstart article](../quickstart.md).
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Use this article to quickly get the answers to common questions about conversati
See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
-## How do I connect other service applications in orchestration workflow projects?
+## How do I connect conversation language projects to other service applications?
-See the [Build schema article](./how-to/build-schema.md#build-project-schema-for-orchestration-workflow-projects) for information on connecting another project as an intent.
-
-## Which LUIS applications can I connect to in orchestration workflow projects?
-
-LUIS applications that use the Language resource as their authoring resource will be available for connection. You can only connect to LUIS applications that are owned by the same resource. This option will only be available for resources in West Europe, as it's the only common available region between LUIS and CLU. See [region limits](./service-limits.md#region-limits) for more information.
+See the [orchestration workflow documentation](../orchestration-workflow/overview.md) for more information.
## Training is taking a long time, is this expected?

For conversation projects, long training times are expected. Based on the number of examples you have, your training times may vary from 5 minutes to 1 hour or more.
-## Can I add entities to orchestration workflow projects?
-
-No. Orchestration projects are only enabled for intents that can be connected to other projects for routing.
- ## How do I use entity components? See the [entity components](./concepts/entity-components.md) article.
cognitive-services Build Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/build-schema.md
Previously updated : 11/02/2021 Last updated : 03/03/2022
They might create an intent to represent each of these actions. They might also
* Date
* Meeting durations
-
-For **orchestration workflow** projects, you can only create intents. The orchestration workflow project is intended to route to other target services that may be enabled with entity extraction to complete the conversation flow. You can add new intents that are connected to other services _or_ create intents that aren't connected to any service (a disconnected intent).
-
-By adding a disconnected intent, you allow the orchestrator to route to that intent, and return without calling into an additional service. You must provide training examples for disconnected intents. You can only connect to projects that are owned by the same Azure resource.
-
-Continuing the example from earlier, the developers for a bot might realize that for each skill of their bot (which includes: calendar actions, email actions, and a company FAQ), they need an intent that connects to each of those skills.
- ## Build project schema for conversation projects To build a project schema for conversation projects:
To build a project schema for conversation projects:
:::image type="content" source="../media/entity-details.png" alt-text="A screenshot showing the entity details page for conversation projects in Language Studio." lightbox="../media/entity-details.png":::
-## Build project schema for orchestration workflow projects
-
-To build a project schema for orchestration workflow projects:
-
-1. Select **Add** in the **Build Schema** page. You will be prompted for a name and to define a connection for the intent, if any. If you would like to connect an intent you must provide:
- 1. **Service Type**: LUIS, Custom Question Answering (QnA), or Conversational Language Understanding.
- 2. **Project Name**: The project you want the intent to connect to.
- 3. **Version for utterances** (Only for LUIS): which LUIS version should be used to train the orchestrator classification model.
-
- :::image type="content" source="../media/orchestration-intent.png" alt-text="A screenshot showing the intent creation modal for orchestration projects in Language Studio." lightbox="../media/orchestration-intent.png":::
-
-> [!IMPORTANT]
-> * Connected intents cannot be selected because you cannot add training examples to a connected intent, as it already uses the target project's data to train its intent classification.
-> * You will only be able to connect to target services that are owned by the same resource.
- ## Next Steps * [Tag utterances](tag-utterances.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
Previously updated : 11/02/2021 Last updated : 03/03/2022 # How to create projects in Conversational Language Understanding
-Conversational Language Understanding allows you to create two types of projects: **Conversation** and **Orchestration Workflow** projects.
+Conversational Language Understanding allows you to create conversation projects. To create orchestration projects, see the [orchestration workflow](../../orchestration-workflow/overview.md) documentation.
## Sign in to Language Studio To get started, you have to first sign in to [Language Studio](https://aka.ms/languageStudio) and create a Language resource. Select **Done** once selection is complete.
After selecting conversation, you need to provide the following details:
Once you're done, click next, review the details, and then click create project to complete the process.
-## Create an orchestration workflow project
-
-After selecting orchestration, you need to provide the following details:
-- Name: Project name
-- Description: Optional project description
-- Text primary language: The primary language of your project. Your training data should mainly be in this language.
-- Enable multiple languages: Whether you would like to enable your project to support multiple languages at once.
-
-Once you're done, you now have the option to connect to the other projects and services you wish to orchestrate to. Each connection is represented by its type and relevant data. The intent needs to have a **name**, a **project type** (LUIS, custom question answering (QnA), or Conversational Language Understanding), and then selecting the project you want to connect to by name.
-
-> [!NOTE]
-> The list of projects you can connect to are only projects that are owned by the same Language resource you are using to create the orchestration project.
-
-This step is optional and you will still have the option to add intent connections after you create the project.
-- ## Import a project You can export a Conversational Language Understanding project as a JSON file at any time by going to the conversation projects page, selecting a project, and pressing **Export**.
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
ms.devlang: csharp, python
-# Deploy and test model
+# Deploy and test a conversational language understanding model
After you have [trained a model](./train-model.md) on your dataset, you're ready to deploy it. After deploying your model, you'll be able to query it for predictions.
To delete a deployment, select the deployment you want to delete and click on **
> [!NOTE] > You can only have ten deployment names.
-### Orchestration workflow projects deployments
-
-1. Click on **Add deployment** to submit a new deployment job.
-
- Like conversation projects, In the window that appears, you can create a new deployment name by giving the deployment a name or override an existing deployment name. Then, you can add a trained model to this deployment name and press next.
-
- :::image type="content" source="../media/create-deployment-job-orch.png" alt-text="A screenshot showing deployment job creation in Language Studio." lightbox="../media/create-deployment-job-orch.png":::
-
-2. If you're connecting one or more LUIS applications or conversational language understanding projects, specify the deployment name.
-
- No configurations are required for custom question answering or unlinked intents.
-
- LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
-
- :::image type="content" source="../media/deploy-connected-services.png" alt-text="A screenshot showing the deployment screen for orchestration workflow projects." lightbox="../media/deploy-connected-services.png":::
- ## Send a Conversational Language Understanding request Once your model is deployed, you can begin using the deployed model for predictions. Outside of the test model page, you can begin calling your deployed model via API requests to your provided custom endpoint. This endpoint request obtains the intent and entity predictions defined within the model.
In a conversations project, you'll get predictions for both your intents and entities. Each predicted entity includes:
- Its start location, denoted by an offset value
- The length of the entity text, denoted by a length value
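As an illustration, the offset and length values can be used to slice the entity text out of the original query. The payload shape below is a simplified assumption for this sketch, not the exact API schema:

```python
# Illustrative example: slicing entity text from a query using the
# offset/length values returned in a prediction. The payload shape
# here is a made-up simplification, not the exact API response schema.
query = "Book a flight to Cairo tomorrow"

entities = [
    {"category": "destination", "offset": 17, "length": 5},
    {"category": "time", "offset": 23, "length": 8},
]

for entity in entities:
    start = entity["offset"]
    end = start + entity["length"]
    # The entity text is the substring [offset, offset + length).
    print(f'{entity["category"]}: {query[start:end]}')
```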
-## API response for an orchestration Workflow Project
-
-Orchestration workflow projects return with the response of the top scoring intent, and the response of the service it is connected to.
-- Within the intent, the *targetKind* parameter lets you determine the type of response that was returned by the orchestrator's top intent (conversation, LUIS, or QnA Maker).
-- You will get the response of the connected service in the *result* parameter.
-
-Within the request, you can specify additional parameters for each connected service, in the event that the orchestrator routes to that service.
-- Within the project parameters, you can optionally specify a different query to the connected service. If you don't specify a different query, the original query will be used.
-- The *direct target* parameter allows you to bypass the orchestrator's routing decision and directly target a specific connected intent to force a response for it.
-
## Next steps * [Conversational Language Understanding overview](../overview.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
For example, to improve a calendar bot's performance with users, a developer might add:
* "_Reply as **tentative** to the **weekly update** meeting._" (English) * "_Cancelar mi **próxima** reunión_." (Spanish)
-In Orchestration Workflow projects, the data used to train connected intents isn't provided within the project. Instead, the project pulls the data from the connected service (such as connected LUIS applications, Conversational Language Understanding projects, or Custom Question Answering knowledge bases) during training. However, if you create intents that are not connected to any service, you still need to add utterances to those intents.
-
-For example, a developer might create an intent for each of their skills, and connect it to a respective calendar project, email project, and company FAQ knowledge base.
- ## Tag utterances :::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/train-model.md
After you have completed [tagging your utterances](./tag-utterances.md), you can
You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
-The training times can be anywhere from a few seconds when dealing with orchestration workflow projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances. Before training, you will have the option to enable evaluation, which lets you view how your model performs.
+The training times can be anywhere from a few seconds up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances. Before training, you will have the option to enable evaluation, which lets you view how your model performs.
## Train model
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/language-support.md
The supported languages for conversation projects are:
| Czech | `cs` | | Welsh | `cy` | | Danish | `da` |
-| German | `de`
+| German | `de` |
| Greek | `el` | | English (US) | `en-us` | | English (UK) | `en-gb` |
The supported languages for conversation projects are:
When you enable multiple languages in a project, you can add data in multiple languages to your project. You can also train the project in one language and immediately predict it in other languages. The quality of predictions may vary between languages – and certain language groups work better than others with respect to multilingual predictions. -
-## Supported languages for orchestration workflow projects
-
-|Language| Language code |
-|||
-| Brazilian Portuguese | `pt-br` |
-| English | `en-us` |
-| French | `fr-fr` |
-| German | `de-de` |
-| Italian | `it-it` |
-| Spanish | `es-es` |
-
-Orchestration workflow projects are not available for use in multiple languages.
- ## Next steps [Conversational language understanding overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Conversational language understanding applies custom machine-learning intelligen
* Robust and semantically aware classification and extraction models. * Simplified model building experience, using Language Studio. * Natively multilingual models that let you to train in one language, and test in others.
-* Orchestration project types that allow you to connect services including other Conversational Language Understanding projects, custom question answering knowledge bases, and LUIS applications.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
Learn about the data, region, and throughput limits for the Conversational Langu
## Region limits - Conversational Language Understanding is only available in 2 regions: **West US 2** and **West Europe**.
- - Orchestration workflow projects will enable **Conversation projects**, **QnA Maker** and **LUIS** connections in West Europe.
- - Orchestration workflow projects will enable **Conversation projects** and **QnA Maker** connections only in West US 2. There is no authoring West US 2 region for LUIS.
-- The only available SKU to access CLU is the **Language** resource with the **S** sku.
+- The only available SKU to access Conversational Language Understanding is the **Language** resource with the **S** sku.
## Data limits
The following limits are observed for the Conversational Language Understanding
|Item|Limit| | | |
-|Utterances|15,000 per project*|
+|Utterances|15,000 per project|
|Intents|500 per project| |Entities|100 per project| |Utterance length|500 characters|
The following limits are observed for the Conversational Language Understanding
|Projects|500 per resource| |Synonyms|20,000 per list component|
-\**Only includes utterances added by the user. Data pulled in for orchestration workflow projects do not count towards the total.*
-- ## Throughput limits |Item | Limit |
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/concepts/evaluation-metrics.md
+
+ Title: Orchestration workflow model evaluation metrics
+
+description: Learn about evaluation metrics in orchestration workflow
++++++ Last updated : 03/17/2022++++
+# Evaluation metrics for orchestration workflow models
+
+Model evaluation in orchestration workflow uses the following metrics:
+
+|Metric |Description |Calculation |
+||||
+|Precision | The ratio of successful predictions to all attempted predictions. This shows how many of the model's intent predictions are truly correct. | `Precision = #True_Positive / (#True_Positive + #False_Positive)` |
+|Recall | The ratio of successful predictions to the actual number of instances of each intent. | `Recall = #True_Positive / (#True_Positive + #False_Negatives)` |
+|F1 score | The combination of precision and recall. | `F1 Score = 2 * Precision * Recall / (Precision + Recall)` |
+
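As an illustration, the three formulas in the table can be computed directly; the counts below are made up:

```python
# Sketch of the evaluation formulas above, using hypothetical counts.
def precision(tp, fp):
    # Ratio of correct predictions to all attempted predictions.
    return tp / (tp + fp)

def recall(tp, fn):
    # Ratio of correct predictions to all actual instances.
    return tp / (tp + fn)

def f1_score(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

tp, fp, fn = 80, 20, 20  # made-up true positive / false positive / false negative counts
p = precision(tp, fp)    # 0.8
r = recall(tp, fn)       # 0.8
f1 = f1_score(p, r)      # approximately 0.8
```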
+## Confusion matrix
+
+A confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of intents.
+The matrix compares the actual tags with the tags predicted by the model.
+This gives a holistic view of how well the model is performing and what kinds of errors it is making.
+
+You can use the confusion matrix to identify intents that are too close to each other and often get mistaken (ambiguity). In this case, consider merging these intents together. If that isn't possible, consider adding more tagged examples of both intents to help the model differentiate between them.
+
+You can calculate the model-level evaluation metrics from the confusion matrix:
+
+* The *true positive* of the model is the sum of *true positives* for all intents.
+* The *false positive* of the model is the sum of *false positives* for all intents.
+* The *false negative* of the model is the sum of *false negatives* for all intents.
+
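For example, the model-level counts can be read off a small confusion matrix; the intents and numbers here are made up:

```python
# Hypothetical 3-intent confusion matrix: rows are actual intents,
# columns are predicted intents. All values are made up for illustration.
confusion = [
    [50, 3, 2],   # actual intent A
    [4, 40, 1],   # actual intent B
    [2, 2, 46],   # actual intent C
]

n = len(confusion)
# True positives sit on the diagonal (predicted == actual).
true_positives = sum(confusion[i][i] for i in range(n))

# Summed over all intents, both false positives (off-diagonal column mass)
# and false negatives (off-diagonal row mass) equal the total off-diagonal mass.
total = sum(sum(row) for row in confusion)
false_positives = total - true_positives
false_negatives = total - true_positives
```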
+## Next steps
+
+[Train a model in Language Studio](../how-to/train-model.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/faq.md
+
+ Title: Frequently Asked Questions for orchestration projects
+
+description: Use this article to quickly get the answers to FAQ about orchestration projects
++++++ Last updated : 01/10/2022++++
+# Frequently asked questions for orchestration workflows
+
+Use this article to quickly get the answers to common questions about orchestration workflows.
+
+## How do I create a project?
+
+See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
+
+## How do I connect other service applications in orchestration workflow projects?
+
+See [How to create projects and build schemas](./how-to/create-project.md) for information on connecting another project as an intent.
+
+## Which LUIS applications can I connect to in orchestration workflow projects?
+
+LUIS applications that use the Language resource as their authoring resource will be available for connection. You can only connect to LUIS applications that are owned by the same resource. This option will only be available for resources in West Europe, as it's the only common available region between LUIS and CLU.
+
+## Training is taking a long time, is this expected?
+
+For orchestration projects, long training times are expected. Based on the number of examples you have, training times may vary from 5 minutes to 1 hour or more.
+
+## Can I add entities to orchestration workflow projects?
+
+No. Orchestration projects are only enabled for intents that can be connected to other projects for routing.
+
+<!--
+## Which languages are supported in this feature?
+
+See the [language support](./language-support.md) article.
+-->
+## How do I get more accurate results for my project?
+
+Take a look at the [recommended guidelines](./how-to/create-project.md) for information on improving accuracy.
+<!--
+## How many intents, and utterances can I add to a project?
+
+See the [service limits](./service-limits.md) article.
+-->
+## Can I label the same word as 2 different entities?
+
+Unlike LUIS, you cannot label the same text as 2 different entities. Learned components across different entities are mutually exclusive, and only one learned span is predicted for each set of characters.
+
+## Is there any SDK support?
+
+Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleCode). There is currently no authoring support for the SDK.
+
+## Are there APIs for this feature?
+
+Yes, all the APIs [are available](https://aka.ms/clu-apis).
+
+## Next steps
+
+[Orchestration workflow overview](overview.md)
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/create-project.md
+
+ Title: How to create projects and build schema in orchestration workflow
+
+description: Use this article to learn how to create projects in orchestration workflow
++++++ Last updated : 11/02/2021++++
+# How to create projects in orchestration workflow
+
+Orchestration workflow allows you to create projects that connect your applications to:
+* Conversational Language Understanding
+* Question Answering
+* LUIS
+* QnA Maker
+
+## Sign in to Language Studio
+
+To get started, you have to first sign in to [Language Studio](https://aka.ms/languageStudio) and create a Language resource. Select **Done** once selection is complete.
+
+In Language Studio, find the **Understand questions and conversational language** section, and select **Orchestration workflow**.
+
+You will see the orchestration workflow projects page.
+
+<!--:::image type="content" source="../media/projects-page.png" alt-text="A screenshot showing the Conversational Language Understanding projects page." lightbox="../media/projects-page.png":::-->
+
+## Create an orchestration workflow project
+
+Select **Create new project**. When creating your workflow project, you need to provide the following details:
+- Name: Project name
+- Description: Optional project description
+- Utterances primary language: The primary language of your utterances.
+
+## Building schema and adding intents
+
+Once you're done creating a project, you can connect it to the other projects and services you want to orchestrate to. Each connection is represented by its type and relevant data.
+
+To create a new intent, click on the **+Add** button and start by giving your intent a **name**. You will see two options: to connect the intent to a project, or not. You can connect to LUIS, question answering (QnA), or Conversational Language Understanding projects, or choose the **no** option.
+
+> [!NOTE]
+> The list of projects you can connect to includes only projects that are owned by the same Language resource you are using to create the orchestration project.
+++
+In Orchestration Workflow projects, the data used to train connected intents isn't provided within the project. Instead, the project pulls the data from the connected service (such as connected LUIS applications, Conversational Language Understanding projects, or Custom Question Answering knowledge bases) during training. However, if you create intents that are not connected to any service, you still need to add utterances to those intents.
+
+## Export and import a project
+
+You can export an orchestration workflow project as a JSON file at any time by going to the projects page, selecting a project, and pressing **Export**.
+That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
+
+To import a project, select the arrow button on the projects page next to **Create a new project** and select **Import**. Then select the orchestration workflow JSON file.
+
+## Next Steps
+
+[Train a model](./train-model.md)
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/deploy-query-model.md
+
+ Title: How to send API requests to an orchestration workflow project
+
+description: Learn about sending a request to orchestration workflow projects.
++++++ Last updated : 03/03/2022+
+ms.devlang: csharp, python
+++
+# Deploy and test an orchestration workflow model
+
+After you have [trained a model](./train-model.md) on your dataset, you're ready to deploy it. After deploying your model, you'll be able to query it for predictions.
+
+> [!Tip]
+> Before deploying a model, make sure to view the model details to make sure that the model is performing as expected.
+> You can only have ten deployment names.
+
+## Orchestration workflow model deployments
+
+Deploying a model hosts it and makes it available for predictions through an endpoint.
+
+When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated with it.
+
+1. From the left side, click on **Deploy model**.
+
+2. Click on **Add deployment** to submit a new deployment job.
+
+    In the window that appears, you can create a new deployment name or override an existing deployment name. Then, add a trained model to this deployment name and select **Next**.
+
+3. If you're connecting one or more LUIS applications or conversational language understanding projects, you have to specify the deployment name.
+
+ No configurations are required for custom question answering or unlinked intents.
+
+ LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
+
+    :::image type="content" source="../media/deploy-connected-services.png" alt-text="A screenshot showing the deployment screen for orchestration workflow projects." lightbox="../media/deploy-connected-services.png":::
+
+## Send a request to your model
+
+Once your model is deployed, you can begin using the deployed model for predictions. Outside of the test model page, you can begin calling your deployed model via API requests to your provided custom endpoint. This endpoint request obtains the intent and entity predictions defined within the model.
+
+You can get the full URL for your endpoint by going to the **Deploy model** page, selecting your deployed model, and clicking on "Get prediction URL".
++
+Add your key to the `Ocp-Apim-Subscription-Key` header value, and replace the query and language parameters.
+
+> [!TIP]
+> As you construct your requests, see the [quickstart](../quickstart.md?pivots=rest-api#query-model) and REST API [reference documentation](https://aka.ms/clu-apis) for more information.
+
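As a sketch, a prediction request can be assembled as below. The endpoint path, placeholder values, and payload fields here are assumptions for illustration; copy the real prediction URL from the **Deploy model** page rather than hand-building it:

```python
# Sketch of building a prediction request for a deployed model.
# The endpoint path and payload shape are illustrative assumptions;
# use the URL from "Get prediction URL" in Language Studio.
import json

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
prediction_url = f"{endpoint}/language/analyze-conversations"     # placeholder path

headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",  # your Language resource key
    "Content-Type": "application/json",
}

# Replace the query and language parameters with your own values.
body = {
    "query": "Cancel my next meeting",
    "language": "en-us",
}

request_payload = json.dumps(body)  # JSON body to POST to prediction_url
```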
+### Use the client libraries (Azure SDK)
+
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+
+> [!NOTE]
+> The client library for Orchestration workflow is only available for:
+> * .NET
+> * Python
+
+1. Go to your resource overview page in the [Azure portal](https://portal.azure.com/#home)
+
+2. From the menu on the left side, select **Keys and Endpoint**. Use the endpoint for your API requests, and the key for the `Ocp-Apim-Subscription-Key` header.
+
+ :::image type="content" source="../../custom-classification/media/get-endpoint-azure.png" alt-text="Get the Azure endpoint" lightbox="../../custom-classification/media/get-endpoint-azure.png":::
+
+3. Download and install the client library package for your language of choice:
+
+ |Language |Package version |
+ |||
+ |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
+ |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+
+4. After you've installed the client library, use the following samples on GitHub to start calling the API.
+
+ * [C#](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples)
+ * [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cognitivelanguage/azure-ai-language-conversations/samples)
+
+5. See the following reference documentation for more information:
+
+ * [C#](/dotnet/api/azure.ai.language.conversations?view=azure-dotnet-preview&preserve-view=true)
+ * [Python](/python/api/azure-ai-language-conversations/azure.ai.language.conversations?view=azure-python-preview&preserve-view=true)
+
+## API response for an orchestration workflow project
+
+Orchestration workflow projects return the response of the top scoring intent, along with the response of the service it's connected to.
+- Within the intent, the *targetKind* parameter lets you determine the type of response that was returned by the orchestrator's top intent (conversation, LUIS, or QnA Maker).
+- You will get the response of the connected service in the *result* parameter.
+
+Within the request, you can specify additional parameters for each connected service, in the event that the orchestrator routes to that service.
+- Within the project parameters, you can optionally specify a different query to the connected service. If you don't specify a different query, the original query will be used.
+- The *direct target* parameter allows you to bypass the orchestrator's routing decision and directly target a specific connected intent to force a response for it.
+
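A response following this description might be consumed as sketched below; the JSON shape and intent names are assumptions for illustration, not the exact API schema:

```python
# Illustrative parsing of an orchestration response. The exact JSON shape
# is an assumption for this sketch; see the REST API reference for the
# real schema.
response = {
    "prediction": {
        "topIntent": "EmailIntent",
        "intents": {
            "EmailIntent": {
                "targetKind": "luis",  # type of connected service that answered
                "result": {"query": "read my emails"},  # connected service's response
            }
        },
    }
}

top = response["prediction"]["topIntent"]
intent = response["prediction"]["intents"][top]
target_kind = intent["targetKind"]       # which kind of service handled the query
connected_result = intent["result"]      # the connected service's own response
```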
+## Next steps
+
+* [Orchestration project overview](../overview.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/tag-utterances.md
+
+ Title: How to tag utterances in an orchestration workflow project
+
+description: Use this article to tag utterances
++++++ Last updated : 03/03/2022++++
+# How to tag utterances in orchestration workflow projects
+
+Once you have [built a schema](create-project.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance you have to assign which intent it belongs to. You can only add utterances to the created intents within the project and not the connected intents.
+
+## Filter Utterances
+
+Clicking on **Filter** lets you view only the utterances associated to the intents you select in the filter pane.
+When you click on an intent in the [build schema](./create-project.md) page, you'll be taken to the **Tag Utterances** page, with that intent filtered automatically.
+
+## Next Steps
+* [Train and Evaluate Model](./train-model.md)
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/train-model.md
+
+ Title: How to train and evaluate models in orchestration workflow projects
+
+description: Use this article to train an orchestration model and view its evaluation details to make improvements.
++++++ Last updated : 03/03/2022++++
+# Train and evaluate orchestration workflow models
+
+After you have completed [tagging your utterances](./tag-utterances.md), you can train your model. Training is the act of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you have to name your training instance.
+
+You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
+
+The training times can be anywhere from a few seconds, up to a couple of hours when you reach high numbers of utterances.
+
+## Train model
+
+Select **Train model** on the left of the screen. Select **Start a training job** from the top menu.
+
+Enter a new model name or select an existing model from the **Model Name** dropdown.
++
+Click the **Train** button and wait for training to complete. You will see the training status of your model in the view model details page. Only successfully completed jobs will generate models.
+
+## Evaluate model
+
+After model training is completed, you can view your model details and see how well it performs against the test set in the training step. Observing how well your model performed is called evaluation. The test set is composed of 20% of your utterances, and this split is done at random before training. The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 utterances in your training set.
+
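The random 80/20 split described above can be sketched as follows, with hypothetical utterances:

```python
# Sketch of a random 80/20 train/test split, as described above.
# The utterances are made up; the service performs this split internally.
import random

utterances = [f"utterance {i}" for i in range(50)]  # hypothetical data
random.seed(0)            # fixed seed so the example split is reproducible
shuffled = utterances[:]  # copy, so the original order is untouched
random.shuffle(shuffled)

cut = int(len(shuffled) * 0.8)
train_set = shuffled[:cut]  # 80% used for training
test_set = shuffled[cut:]   # 20% held out for evaluation
```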
+In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained.
++
+* Click on the model name for more details. A model name is only clickable if you've enabled evaluation beforehand.
+* In the **Overview** section you can find the macro precision, recall and F1 score for the collective intents.
+* Under the **Intents** tab you can find the micro precision, recall and F1 score for each intent separately.
+
+> [!NOTE]
+> If you don't see any of the intents you have in your model displayed here, it is because they weren't in any of the utterances that were used for the test set.
+
+You can view the [confusion matrix](../concepts/evaluation-metrics.md) for intents by clicking on the **Test set confusion matrix** tab at the top of the screen.
+
+## Next steps
+* [Model evaluation metrics](../concepts/evaluation-metrics.md)
+* [Deploy and query the model](./deploy-query-model.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
+
+ Title: Orchestration workflows - Azure Cognitive Services
+
+description: Learn how to use Orchestration workflows.
++++++ Last updated : 03/03/2022++++
+# What are orchestration workflows?
+
+Orchestration workflow is a cloud-based service that enables you to train language models to connect your applications to:
+* Conversational Language Understanding
+* Question Answering
+* LUIS
+* QnA Maker
++
+The API is a part of [Azure Cognitive Services](../../index.yml), a collection of machine learning and AI algorithms in the cloud for your development projects. You can use these features with the REST API, or the client libraries.
+
+## Features
+
+* Advanced natural language understanding technology using neural networks.
+* Orchestration project types that allow you to connect services including other Conversational Language Understanding projects, custom question answering knowledge bases, and LUIS applications.
+
+## Reference documentation and code samples
+
+As you use orchestration workflow in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the following articles for more information:
+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/quickstart.md
+
+ Title: Quickstart - create an orchestration workflow project
+
+description: Use this article to quickly get started with orchestration workflows
+Last updated : 01/27/2022
+zone_pivot_groups: usage-custom-language-features
++
+# Quickstart: Orchestration workflow (preview)
+
+Use this article to get started with Orchestration workflow projects using Language Studio and the REST API. Follow these steps to try out an example.
+++++++
+## Next steps
+
+* [Learn about orchestration workflows](overview.md)
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
The list of geographies you can choose from includes:
- India
- Japan
- Korea
+- Norway
+- Switzerland
+- United Arab Emirates
- United Kingdom
- United States
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/certified-session-border-controllers.md
Microsoft works with each vendor to:
- Establish a joint support process with the SBC vendors.

[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]

Media bypass is not yet supported by Azure Communication Services. Early media is not supported by a web-based client.
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
# Azure direct routing infrastructure requirements

[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]

This article describes infrastructure, licensing, and Session Border Controller (SBC) connectivity details that you want to keep in mind as you plan your Azure direct routing deployment.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
Azure Communication Services direct routing enables you to connect your existing telephony infrastructure to Azure. The article lists the high-level steps required for connecting a supported Session Border Controller (SBC) to direct routing and how voice routing works for the enabled Communication resource.

[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]

For information about whether Azure Communication Services direct routing is the right solution for your organization, see [Azure telephony concepts](./telephony-concept.md). For information about prerequisites and planning your deployment, see [Communication Services direct routing infrastructure requirements](./direct-routing-infrastructure.md).
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md
# Telephony concepts

[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]

Azure Communication Services Calling SDKs can be used to add telephony and Public Switched Telephone Network access to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/getting-started-with-calling.md) to learn more about specific SDK languages and capabilities.
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/closed-captions.md
The private preview will be available on all platforms.
## Next steps -- Get started with a [Closed Captions Quickstart](/azure/communication-services/quickstarts/voice-video-calling/get-started-with-closed-captions?pivots=platform-iosBD)
+- Get started with a [Closed Captions Quickstart](../../quickstarts/voice-video-calling/get-started-with-closed-captions.md?pivots=platform-iosBD)
communication-services Trusted Auth Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md
Since this sample only focuses on the server APIs, the client application is not
To be able to run this sample, you will need to: -- Register a Client and Server (Web API) applications in Azure Active Directory as part of [On Behalf Of workflow](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow). Follow instructions on [registrations set up guideline](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp/blob/main/docs/deployment-guides/set-up-app-registrations.md)
+- Register Client and Server (Web API) applications in Azure Active Directory as part of the [On-Behalf-Of workflow](../../active-directory/develop/v2-oauth2-on-behalf-of-flow.md). Follow the instructions in the [app registration setup guide](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp/blob/main/docs/deployment-guides/set-up-app-registrations.md).
- A deployed Azure Communication Services resource. [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md?tabs=linux&pivots=platform-azp).
- Update the Server (Web API) application with information from the app registrations.
To be able to run this sample, you will need to:
::: zone pivot="programming-language-csharp" [!INCLUDE [C# Auth hero](./includes/csharp-auth-hero.md)]
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL act
<a name="trigger-recurrence-shift-drift"></a>
-### Trigger recurrence shift and drift
+## Trigger recurrence shift and drift (daylight saving time)
-Connection-based triggers where you need to create a connection first, such as the SQL trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior, for example, not maintaining the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence so that your logic app continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+Recurring connection-based triggers where you need to create a connection first, such as the managed SQL Server trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+
+To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
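The one-hour shift described above can be seen in a short sketch: a recurrence computed purely in UTC fires every 24 hours in UTC, so its local wall-clock time drifts by an hour across a DST boundary. The dates and time zone below are illustrative.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# A recurrence anchored to a fixed UTC instant fires every 24 hours in UTC.
# In the US, DST started on March 13, 2022 at 2:00 AM local time.
first_run_utc = datetime(2022, 3, 12, 14, 0, tzinfo=timezone.utc)
second_run_utc = first_run_utc + timedelta(hours=24)

# Before the DST change: 14:00 UTC is 9:00 AM Eastern (UTC-5).
print(first_run_utc.astimezone(eastern).strftime("%H:%M"))   # 09:00

# After the change: the same 14:00 UTC is 10:00 AM Eastern (UTC-4),
# so the local run time has shifted one hour forward.
print(second_run_utc.astimezone(eastern).strftime("%H:%M"))  # 10:00
```

Manually adjusting the recurrence after the DST change moves the UTC fire time back in line with the local time you intended.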
<a name="add-sql-action"></a>
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
The following example shows how to update the trigger definition so that the tri
<a name="daylight-saving-standard-time"></a>
-## Trigger recurrence shift between daylight saving time and standard time
+## Trigger recurrence shift and drift (daylight saving time)
-Recurring built-in triggers honor the schedule that you set, including any time zone that you specify. If you don't select a time zone, daylight saving time (DST) might affect when triggers run, for example, shifting the start time one hour forward when DST starts and one hour backward when DST ends.
+To schedule jobs, Azure Logic Apps puts the message for processing into the queue and specifies when that message becomes available, based on the UTC time when the last job ran and the UTC time when the next job is scheduled to run. If you specify a start time with your recurrence, *make sure that you select a time zone* so that your logic app workflow runs at the specified start time. That way, the UTC time for your logic app also shifts to counter the seasonal time change. Recurring triggers honor the schedule that you set, including any time zone that you specify.
+
+Otherwise, if you don't select a time zone, daylight saving time (DST) events might affect when triggers run. For example, the start time shifts one hour forward when DST starts and one hour backward when DST ends. However, some time windows might cause problems when the time shifts. For more information and examples, see [Recurrence for daylight saving time and standard time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#daylight-saving-standard-time).
-To avoid this shift so that your logic app runs at your specified start time, make sure that you select a time zone. That way, the UTC time for your logic app also shifts to counter the seasonal time change. However, some time windows might cause problems when the time shifts. For more information and examples, see [Recurrence for daylight saving time and standard time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#daylight-saving-standard-time).
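Selecting a time zone amounts to setting the `timeZone` property (and, if you use one, a `startTime` without a UTC offset) in the trigger's recurrence. A minimal sketch of a Recurrence trigger definition with these values (the schedule itself is illustrative):

```json
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Day",
            "interval": 1,
            "startTime": "2022-03-01T09:00:00",
            "timeZone": "Eastern Standard Time"
        }
    }
}
```

With `timeZone` set, the underlying UTC fire time shifts with the seasonal change, so the workflow keeps running at 9:00 AM local time.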
## Next steps
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
When a trigger finds a new file, the trigger checks that the new file is complet
<a name="trigger-recurrence-shift-drift"></a>
-### Trigger recurrence shift and drift
+## Trigger recurrence shift and drift (daylight saving time)
-Connection-based triggers where you need to create a connection first, such as the SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In connection-based recurrence triggers, the schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends. To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+Recurring connection-based triggers where you need to create a connection first, such as the managed SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+
+To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
<a name="convert-to-openssh"></a>
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account or migrate an account from periodic to continuous mode.
+* Continuous mode restore might not restore the throughput setting that was in effect as of the restore point.
+
## Next steps

* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
cosmos-db Graph Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-partitioning.md
The following guidelines describe how the partitioning strategy in Azure Cosmos
`g.V(['partitionKey_value', 'vertex_id'])`
- - Specifying an **array of tuples of partition key values and IDs**:
-
- ```java
- g.V(['partitionKey_value0', 'verted_id0'], ['partitionKey_value1', 'vertex_id1'], ...)
- ```
- - Selecting a set of vertices with their IDs and **specifying a list of partition key values**: ```java
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
If you're testing at high throughput levels, or at rates that are greater than 5
## <a id="metadata-operations"></a> Metadata operations
-Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
+Do not verify a Database and/or Container exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests) that do not scale like data operations.
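The pattern this advice implies can be sketched with stand-in types (the stub classes below are illustrative, not the real Cosmos DB SDK): run the `Create...IfNotExists` check once at startup, cache the container handle, and keep the hot path to data operations only.

```python
class StubContainer:
    """Stand-in for a Cosmos container; stores items in memory."""
    def __init__(self):
        self.items = {}

    def upsert_item(self, item):
        self.items[item["id"]] = item
        return item


class StubClient:
    """Stand-in for a Cosmos client; counts metadata operations."""
    def __init__(self):
        self.metadata_calls = 0
        self._container = StubContainer()

    def create_container_if_not_exists(self, name):
        self.metadata_calls += 1  # metadata operation: keep these rare
        return self._container


class OrderStore:
    def __init__(self, client):
        # Existence check happens ONCE, at application startup --
        # never in the hot path or before each item operation.
        self.container = client.create_container_if_not_exists("orders")

    def save(self, order):
        # Hot path: data operation only, no Create...IfNotExists first.
        return self.container.upsert_item(order)


client = StubClient()
store = OrderStore(client)
for i in range(100):
    store.save({"id": str(i)})

print(client.metadata_calls)  # 1 -- not 100
```

One hundred writes cost a single metadata call; doing the check per request would instead pay the extra latency and metadata rate limits on every operation.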
## <a id="logging-and-tracing"></a> Logging and tracing
The request charge (that is, the request-processing cost) of a specified operati
## Next steps

For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
-To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
+To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips.md
If you're testing at high throughput levels (more than 50,000 RU/s), the client
## <a id="metadata-operations"></a> Metadata operations
-Do not verify a Database and/or Collection exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
+Do not verify a Database and/or Collection exists by calling `Create...IfNotExistsAsync` and/or `Read...Async` in the hot path and/or before doing an item operation. The validation should only be done on application startup when it is necessary, if you expect them to be deleted (otherwise it's not needed). These metadata operations will generate extra end-to-end latency, have no SLA, and their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests) that do not scale like data operations.
## <a id="logging-and-tracing"></a> Logging and tracing
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Consider the following when developing your application:
## Metadata operations
-If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. The validation should only be done on application startup when it's necessary, if you expect them to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429). They don't scale like data operations.
+If you need to verify that a database or container exists, don't do so by calling `Create...IfNotExistsAsync` or `Read...Async` before doing an item operation. The validation should only be done on application startup when it's necessary, if you expect them to be deleted. These metadata operations generate extra latency, have no service-level agreement (SLA), and have their own separate [limitations](/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#rate-limiting-on-metadata-requests). They don't scale like data operations.
## Slow requests on bulk mode
Contact [Azure support](https://aka.ms/azure-support).
## Next steps

* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
The Diagnostics are returned as a string. The string changes with each version a
The following code sample shows how to read diagnostic logs using the Java V4 SDK:

> [!IMPORTANT]
-> We recommend validating the minimum recommended version of the Java V4 SDK and ensure you are using this version or higher. You can check recommended version [here](/azure/cosmos-db/sql/sql-api-sdk-java-v4#recommended-version).
+> We recommend checking the minimum recommended version of the Java V4 SDK and ensuring that you are using this version or higher. You can check the recommended version [here](./sql-api-sdk-java-v4.md#recommended-version).
# [Sync](#tab/sync)
Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` st
[Common issues and workarounds]: #common-issues-workarounds
[Enable client SDK logging]: #enable-client-side-logging
[Connection limit on a host machine]: #connection-limit-on-host
-[Azure SNAT (PAT) port exhaustion]: #snat
+[Azure SNAT (PAT) port exhaustion]: #snat
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
When you're ready, use the following instructions. You can also go along with th
>[!VIDEO https://www.youtube.com/embed/gfiUI2YLsgc]
-## Request billing ownership
+## Create the transfer request
+
+In the following procedure, you navigate to **Transfer requests** by selecting a **Billing scope** &gt; **Billing account** &gt; **Billing profile** &gt; **Invoice sections**, where you **Add a new request**. If you navigate to **Add a new request** directly from a billing profile, you'll have to select a billing profile and then select an invoice section.
1. Sign in to the [Azure portal](https://portal.azure.com) as an invoice section owner or contributor for a billing account for Microsoft Customer Agreement. Use the same credentials that you used to accept your Microsoft Customer Agreement.
1. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search for cost management + billing](./media/mca-request-billing-ownership/billing-search-cost-management-billing.png)
+ :::image type="content" source="./media/mca-request-billing-ownership/billing-search-cost-management-billing.png" alt-text="Screenshot that shows Azure portal search for Cost Management + Billing." lightbox="./media/mca-request-billing-ownership/billing-search-cost-management-billing.png" :::
1. On the billing scopes page, select **Billing scopes** and then select the billing account, which would be used to pay for Azure usage in your subscriptions. Select the billing account labeled **Microsoft Customer Agreement**.
- [![Screenshot that shows search in portal for cost management + billing](./media/mca-request-billing-ownership/list-of-scopes.png)](./media/mca-request-billing-ownership/list-of-scopes.png#lightbox)
- > [!NOTE]
- >The Azure portal remembers the last billing scope that you access and displays the scope the next time you come to Cost Management + Billing page. You won't see the billing scopes page if you have visited Cost Management + Billing earlier. If so, check that you are in the [right scope](#check-for-access). If not, [switch the scope](view-all-accounts.md#switch-billing-scope-in-the-azure-portal) to select the billing account for a Microsoft Customer Agreement.
-1. Select **Billing profiles** from the left-hand side.
- [![Screenshot that shows selecting billing profiles](./media/mca-request-billing-ownership/mca-select-profiles.png)](./media/mca-request-billing-ownership/mca-select-profiles.png#lightbox)
- > [!NOTE]
+ :::image type="content" source="./media/mca-request-billing-ownership/billing-scopes.png" alt-text="Screenshot that shows search in portal for Cost Management + Billing." lightbox="./media/mca-request-billing-ownership/billing-scopes.png" :::
+   The Azure portal remembers the last billing scope that you accessed and displays that scope the next time you go to the Cost Management + Billing page. You won't see the billing scopes page if you have visited Cost Management + Billing earlier. If so, check that you are in the [right scope](#check-for-access). If not, [switch the scope](view-all-accounts.md#switch-billing-scope-in-the-azure-portal) to select the billing account for a Microsoft Customer Agreement.
+1. Select **Billing profiles** from the left-hand side and then select a **Billing profile** from the list. Once you take over the ownership of the subscriptions, their usage will be billed to this billing profile.
+ :::image type="content" source="./media/mca-request-billing-ownership/billing-profile.png" alt-text="Screenshot that shows selecting billing profiles." lightbox="./media/mca-request-billing-ownership/billing-profile.png" :::
+ >[!NOTE]
> If you don't see Billing profiles, you are not in the right billing scope. You need to select a billing account for a Microsoft Customer Agreement and then select Billing profiles. To learn how to change scopes, see [Switch billing scopes in the Azure portal](view-all-accounts.md#switch-billing-scope-in-the-azure-portal).
-1. Select a **Billing profile** from the list. Once you take over the ownership of the subscriptions, their usage will be billed to this billing profile.
-1. Select **Invoice sections** from the left-hand side.
- [![Screenshot that shows selecting invoice sections](./media/mca-request-billing-ownership/mca-select-invoice-sections.png)](./media/mca-request-billing-ownership/mca-select-invoice-sections.png#lightbox)
-1. Select an invoice section from the list. Each billing profile contains on invoice section by default. Select the invoice where you want to move your Azure subscription billing - that's where the Azure subscription consumption is transferred to.
-1. Select **Transfer requests** from the lower-left side and then select **Add a new request**.
- [![Screenshot that shows selecting transfer requests](./media/mca-request-billing-ownership/mca-select-transfer-requests.png)](./media/mca-request-billing-ownership/mca-select-transfer-requests.png#lightbox)
-1. Enter the email address of the user you're requesting billing ownership from. The user must have an account administrator role for the old subscriptions. Select **Send transfer request**.
- [![Screenshot that shows sending a transfer request](./media/mca-request-billing-ownership/mca-send-transfer-requests.png)](./media/mca-request-billing-ownership/mca-send-transfer-requests.png#lightbox)
+1. Select **Invoice sections** from the left-hand side and then select an invoice section from the list. Each billing profile contains one invoice section by default. Select the invoice section where you want to move your Azure subscription billing - that's where the Azure subscription consumption is transferred to.
+ :::image type="content" source="./media/mca-request-billing-ownership/invoice-section.png" alt-text="Screenshot that shows selecting invoice sections." lightbox="./media/mca-request-billing-ownership/invoice-section.png" :::
+1. Select **Transfer requests** from the lower-left side and then select **Add a new request**. Enter the email address of the user you're requesting billing ownership from. The user must have an account administrator role for the old subscriptions.
+ :::image type="content" source="./media/mca-request-billing-ownership/transfer-request-add-email.png" alt-text="Screenshot that shows selecting transfer requests." lightbox="./media/mca-request-billing-ownership/transfer-request-add-email.png" :::
+1. Select **Send transfer request**.
## Review and approve transfer request
-1. The user gets an email with instructions to review your transfer request.
- ![Screenshot that shows review transfer request email](./media/mca-request-billing-ownership/mca-review-transfer-request-email.png)
-1. To approve the transfer request, the user selects the link in the email and follows the instructions.
-
- The user selects the billing account that they want to transfer Azure products from. Once selected, eligible products that can be transferred are shown. Once the Azure products to be transferred are selected, select **Validate**.
-
+1. The user gets an email with instructions to review your transfer request. Select **Review the request** to open it in the Azure portal.
+ :::image type="content" source="./media/mca-request-billing-ownership/mca-review-transfer-request-email.png" alt-text="Screenshot that shows review transfer request email." lightbox="./media/mca-request-billing-ownership/mca-review-transfer-request-email.png" :::
+1. In the Azure portal, the user selects the billing account that they want to transfer Azure products from. Then they select eligible subscriptions on the **Subscriptions** tab.
+ :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" alt-text="Screenshot showing the Subscriptions tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" :::
>[!NOTE]
- > Disabled subscriptions can't be transferred and will show up in the "Non-transferrable Azure Products" list if applicable.
+ > Disabled subscriptions can't be transferred.
+1. If there are reservations available to transfer, select the **Reservations** tab. Then select them.
+ :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" alt-text="Screenshot showing the Reservations tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" :::
+1. Select the **Review request** tab and verify the information about the subscriptions and reservations to transfer. If there are **Warnings** or **Failed** status messages, see the following information. When you're ready to continue, select **Transfer**.
+ :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-complete.png" alt-text="Screenshot showing the Review request tab where you review your transfer selections." lightbox="./media/mca-request-billing-ownership/review-transfer-request-complete.png" :::
+1. You'll briefly see a `Transfer is in progress` message. When the transfer is completed successfully, you'll see the Transfer details page with the `Transfer completed successfully` message.
+ :::image type="content" source="./media/mca-request-billing-ownership/transfer-completed-successfully.png" alt-text="Screenshot showing the Transfer completed successfully page." lightbox="./media/mca-request-billing-ownership/transfer-completed-successfully.png" :::
- [![Screenshot that shows review transfer request](./media/mca-request-billing-ownership/review-transfer-requests.png)](./media/mca-request-billing-ownership/review-transfer-requests.png#lightbox)
-1. The **Transfer Validation Result** area shows the status of the Azure products that are going to be transferred. Here are the possible states:
- * **Passed** - Validation for this Azure product has passed and can be transferred.
- * **Warning** - There's a warning for the selected Azure product. While the product can still be transferred, doing so will have some consequence that the user should be aware of in case they want to take mitigating actions. For example, the Azure subscription being transferred is benefitting from an RI. After transfer, the subscription will no longer receive that benefit. To maximize savings, ensure that the RI is associated to another subscription that can use its benefits. Instead, the user can also choose to go back to the selection page and unselect this Azure subscription.
- * **Failed** - The selected Azure product can't be transferred because of an error. User will need to go back to the selection page and unselect this product to transfer the other selected Azure products.
- ![Screenshot that shows validation experience](./media/mca-request-billing-ownership/validate-transfer-request.png)
-1. After validation completes as **Passed**, select **Transfer**. You'll see a `Transfer is in progress` message and then when complete, a `Transfer completed successfully` message is shown.
+On the Review request tab, the following status messages might be displayed.
+
+* **Ready to transfer** - Validation for this Azure product has passed and can be transferred.
+* **Warnings** - There's a warning for the selected Azure product. While the product can still be transferred, doing so has consequences that the user should be aware of so they can take mitigating actions. For example, if the Azure subscription being transferred benefits from a reserved instance (RI), the subscription no longer receives that benefit after the transfer. To maximize savings, ensure that the RI is associated with another subscription that can use it. Alternatively, the user can go back to the selection page and unselect this Azure subscription. Select **Check details** for more information.
+* **Failed** - The selected Azure product can't be transferred because of an error. The user needs to go back to the selection page and unselect this product to transfer the other selected Azure products.
## Check the transfer request status
+As the user who requested the transfer:
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search for cost management + billing](./media/mca-request-billing-ownership/billing-search-cost-management-billing.png)
-1. In the billing scopes page, select the billing account for which the transfer request was sent.
-1. Select **Billing profiles** from the left-hand side.
- [![Screenshot that shows selecting billing profiles](./media/mca-request-billing-ownership/mca-select-profiles.png)](./media/mca-request-billing-ownership/mca-select-profiles.png#lightbox)
-1. Select the **Billing profile** for which the transfer request was sent.
-1. Select **Invoice sections** from the left-hand side.
- [![Screenshot that shows selecting invoice sections](./media/mca-request-billing-ownership/mca-select-invoice-sections.png)](./media/mca-request-billing-ownership/mca-select-invoice-sections.png#lightbox)
-1. Select the invoice section from the list for which the transfer request was sent.
-1. Select **Transfer requests** from the lower-left side. The Transfer requests page displays the following information:
- [![Screenshot that shows list of transfer requests](./media/mca-request-billing-ownership/mca-select-transfer-requests-for-status.png)](./media/mca-request-billing-ownership/mca-select-transfer-requests-for-status.png#lightbox)
-
- |Column|Definition|
- |||
- |Request date|The date when the transfer request was sent|
- |Recipient|The email address of the user that you sent the request to transfer billing ownership|
- |Expiration date|The date when the request expires|
- |Status|The status of transfer request|
-
- The transfer request can have one of the following statuses:
-
- |Status|Definition|
- |||
- |In progress|The user hasn't accepted the transfer request|
- |Processing|The user approved the transfer request. Billing for subscriptions that the user selected is getting transferred to your invoice section|
- |Completed| The billing for subscriptions that the user selected is transferred to your invoice section|
- |Finished with errors|The request completed but billing for some subscriptions that the user selected couldn't be transferred|
- |Expired|The user didn't accept the request on time and it expired|
- |Canceled|Someone with access to the transfer request canceled the request|
- |Declined|The user declined the transfer request|
+1. Search for **Cost Management + Billing**.
+1. In the billing scopes page, select the billing account where the transfer request was started and then in the left menu, select **Transfer requests**.
+1. Select the billing profile and invoice section where the transfer request was started and review the status.
+ :::image type="content" source="./media/mca-request-billing-ownership/transfer-requests-status-completed.png" alt-text="Screenshot that shows the list of transfers with their status. " lightbox="./media/mca-request-billing-ownership/transfer-requests-status-completed.png" :::
+
+The Transfer requests page displays the following information:
+
+|Column|Definition|
+|||
+|Request date|The date when the transfer request was sent|
+|Recipient|The email address of the user to whom you sent the billing ownership transfer request|
+|Expiration date|The date when the request expires|
+|Status|The status of transfer request|
+
+The transfer request can have one of the following statuses:
+
+|Status|Definition|
+|||
+|In progress|The user hasn't accepted the transfer request.|
+|Processing|The user approved the transfer request. Billing for subscriptions that the user selected is getting transferred to your invoice section.|
+|Completed| The billing for subscriptions that the user selected is transferred to your invoice section.|
+|Finished with errors|The request completed but billing for some subscriptions that the user selected couldn't be transferred.|
+|Expired|The user didn't accept the request on time and it expired.|
+|Canceled|Someone with access to the transfer request canceled the request.|
+|Declined|The user declined the transfer request.|
+
+As the user who approved the transfer:
1. Select a transfer request to view details. The transfer details page displays the following information:
- [![Screenshot that shows list of transferred subscriptions](./media/mca-request-billing-ownership/mca-transfer-completed.png)](./media/mca-request-billing-ownership/mca-transfer-completed.png#lightbox)
+ :::image type="content" source="./media/mca-request-billing-ownership/transfer-status-success-approver-view.png" alt-text="Screenshot that shows the Transfer status page with example status." lightbox="./media/mca-request-billing-ownership/transfer-status-success-approver-view.png" :::
- |Column |Definition|
- |||
- |Transfer request ID|The unique ID for your transfer request. If you submit a support request, share the ID with Azure support to speed up your support request|
- |Transfer requested on|The date when the transfer request was sent|
- |Transfer requested by|The email address of the user who sent the transfer request|
- |Transfer request expires on| The date when the transfer request expires|
- |Recipient's email address|The email address of the user that you sent the request to transfer billing ownership|
- |Transfer link sent to recipient|The url that was sent to the user to review the transfer request|
+|Column |Definition|
+|||
+|Transfer ID|The unique ID for your transfer request. If you submit a support request, share the ID with Azure support to speed up your support request. |
+|Transfer requested date|The date when the transfer request was sent. |
+|Transfer requested by|The email address of the user who sent the transfer request. |
+|Transfer request expires date| Only appears while the transfer status is `Pending`. The date when the transfer request expires. |
+|Transfer link sent to recipient| Only appears while the transfer status is `Pending`. The URL that was sent to the user to review the transfer request. |
+|Transfer completed date|Only appears when the transfer status is `Completed`. The date and time that the transfer was completed. |
## Supported subscription types

You can request billing ownership of the subscription types listed below.

-- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)\*
-- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)\*
-- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)\*
+- [Action pack](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
+- [Azure in Open Licensing](https://azure.microsoft.com/offers/ms-azr-0111p/)<sup>1</sup>
+- [Azure Pass Sponsorship](https://azure.microsoft.com/offers/azure-pass/)<sup>1</sup>
- [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/)
-- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)\*
+- [Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/)<sup>1</sup>
- [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/)
- [Pay-As-You-Go Dev/Test](https://azure.microsoft.com/offers/ms-azr-0023p/)
-- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)\*\*
-- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)\*
+- [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)<sup>2</sup>
+- [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)<sup>1</sup>
- [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)
- [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/)
-- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/)\*
-- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)\*
-- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)\*
-- [Visual Studio Enterprise (MPN) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)\*
-- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)\*
-- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)\*
-- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)\*
+- [Microsoft Partner Network](https://azure.microsoft.com/offers/ms-azr-0025p/)<sup>1</sup>
+- [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)<sup>1</sup>
+- [Visual Studio Enterprise (BizSpark) subscribers](https://azure.microsoft.com/offers/ms-azr-0064p/)<sup>1</sup>
+- [Visual Studio Enterprise (MPN) subscribers](https://azure.microsoft.com/offers/ms-azr-0029p/)<sup>1</sup>
+- [Visual Studio Enterprise subscribers](https://azure.microsoft.com/offers/ms-azr-0063p/)<sup>1</sup>
+- [Visual Studio Professional](https://azure.microsoft.com/offers/ms-azr-0059p/)<sup>1</sup>
+- [Visual Studio Test Professional subscribers](https://azure.microsoft.com/offers/ms-azr-0060p/)<sup>1</sup>
-\* Any credit available on the subscription won't be available in the new account after the transfer.
+<sup>1</sup> Any credit available on the subscription won't be available in the new account after the transfer.
-\*\* Only supported for subscriptions in accounts that are created during sign-up on the Azure website.
+<sup>2</sup> Only supported for subscriptions in accounts that are created during sign-up on the Azure website.
## Additional information
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 11/17/2021
Last updated : 03/01/2022
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| Source (current) product agreement type | Destination (future) product agreement type | Notes |
| --- | --- | --- |
| EA | MOSP (PAYG) | <ul><li>Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| EA | MCA - individual | <ul><li>For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
-| EA | EA | <ul><li>Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li>Reservations don't automatically transfer between enrollments so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. <li> Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
-| EA | MCA - Enterprise | <ul><li> Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md). <li> If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). |
+| EA | MCA - individual | <ul><li>For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
+| EA | EA | <ul><li>Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Self-service reservation transfers are supported. <li> Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| EA | MCA - Enterprise | <ul><li> Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md). <li> If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
| EA | MPA | <ul><li> Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program. <li> There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-subscriptions-to-a-csp-partner). |
| MCA - individual | MOSP (PAYG) | <ul><li> For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md). <li> Reservations don't automatically transfer and transferring them isn't supported. |
| MCA - individual | MCA - individual | <ul><li>For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
| MCA - individual | EA | <ul><li> For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). <li> Self-service reservation transfers are supported. |
| MCA - individual | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<li> Self-service reservation transfers are supported. |
| MCA - Enterprise | MOSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| MCA - Enterprise | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
-| MCA - Enterprise | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
+| MCA - Enterprise | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
+| MCA - Enterprise | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
+| MCA - Enterprise | MPA | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
| Previous Azure offer in CSP | Previous Azure offer in CSP | <ul><li> Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). <li> Reservations don't automatically transfer and transferring them isn't supported. |
| Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
| MPA | EA | <ul><li> Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product. <li> Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <li> Reservations don't automatically transfer and transferring them isn't supported. |
-| MPA | MPA | <ul><li> For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). <li> Reservations don't automatically transfer and transferring them isn't supported. |
+| MPA | MPA | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
| MOSP (PAYG) | MOSP (PAYG) | <ul><li> If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
| MOSP (PAYG) | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
| MOSP (PAYG) | EA | For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). |
Read the following sections to learn more about other considerations before you
### Transfer terms and conditions
-When you send or accept a transfer you agree to terms and conditions. The following information provides additional details.
+When you send or accept a transfer, you agree to terms and conditions. The following information provides more details.
#### Send transfer
-When you send a transfer request, must select the **Send transfer request** option. By making the selection you also agree to the following terms and conditions:
+When you send a transfer request, you must select the **Send transfer request** option. By making the selection, you also agree to the following terms and conditions:
-_By sending this transfer request, you acknowledge and agree that the selected items will transfer to your account as of the Transition Date (date when the transfer completed successfully). You will be responsible to Microsoft for all ongoing, scheduled billings related to the transfer items as of the Transition Date; provided that Microsoft will move any prepaid subscriptions (including reserved instances) to your account. You agree that you may not cancel any prepaid subscriptions transferred to your account._
+`By sending this transfer request, you acknowledge and agree that the selected items will transfer to your account as of the Transition Date (date when the transfer completed successfully). You will be responsible to Microsoft for all ongoing, scheduled billings related to the transfer items as of the Transition Date; provided that Microsoft will move any prepaid subscriptions (including reserved instances) to your account. You agree that you may not cancel any prepaid subscriptions transferred to your account.`
#### Transfer acceptance
-When you accept a transfer, must select the **Review + validate** option. By making the selection you also agree to the following terms and conditions:
+When you accept a transfer, you must select the **Review + validate** option. By making the selection, you also agree to the following terms and conditions:
-_By accepting this transfer request, you acknowledge and agree that the indicated items will transfer to the nominated destination account as of the Transition Date (date when the transfer completed successfully). Any prepaid subscriptions, if selected, (including reserved instances) will be moved to the destination account and, as of the Transition Date, you will no longer be responsible to Microsoft for ongoing payment obligations (if any) related to the transfer items._
+`By accepting this transfer request, you acknowledge and agree that the indicated items will transfer to the nominated destination account as of the Transition Date (date when the transfer completed successfully). Any prepaid subscriptions, if selected, (including reserved instances) will be moved to the destination account and, as of the Transition Date, you will no longer be responsible to Microsoft for ongoing payment obligations (if any) related to the transfer items.`
### Resources transfer with subscriptions
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
To use a Web activity in a pipeline, complete the following steps:
"typeProperties":{ "method":"Post", "url":"<URLEndpoint>",
+ "httpRequestTimeout": "00:01:00",
"connectVia": { "referenceName": "<integrationRuntimeName>", "type": "IntegrationRuntimeReference"
Property | Description | Allowed values | Required
name | Name of the web activity | String | Yes
type | Must be set to **WebActivity**. | String | Yes
method | REST API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT" | Yes
-url | Target endpoint and path | String (or expression with resultType of string). The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. | Yes
+url | Target endpoint and path | String (or expression with resultType of string). The activity times out after 1 minute with an error if it doesn't receive a response from the endpoint. You can increase this response timeout up to 10 minutes by updating the `httpRequestTimeout` property. | Yes
+httpRequestTimeout | Response timeout duration | hh:mm:ss, with a maximum value of 00:10:00. If not explicitly specified, defaults to 00:01:00. | No
headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | Yes, Content-type header is required. `"headers":{ "Content-Type":"application/json"}`
body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT methods.
authentication | Authentication method used for calling the endpoint. Supported Types are "Basic, or ClientCertificate." For more information, see [Authentication](#authentication) section. If authentication is not required, exclude this property. | String (or expression with resultType of string) | No
linkedServices | List of linked services passed to endpoint. | Array of linked s
connectVia | The [integration runtime](./concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | The integration runtime reference. | No

> [!NOTE]
-> REST endpoints that the web activity invokes must return a response of type JSON. The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint.
+> REST endpoints that the web activity invokes must return a response of type JSON. The activity times out after 1 minute with an error if it doesn't receive a response from the endpoint. You can extend this timeout up to 10 minutes by updating the `httpRequestTimeout` property in the activity settings.
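Putting the properties above together, a Web activity that raises the response timeout might be sketched as follows. This is a minimal illustration only; the activity name, endpoint URL, and request body are placeholder assumptions, not values from this article:

```json
{
    "name": "CallExternalEndpoint",
    "type": "WebActivity",
    "typeProperties": {
        "method": "POST",
        "url": "https://contoso.example.com/api/refresh",
        "httpRequestTimeout": "00:05:00",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "{\"action\":\"refresh\"}"
    }
}
```

Here `httpRequestTimeout` is set to five minutes, within the documented 00:10:00 maximum; if the property is omitted, the activity falls back to the one-minute default.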
The following table shows the requirements for JSON content:
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Previously updated : 11/25/2021
Last updated : 03/16/2022
For a full list of sections and properties available for defining datasets, see
| type | The type property of the dataset must be set to **DelimitedText**. | Yes |
| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. | Yes |
| columnDelimiter | The character(s) used to separate columns in a file. <br>The default value is **comma `,`**. When the column delimiter is defined as empty string, which means no delimiter, the whole line is taken as a single column.<br>Currently, column delimiter as empty string is only supported for mapping data flow but not Copy activity. | No |
-| rowDelimiter | The single character or "\r\n" used to separate rows in a file. <br>The default value is any of the following values **on read: ["\r\n", "\r", "\n"]**, and **"\n" or "\r\n" on write** by mapping data flow and Copy activity respectively. <br>When the row delimiter is set to no delimiter (empty string), the column delimiter must be set as no delimiter (empty string) as well, which means to treat the entire content as a single value.<br>Currently, row delimiter as empty string is only supported for mapping data flow but not Copy activity. | No |
+| rowDelimiter | For Copy activity, the single character or "\r\n" used to separate rows in a file. The default value is any of the following values **on read: ["\r\n", "\r", "\n"]**; **on write: "\r\n"**. "\r\n" is only supported in copy command.<br>For Mapping data flow, the single or two characters used to separate rows in a file. The default value is any of the following values **on read: ["\r\n", "\r", "\n"]**; **on write: "\n"**.<br>When the row delimiter is set to no delimiter (empty string), the column delimiter must be set as no delimiter (empty string) as well, which means to treat the entire content as a single value.<br>Currently, row delimiter as empty string is only supported for mapping data flow but not Copy activity. | No |
| quoteChar | The single character to quote column values if it contains column delimiter. <br>The default value is **double quotes** `"`. <br>When `quoteChar` is defined as empty string, it means there is no quote char and column value is not quoted, and `escapeChar` is used to escape the column delimiter and itself. | No |
| escapeChar | The single character to escape quotes inside a quoted value.<br>The default value is **backslash `\`**. <br>When `escapeChar` is defined as empty string, the `quoteChar` must be set as empty string as well, in which case make sure all column values don't contain delimiters. | No |
| firstRowAsHeader | Specifies whether to treat/make the first row as a header line with names of columns.<br>Allowed values are **true** and **false** (default).<br>When first row as header is false, note UI data preview and lookup activity output auto generate column names as Prop_{n} (starting from 0), copy activity requires [explicit mapping](copy-activity-schema-and-type-mapping.md#explicit-mapping) from source to sink and locates columns by ordinal (starting from 1), and mapping data flow lists and locates columns with name as Column_{n} (starting from 1). | No |
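As an illustrative sketch of the dataset properties described above (the dataset name, linked service reference, and file location are placeholder assumptions, not from this article), a DelimitedText dataset that states these properties explicitly might look like:

```json
{
    "name": "ExampleDelimitedTextDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "<linkedServiceName>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container>",
                "fileName": "data.csv"
            },
            "columnDelimiter": ",",
            "rowDelimiter": "\n",
            "quoteChar": "\"",
            "escapeChar": "\\",
            "firstRowAsHeader": true
        }
    }
}
```

Setting `quoteChar` and `escapeChar` to empty strings instead would disable quoting entirely, in which case column values must not contain the delimiter.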
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
Previously updated : 05/06/2021
Last updated : 03/17/2022

# Tutorial: How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint
the page.
>|**SQL MI 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433|
>|**SQL MI 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433|
+ >[!Note]
+ > Run the script again every time you restart the VMs behind the load balancer.
+
## Create a Private Endpoint to Private Link Service

1. Select All services in the left-hand menu, select All resources, and then select your
databox-online Azure Stack Edge Pro 2 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-safety.md
To reduce the risk of bodily injury, electrical shock, fire, and equipment damag
![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**DANGER:**

* Before you begin to unpack the equipment, to prevent hazardous situations resulting in death, serious injury, and/or property damage, read and follow all warnings and instructions.
-* Inspect the as-received equipment for damages. If the equipment enclosure is damaged, [contact Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT) to obtain a replacement. Don't attempt to operate the device.
+* Inspect the as-received equipment for damages. If the equipment enclosure is damaged, [contact Microsoft Support](./index.yml) to obtain a replacement. Don't attempt to operate the device.
![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**CAUTION:**
-* If you suspect the device is malfunctioning, [contact Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT) to obtain a replacement. Don't attempt to service the equipment.
+* If you suspect the device is malfunctioning, [contact Microsoft Support](./index.yml) to obtain a replacement. Don't attempt to service the equipment.
* Always wear the appropriate clothing to protect skin from sharp metal edges and avoid sliding any metal edges against skin. Always wear appropriate eye protection to avoid injury from objects that may become airborne.
* Laser peripherals or devices may be present. To avoid risk of radiation exposure and/or personal injury, don't open the enclosure of any laser peripheral or device. Laser peripherals or devices aren't serviceable. Only use certified and rated Laser Class I optical transceiver products.
To reduce the risk of bodily injury, electrical shock, fire, and equipment damag
* This equipment is not to be used as shelves or work spaces. Do not place objects on top of the equipment. Adding any type of load to rack- or wall-mounted equipment can create a potential tip or crush hazard that could lead to injury, death, or product damage.

![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard icon](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)![Do not access](./media/azure-stack-edge-pro-2-safety/icon-safety-do-not-access.png)**CAUTION:**
-* Parts enclosed within panels containing this symbol ![Do not access 2](./media/azure-stack-edge-pro-2-safety/icon-safety-do-not-access-tiny.png) contain no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return to manufacturer for servicing. </br>Open a ticket with [Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT).
+* Parts enclosed within panels containing this symbol ![Do not access 2](./media/azure-stack-edge-pro-2-safety/icon-safety-do-not-access-tiny.png) contain no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return to manufacturer for servicing. </br>Open a ticket with [Microsoft Support](./index.yml).
* The equipment contains coin cell batteries. There's a risk of explosion if the battery is replaced by an incorrect type. Dispose of used batteries according to the instructions. ![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Hot component surface](./media/azure-stack-edge-pro-2-safety/icon-hot-component-surface.png)**CAUTION:**
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
To modify an adaptive network hardening rule:
> [!NOTE] > After selecting **Save**, you have successfully changed the rule. *However, you have not applied it to the NSG.* To apply it, you must select the rule in the list, and select **Enforce** (as explained in the next step).
- ![Selecting Save.](./media/adaptive-network-hardening/edit-hard-rule3.png)
+ ![Selecting Save.](./media/adaptive-network-hardening/edit-hard-rule-3.png)
3. To apply the updated rule, from the list, select the updated rule and select **Enforce**.
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
There are no resources to clean up.
## Next steps > [!div class="nextstepaction"]
-> Learn how to [integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?toc=%2Fazure%2Fdefender-for-iot%2Forganizations%2Ftoc.json&bc=%2Fazure%2Fdefender-for-iot%2Fbreadcrumb%2Ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended)
+> Learn how to [integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json)
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
For more information, see [Add an artifact repository to a lab](add-artifact-rep
### Roles
-[Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview) defines DevTest Labs access and roles. DevTest Labs has three roles that define lab member permissions: Owner, Contributor, and DevTest Labs User.
+[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) defines DevTest Labs access and roles. DevTest Labs has three roles that define lab member permissions: Owner, Contributor, and DevTest Labs User.
- Lab Owners can do all lab tasks, such as reading or writing to lab resources, managing users, setting policies and configurations, and adding repositories and base images. - Because Azure subscription owners have access to all resources in a subscription, which include labs, virtual networks, and VMs, a subscription owner automatically inherits the lab Owner role.
For more information about the differences between custom images and formulas, s
In DevTest Labs, an environment is a collection of Azure platform-as-a-service (PaaS) resources, such as an Azure Web App or a SharePoint farm. You can create environments in labs by using ARM templates. For more information, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md). For more information about ARM template structure and properties, see [Template format](../azure-resource-manager/templates/syntax.md#template-format). -
devtest-labs Devtest Lab Configure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-vnet.md
If you allowed VM creation in one of the subnets, you can now create lab VMs in
## Next steps -- For more information about how to set up, use, and manage virtual networks, see the [Azure virtual network documentation](/azure/virtual-network).-- You can deploy [Azure Bastion](https://azure.microsoft.com/services/azure-bastion) in a new or existing virtual network to enable browser connection to your lab VMs. For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
+- For more information about how to set up, use, and manage virtual networks, see the [Azure virtual network documentation](../virtual-network/index.yml).
+- You can deploy [Azure Bastion](https://azure.microsoft.com/services/azure-bastion) in a new or existing virtual network to enable browser connection to your lab VMs. For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md
This quickstart walks you through creating a lab in Azure DevTest Labs by using
## Prerequisite -- At least [Contributor](/azure/role-based-access-control/built-in-roles#contributor) access to an Azure subscription. If you don't have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- At least [Contributor](../role-based-access-control/built-in-roles.md#contributor) access to an Azure subscription. If you don't have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Create a lab
If you created a resource group for the lab, you can now delete the resource gro
To learn how to add VMs to your lab, go on to the next article: > [!div class="nextstepaction"]
-> [Create and add virtual machines to a lab in Azure DevTest Labs](devtest-lab-add-vm.md)
+> [Create and add virtual machines to a lab in Azure DevTest Labs](devtest-lab-add-vm.md)
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Azure Database Migration Service prerequisites that are common across all suppor
- SSIS packages - Server roles - Server audit-- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and below are not supported.
+* When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and earlier aren't currently supported as target versions.
- Migrating to Azure SQL Database isn't supported. - Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations. - You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
event-grid Advanced Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/advanced-filtering.md
description: Advanced filtering in Event Grid on IoT Edge.
Previously updated : 05/10/2021 Last updated : 02/15/2022
Event Grid allows specifying filters on any property in the json payload. These
* `Key` - The json path to the property on which to apply the filter. * `Value` - The reference value against which the filter is run (or) `Values` - The set of reference values against which the filter is run.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ ## JSON syntax The JSON syntax for an advanced filter is as follows:
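As a sketch of the shape (the subscription properties around it are omitted, and the example values are illustrative), an advanced filter combining the parts above might look like the following, assuming the standard Event Grid `operatorType`/`key`/`values` field names:

```json
{
  "filter": {
    "advancedFilters": [
      {
        "operatorType": "StringContains",
        "key": "data.alertType",
        "values": ["warning", "error"]
      }
    ]
  }
}
```

Multi-value operators take a `values` array as shown; single-value operators use `value` instead.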
The `Key` property can either be a well-known top-level property, or be a json p
Event Grid doesn't have any special meaning for the `$` character in the Key, unlike the JSONPath specification.
-### Event grid schema
+### Event Grid schema
For events in the Event Grid schema:
event-grid Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/api.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
# REST API This article describes the REST APIs of Azure Event Grid on IoT Edge
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ ## Common API behavior ### Base URL
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/concepts.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article describes the main concepts in Azure Event Grid.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ ## Events An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like the source of the event, the time the event took place, and a unique identifier. Every event also has specific information that is only relevant to the specific type of event. Support for events up to 1 MB in size is currently in preview.
event-grid Configure Api Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-api-protocol.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This guide gives examples of the possible protocol configurations of an Event Gr
See [Security and authentication](security-authentication.md) guide for all the possible configurations.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ ## Expose HTTPS to IoT Modules on the same edge network ```json
event-grid Configure Client Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-client-auth.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This guide gives examples of the possible client authentication configurations f
See [Security and authentication](security-authentication.md) guide for all the possible configurations.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Enable certificate-based client authentication, no self-signed certificates ```json
event-grid Configure Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-event-grid.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
Event Grid provides many configurations that can be modified per environment. The following section is a reference to all the available options and their defaults.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## TLS configuration To learn about client authentication in general, see [Security and Authentication](security-authentication.md). Examples of its usage can be found in [this article](configure-api-protocol.md).
event-grid Configure Identity Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-identity-auth.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article shows how to configure identity for Event Grid on Edge. By default,
See [Security and authentication](security-authentication.md) guide for all the possible configurations.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Always present identity certificate Here's an example configuration for always presenting an identity certificate on outgoing calls.
event-grid Configure Webhook Subscriber Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-webhook-subscriber-auth.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This guide gives examples of the possible webhook subscriber configurations for an Event Grid module. By default, only HTTPS endpoints are accepted for webhook subscribers. The Event Grid module will reject the endpoint if the subscriber presents a self-signed certificate.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Allow only HTTPS subscriber ```json
event-grid Delivery Output Batching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-output-batching.md
description: Output batching in Event Grid on IoT Edge.
Previously updated : 05/10/2021 Last updated : 02/15/2022
Event Grid has support to deliver more than one event in a single delivery request. This feature makes it possible to increase the overall delivery throughput without paying the HTTP per-request overheads. Batching is turned off by default and can be turned on per-subscription.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
++ > [!WARNING] > The maximum allowed duration to process each delivery request does not change, even though the subscriber code potentially has to do more work per batched request. Delivery timeout defaults to 60 seconds.
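For illustration only, a subscription that opts into batching might carry batching knobs alongside its destination. The property names below follow the cloud Event Grid batching settings and are an assumption for the Edge module:

```json
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": { "endpointUrl": "https://sampleurl/endpoint" }
    },
    "maxEventsPerBatch": 10,
    "preferredBatchSizeInKilobytes": 64
  }
}
```

Keep the warning above in mind: a larger batch gives the subscriber more work per request without extra time to process it.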
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-retry.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
Event Grid provides durable delivery. It tries to deliver each message at least once for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there is a failure, Event Grid retries delivery based on a fixed **retry schedule** and **retry policy**. By default, the Event Grid module delivers one event at a time to the subscriber. The payload, however, is an array with a single event. You can have the module deliver more than one event at a time by enabling the output batching feature. For details about this feature, see [output batching](delivery-output-batching.md).
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ > [!IMPORTANT] >There is no persistence support for event data. This means redeploying or restart of the Event Grid module will cause you to lose any events that aren't yet delivered.
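As a hedged sketch (the property names mirror the cloud Event Grid retry policy and are assumptions for the Edge module), a per-subscription retry policy would bound the number of delivery attempts and the event time to live:

```json
{
  "properties": {
    "retryPolicy": {
      "maxDeliveryAttempts": 30,
      "eventTimeToLiveInMinutes": 120
    }
  }
}
```

Whichever limit is reached first ends retries for that event; combined with the lack of persistence noted above, undelivered events are then dropped.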
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-handlers.md
Title: Event Handlers and destinations - Azure Event Grid IoT Edge | Microsoft Docs description: Event Handlers and destinations in Event Grid on Edge Previously updated : 05/10/2021 Last updated : 02/15/2022
An event handler is the place where the event is sent for further action or processing. With the Event Grid on Edge module, the event handler can be on the same edge device, another device, or in the cloud. You can use any WebHook to handle events, or send events to one of the native handlers like Azure Event Grid.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ This article provides information on how to configure each. ## WebHook
event-grid Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-schemas.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
You can configure the schema that a publisher must conform to during topic creat
Subscribers can also configure the schema in which they want the events delivered. If unspecified, the default is the topic's schema. Currently, the subscriber delivery schema has to match its topic's input schema.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## EventGrid schema EventGrid schema consists of a set of required properties that a publishing entity must conform to. Each publisher has to populate the top-level fields.
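For reference, a minimal event in the EventGrid schema, with the well-known top-level fields populated (all values are illustrative):

```json
[
  {
    "id": "1807",
    "topic": "sampletopic",
    "subject": "samples/item1",
    "eventType": "recordInserted",
    "eventTime": "2022-02-15T21:03:07Z",
    "dataVersion": "1.0",
    "data": {
      "make": "Contoso"
    }
  }
]
```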
event-grid Forward Events Event Grid Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-event-grid-cloud.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article walks through all the steps needed to forward edge events to Event
To complete this tutorial, you need have an understanding of Event Grid concepts on [edge](concepts.md) and [Azure](../concepts.md). For additional destination types, see [event handlers](event-handlers.md).
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Prerequisites In order to complete this tutorial, you will need:
event-grid Forward Events Iothub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-iothub.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article walks through all the steps needed to forward Event Grid events to
* Continue to use any existing investments already in place with edgeHub's routing * Prefer to route all events from a device only via IoT Hub +
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ To complete this tutorial, you need to understand the following concepts: - [Event Grid Concepts](concepts.md)
event-grid Monitor Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/monitor-topics-subscriptions.md
Event Grid on Edge exposes a number of metrics for topics and event subscriptions in the [Prometheus exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). This article describes the available metrics and how to enable them.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Enable metrics Configure the module to emit metrics by setting the `metrics__reporterType` environment variable to `prometheus` in the container create options:
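For example, the module's container create options in the deployment manifest would carry the environment variable like this (other create options omitted; `Env` is the standard Docker create-options key):

```json
{
  "Env": [
    "metrics__reporterType=prometheus"
  ]
}
```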
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/overview.md
Title: Event driven architectures on edge - Azure Event Grid on IoT Edge description: Use Azure Event Grid as a module on IoT Edge to forward events between modules, edge devices, and the cloud. Previously updated : 05/10/2021 Last updated : 02/15/2022
event-grid Persist State Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-windows.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
By default only metadata is persisted and events are still stored in-memory for
This article provides the steps needed to deploy Event Grid module with persistence in Windows deployments.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ > [!NOTE] >The Event Grid module runs as a low-privileged user **ContainerUser** in Windows.
event-grid Pub Sub Events Webhook Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-cloud.md
Previously updated : 05/10/2021 Last updated : 02/15/2022 ms.devlang: csharp
This article walks through all the steps needed to publish and subscribe to even
See [Event Grid Concepts](concepts.md) to understand what an event grid topic and subscription are before proceeding.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## Prerequisites In order to complete this tutorial, you will need:
event-grid Pub Sub Events Webhook Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-local.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article walks you through all the steps needed to publish and subscribe to events using Event Grid on IoT Edge.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ > [!NOTE] > To learn about Azure Event Grid topics and subscriptions, see [Event Grid Concepts](concepts.md).
event-grid React Blob Storage Events Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/react-blob-storage-events-locally.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
This article shows you how to deploy the Azure Blob Storage on IoT module, which
For an overview of the Azure Blob Storage on IoT Edge, see [Azure Blob Storage on IoT Edge](../../iot-edge/how-to-store-data-blob.md) and its features.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ > [!WARNING] > Azure Blob Storage on IoT Edge integration with Event Grid is in Preview
event-grid Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/release-notes.md
Title: Release Notes - Azure Event Grid IoT Edge | Microsoft Docs description: Azure Event Grid on IoT Edge Release Notes Previously updated : 09/15/2021 Last updated : 02/15/2022 # Release Notes: Azure Event Grid on IoT Edge
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## 1.0.0-preview1 Initial release of Azure Event Grid on IoT Edge. Included features:
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/security-authentication.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
Security and authentication is an advanced concept and it requires familiarity with Event Grid basics first. Start [here](concepts.md) if you are new to Event Grid on IoT Edge. Event Grid module builds on the existing security infrastructure on IoT Edge. Refer to [this documentation](../../iot-edge/security.md) for details and setup.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
++ The following sections describe in detail how these settings are secured and authenticated: * TLS configuration
event-grid Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/transition.md
+
+ Title: Transition from Event Grid on Azure IoT Edge to Azure IoT Edge
+description: This article explains transition from Event Grid on Azure IoT Edge to Azure IoT Edge MQTT Broker or IoT Hub message routing.
+ Last updated : 02/16/2022+++
+# Transition from Event Grid on Azure IoT Edge to Azure IoT Edge native capabilities
+
+On March 31, 2023, Event Grid on Azure IoT Edge will be retired, so make sure to transition to IoT Edge native capabilities prior to that date.
+
+## Why are we retiring?
+There are multiple reasons for the decision to retire Event Grid on IoT Edge, which is currently in Preview, in March 2023.
+
+- Event Grid has been evolving in the cloud native space to provide more robust capabilities not only in Azure but also in on-prem scenarios with [Kubernetes with Azure Arc](../kubernetes/overview.md).
+- We've seen increased adoption of MQTT brokers in the IoT space. This adoption motivated the IoT Edge team to build a new native MQTT broker that provides better integration for pub/sub messaging scenarios. With the new MQTT broker provided natively on IoT Edge, you'll be able to connect to the broker, publish and subscribe to messages over user-defined topics, and use IoT Hub messaging primitives. The IoT Edge MQTT broker is built into the IoT Edge hub.
+
+Here's the list of the features that will be removed with the retirement of Event Grid on Azure IoT Edge and a list of the new IoT Edge native capabilities.
+
+| Event Grid on Azure IoT Edge | MQTT broker on Azure IoT Edge |
+| - | -- |
+| - Publishing and subscribing to events locally/cloud<br/>- Forwarding events to Event Grid<br/>- Forwarding events to IoT Hub<br/>- React to Blob Storage events locally | - Connectivity to IoT Edge hub<br/>- Publish and subscribe on user-defined topics<br/>- Publish and subscribe on IoT Hub topics<br/>- Publish and subscribe between MQTT brokers |
++
+## How to transition to Azure IoT Edge features
+
+To transition to use the Azure IoT Edge features, follow these steps.
+
+1. Learn about the feature differences between [Event Grid on Azure IoT Edge](overview.md#when-to-use-event-grid-on-iot-edge) and [Azure IoT Edge](../../iot-edge/how-to-publish-subscribe.md).
+2. Identify your scenario based on the feature table in the next section.
+3. Follow the documentation to change your architecture and make code changes based on the scenario you want to transition.
+4. Validate your updated architecture by sending and receiving messages/events.
+
+## Event Grid on Azure IoT Edge vs. Azure IoT Edge
+
+The following table highlights the key differences during this transition.
+
+| Event Grid on Azure IoT Edge | Azure IoT Edge |
+| | -- |
+| Publish, subscribe, and forward events locally or to the cloud | You can use the Azure IoT Edge MQTT broker to publish and subscribe to messages. To learn how to connect to this broker, publish and subscribe to messages over user-defined topics, and use IoT Hub messaging primitives, see [publish and subscribe with Azure IoT Edge](../../iot-edge/how-to-publish-subscribe.md). The IoT Edge MQTT broker is built into the IoT Edge hub. For more information, see [the brokering capabilities of the IoT Edge hub](../../iot-edge/iot-edge-runtime.md). </br> </br> If you're subscribing to IoT Hub, it's possible to create an event to publish to Event Grid if you need to. For details, see [Azure IoT Hub and Event Grid](../../iot-hub/iot-hub-event-grid.md). |
+| Forward events to IoT Hub | You can use IoT Hub message routing to send device-cloud messages to different endpoints. For details, see [Understand Azure IoT Hub message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). |
+| React to Blob Storage events on IoT Edge (Preview) | You can use Azure Function Apps to react to blob storage events on cloud when a blob is created or updated. For more information, see [Azure Blob storage trigger for Azure Functions](../../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../../iot-edge/tutorial-deploy-function.md). Blob triggers in IoT Edge blob storage module aren't supported. |
event-grid Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/troubleshoot.md
Previously updated : 05/10/2021 Last updated : 02/15/2022
If you experience issues using Azure Event Grid on IoT Edge in your environment, use this article as a guide for troubleshooting and resolution.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ ## View Event Grid module logs To troubleshoot, you might need to access Event Grid module logs. On the VM where the module is deployed run the following command:
event-grid Twin Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/twin-json.md
description: Configuration via Module Twin.
Previously updated : 05/10/2021 Last updated : 02/15/2022
Event Grid on IoT Edge integrates with the IoT Edge ecosystem and supports creating topics and subscriptions via the Module Twin. It also reports the current state of all the topics and event subscriptions to the reported properties on the Module Twin.
+> [!IMPORTANT]
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md)
+++ > [!WARNING] > Because of limitations in the IoT Edge ecosystem, all array elements in the following json example have been encoded as json strings. See `EventSubscription.Filter.EventTypes` and `EventSubscription.Filter.AdvancedFilters` keys in the following example.
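As an illustration of that encoding (the surrounding twin structure is omitted and the key casing here is an assumption), a filter in the twin carries its arrays as JSON strings rather than JSON arrays:

```json
{
  "filter": {
    "eventTypes": "[\"recordInserted\"]",
    "advancedFilters": "[{\"operatorType\":\"StringIn\",\"key\":\"data.level\",\"values\":[\"error\"]}]"
  }
}
```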
event-grid Event Schema Azure Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-azure-health-data-services.md
This section contains examples of what events message data would look like for e
> [!Note]
> Events data looks similar to these examples with the `metadataVersion` property set to a value of `1`.
>
-> For more information, see [Azure Event Grid event schema properties](/azure/event-grid/event-schema#event-properties).
+> For more information, see [Azure Event Grid event schema properties](./event-schema.md#event-properties).
### FhirResourceCreated event
This section contains examples of what events message data would look like for e
* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 03/09/2022 Last updated : 03/17/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Untrusted customer signed certificates|Customer signed certificates are not trus
|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.| |KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
-|IDPS Bypass list|If you enable IDPS (either ΓÇÿAlertΓÇÖ or ΓÇÿAlert and DenyΓÇÖ mode) and actively delete one or more existing rules in IDPS Bypass list, you may be subject to packet loss which is correlated to the deleted rules source/destination IP addresses. |A fix is being investigated.<br><br>You may respond to this issue by taking one of the following actions:<br><br>- Do a start/stop procedure as explained [here](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall).<br>- Open a support ticket and we will re-image your effected firewall virtual machines.|
+|IDPS Bypass list|If you enable IDPS (either 'Alert' or 'Alert and Deny' mode) and actively delete one or more existing rules in the IDPS Bypass list, you may be subject to packet loss correlated to the deleted rules' source/destination IP addresses. |A fix is being investigated.<br><br>You may respond to this issue by taking one of the following actions:<br><br>- Do a start/stop procedure as explained [here](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall).<br>- Open a support ticket and we will re-image your affected firewall virtual machines.|
|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
governance Guest Configuration Desired State Configuration Extension Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-desired-state-configuration-extension-migration.md
before you can create a guest configuration package.
#### Update deployment templates If your deployment templates include the DSC extension
-(see [examples](/azure/virtual-machines/extensions/dsc-template)),
+(see [examples](../../../virtual-machines/extensions/dsc-template.md)),
there are two changes required. First, replace the DSC extension with the
-[extension for the guest configuration feature](/azure/virtual-machines/extensions/guest-configuration).
+[extension for the guest configuration feature](../../../virtual-machines/extensions/guest-configuration.md).
Then, add a [guest configuration assignment](../concepts/guest-configuration-assignments.md)
Use the `Remove.py` script as documented in
- [Assign your custom policy definition](../assign-policy-portal.md) using Azure portal. - Learn how to view
- [compliance details for guest configuration](./determine-non-compliance.md#compliance-details-for-guest-configuration) assignments.
+ [compliance details for guest configuration](./determine-non-compliance.md#compliance-details-for-guest-configuration) assignments.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Azure Resource Graph is a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. These queries
-provide the following features:
+provide the following abilities:
-- Ability to query resources with complex filtering, grouping, and sorting by resource properties.
-- Ability to iteratively explore resources based on governance requirements.
-- Ability to assess the impact of applying policies in a vast cloud environment.
-- Ability to [query changes made to resource properties](./how-to/get-resource-changes.md)
+- Query resources with complex filtering, grouping, and sorting by resource properties.
+- Explore resources iteratively based on governance requirements.
+- Assess the impact of applying policies in a vast cloud environment.
+- [Query changes made to resource properties](./how-to/get-resource-changes.md)
(preview). In this documentation, you'll go over each feature in detail.

> [!NOTE]
-> Azure Resource Graph powers Azure portal's search bar, the new browse 'All resources' experience,
+> Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
> and Azure Policy's [Change history](../policy/how-to/determine-non-compliance.md#change-history) > _visual diff_. It's designed to help customers manage large-scale environments.
In this documentation, you'll go over each feature in detail.
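As a sketch of the kind of query these abilities enable — the resource type and column names below follow common Resource Graph samples and are not taken from this article — counting virtual machines by location combines filtering, grouping, and sorting in one query:

```kusto
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| summarize vmCount = count() by location
| order by vmCount desc
```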
## How does Resource Graph complement Azure Resource Manager
-Resource Manager currently supports queries over basic resource fields, specifically - Resource
-name, ID, Type, Resource Group, Subscription, and Location. Resource Manager also provides
+Resource Manager currently supports queries over basic resource fields, specifically:
+
+- Resource name
+- ID
+- Type
+- Resource Group
+- Subscription
+- Location
+
+Resource Manager also provides
facilities for calling individual resource providers for detailed properties one resource at a time. With Azure Resource Graph, you can access these properties the resource providers return without
ensures that Resource Graph data is current if there are missed notifications or
updated outside of Resource Manager. > [!NOTE]
-> Resource Graph uses a `GET` to the latest non-preview API of each resource provider to gather
+> Resource Graph uses a `GET` to the latest non-preview application programming interface (API) of each resource provider to gather
> properties and values. As a result, the property expected may not be available. In some cases, the
> API version used has been overridden to provide more current or widely used properties in the
> results. See the [Show API version for each resource type](./samples/advanced.md#apiversion)
access to without any indication that the result may be partial. If there are no
the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden).

> [!NOTE]
-> In the **preview** REST API version `2020-04-01-preview`, the subscription list may be ommitted.
+> In the **preview** REST API version `2020-04-01-preview`, the subscription list may be omitted.
> When both the `subscriptions` and `managementGroupId` properties aren't defined in the request,
> the _scope_ is set to the tenant. For more information, see
> [Scope of the query](./concepts/query-language.md#query-scope).
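For illustration, a minimal request body for the Resource Graph query REST call might look like the following sketch (the subscription ID placeholder and the query text are hypothetical; per the note above, omitting both `subscriptions` and `managementGroupId` in the preview API scopes the query to the tenant):

```json
{
  "subscriptions": ["{subscription-id}"],
  "query": "Resources | summarize count() by type"
}
```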
the subscription list that the user has appropriate rights to, the response is a
As a free service, queries to Resource Graph are throttled to provide the best experience and response time for all customers. If your organization wants to use the Resource Graph API for
-large-scale and frequent queries, use portal 'Feedback' from the
+large-scale and frequent queries, use portal **Feedback** from the
[Resource Graph portal page](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyMenuBlade/ResourceGraph).
-Provide your business case and select the 'Microsoft can email you about your feedback' checkbox in
+Provide your business case and select the **Microsoft can email you about your feedback** checkbox in
order for the team to contact you.

Resource Graph throttles queries at the user level. The service response contains the following HTTP
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
HDInsight will no longer use Azure Virtual Machine Scale Sets to provision the c
### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
-Starting from March 01, 2022, HDInsight will only support manual scale for HBase, there's no impact on running clusters. New HBase clusters won't be able to enable schedule based Autoscaling. For more information on how to  manually scale your HBase cluster, refer our documentation on [Manually scaling Azure HDInsight clusters](https://docs.microsoft.com/azure/hdinsight/hdinsight-scaling-best-practices)
+Starting from March 01, 2022, HDInsight will only support manual scale for HBase; there's no impact on running clusters. New HBase clusters won't be able to enable schedule-based Autoscaling. For more information on how to manually scale your HBase cluster, refer to our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
## HDInsight 3.6 end of support extension
HDInsight 3.6 end of support is extended until September 30, 2022.
Starting from September 30, 2022, customers can't create new HDInsight 3.6 clusters. Existing clusters will run as is without the support from Microsoft. Consider moving to HDInsight 4.0 to avoid potential system/support interruption.
-Customers who are on Azure HDInsight 3.6 clusters will continue to get [Basic support](https://docs.microsoft.com/azure/hdinsight/hdinsight-component-versioning#support-options-for-hdinsight-versions) until September 30, 2022. After September 30, 2022 customers won't be able to create new HDInsight 3.6 clusters.
+Customers who are on Azure HDInsight 3.6 clusters will continue to get [Basic support](./hdinsight-component-versioning.md#support-options-for-hdinsight-versions) until September 30, 2022. After September 30, 2022, customers won't be able to create new HDInsight 3.6 clusters.
hdinsight Apache Storm Scp Programming Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-scp-programming-guide.md
Here `microsoft.scp.example.HybridTopology.Generator` is the name of the Java sp
### Specify the Java classpath in a runSpec command
-If you want to submit topology that contains Java spouts or bolts, first compile them to produce JAR files. Then specify the java classpath that contains the JAR files when you submit topology. Here's an example:
+If you want to submit a topology that contains Java spouts or bolts, first compile them to produce JAR files. Then specify the Java classpath that contains the JAR files when you submit the topology. Here's an example:
```csharp
bin\runSpec.cmd examples\HybridTopology\HybridTopology.spec specs examples\HybridTopology\net\Target -cp examples\HybridTopology\java\target\*
```
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Previously updated : 02/28/2022 Last updated : 03/16/2022
In this article, you'll learn three ways to copy data from Azure API for FHIR to
> [!Note]
> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](https://docs.microsoft.com/azure/synapse-analytics/sql/on-demand-workspace-overview) pointing to the Parquet files.
This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipel
> [!Note]
> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
In this article, you learned three different ways to copy your FHIR data into Sy
Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse in order to protect PHI.

>[!div class="nextstepaction"]
->[Exporting de-identified data](de-identified-export.md)
---------
+>[Exporting de-identified data](de-identified-export.md)
healthcare-apis Deploy Dicom Services In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md
To deploy DICOM service, you must have a workspace created in the Azure portal.
In this quickstart, you learned how to deploy DICOM service using the Azure portal. For information about assigning roles for the DICOM service, see

>[!div class="nextstepaction"]
->[Assign roles for the DICOM service](https://docs.microsoft.com/azure/healthcare-apis/configure-azure-rbac#assign-roles-for-the-dicom-service)
+>[Assign roles for the DICOM service](../configure-azure-rbac.md#assign-roles-for-the-dicom-service)
For more information about how to use the DICOMweb&trade; Standard APIs with the DICOM service, see

>[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md)
+>[Using DICOMweb&trade; Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
This article describes our open-source projects on GitHub that provide source co
### Access imaging study resources on Power BI, Power Apps, and Dynamics 365 Customer Insights
-* [Connect to a FHIR service from Power Query Desktop](https://docs.microsoft.com/power-query/connectors/fhir/fhir): After provisioning DICOM service, FHIR service and synchronizing imaging study for a given patient via DICOM cast, you can use the POWER Query connector for FHIR to import and shape data from the FHIR server including imaging study resource.
+* [Connect to a FHIR service from Power Query Desktop](/power-query/connectors/fhir/fhir): After provisioning the DICOM service and FHIR service and synchronizing an imaging study for a given patient via DICOM cast, you can use the Power Query connector for FHIR to import and shape data from the FHIR server, including the imaging study resource.
### Convert imaging study data to hierarchical parquet files
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
It's important that you have the following prerequisites completed before you be
> - [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
>
>For more information about managed identities, see
- > - [What are managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview)
+ > - [What are managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md)
>
>For more information about Azure role-based access control (Azure RBAC), see
- > - [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)
+ > - [What is Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
## Next steps
To learn how to export Event Grid system diagnostic logs and metrics, see
>[!div class="nextstepaction"]
>[How to export Events diagnostic logs and metrics](./events-display-metrics.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
Title: Copy data from FHIR service to Azure Synapse Analytics
+ Title: Copy data from FHIR service in Azure Health Data Services to Azure Synapse Analytics
description: This article describes copying FHIR data into Synapse
Previously updated : 03/01/2022 Last updated : 03/16/2022
# Copy data from FHIR service to Azure Synapse Analytics
-In this article, youΓÇÖll learn a couple of ways to copy data from the FHIR service to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+In this article, you'll learn three ways to copy data from the FHIR service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-Copying data from FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) OSS tool
+* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
+* Use $export and load data to Synapse using T-SQL
-* **Load exported data to Synapse using T-SQL:** Use `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
-* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
+## Using the FHIR to Synapse Sync Agent OSS tool
-## Load exported data to Synapse using T-SQL
+> [!Note]
+> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](https://docs.microsoft.com/azure/synapse-analytics/sql/on-demand-workspace-overview) pointing to the Parquet files.
+
+This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
+
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) for installation and usage instructions.
+
+## Using the FHIR to CDM pipeline generator OSS tool
+
+> [!Note]
+> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+
+This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
-### `$export` for moving FHIR data into Azure Data Lake Gen 2 storage
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) for installation and usage instructions.
+## Loading exported data to Synapse using T-SQL
-#### Configure your FHIR server to support `$export`
+In this approach, you use the FHIR `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Subsequently, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
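To illustrate the shape of this transformation (NDJSON in, flat rows out), here's a minimal Python sketch. It isn't part of the documented pipeline — the T-SQL `OPENJSON` step does this server-side — and the sample resources are hypothetical, but the projected columns mirror the column list used later in this article (`$.id`, `$.name[0].text`, and so on):

```python
import json

# Hypothetical sample lines standing in for an NDJSON $export of Patient resources.
ndjson_lines = [
    json.dumps({"resourceType": "Patient", "id": "p1",
                "name": [{"text": "Jane Doe", "family": "Doe", "given": ["Jane"]}],
                "gender": "female"}),
    json.dumps({"resourceType": "Patient", "id": "p2",
                "name": [{"text": "John Roe", "family": "Roe", "given": ["John"]}],
                "gender": "male"}),
]

def flatten_patient(line):
    """Project a few Patient fields into a flat row, mirroring the
    column projection done by the T-SQL OPENJSON step."""
    resource = json.loads(line)
    name = (resource.get("name") or [{}])[0]
    given = name.get("given") or [None]
    return {
        "ResourceId": resource.get("id"),
        "FullName": name.get("text"),
        "FamilyName": name.get("family"),
        "GivenName": given[0],
        "Gender": resource.get("gender"),
    }

rows = [flatten_patient(line) for line in ndjson_lines]
print(rows[0])  # one flat row per Patient resource
```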
-Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export. If you use `$export`, you get de-identification feature by default its capability is already integrated in `$export`.
-To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. YouΓÇÖll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add role assignment, (3) select your storage account for `$export`. More step-by-step instructions can be found [here](./configure-export-data.md).
+### Using `$export` to copy data
+
+#### Configuring `$export` in the FHIR server
+
+The FHIR server in Azure Health Data Services implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export.
+
+To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add a role assignment, and (3) select your storage account for `$export`. More step-by-step instructions can be found [here](./configure-export-data.md).
You can configure the server to export the data to any kind of Azure storage account, but we recommend exporting to ADL Gen 2 for best alignment with Synapse.
After configuring your FHIR server, you can follow the [documentation](./export-
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}
```
-You can also use `_type` parameter in the `$export` call above to restrict the resources you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
+You can also use `_type` parameter in the `$export` call above to restrict the resources that you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
```rest
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&_type=Patient,MedicationRequest,Observation
```
For more information on the different parameters supported, check out our `$export` page section on the [query parameters](./export-data.md#settings-and-parameters).
-### Create a Synapse workspace
+### Using Synapse for Analytics
+
+#### Creating a Synapse workspace
-Before using Synapse, youΓÇÖll need a Synapse workspace. YouΓÇÖll create an Azure Synapse Analytics service on Azure portal. More step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service in the Azure portal. A more detailed step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
-After creating a workspace, you can view your workspace on Synapse Studio by signing into your workspace on https://web.azuresynapse.net, or launching Synapse Studio in the Azure portal.
+After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.
#### Creating a linked service between Azure storage and Synapse
-To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
+To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account, where you've exported your data, with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
-1. On Synapse Studio, navigate to the **Manage** tab, and under **External connections**, select **Linked services**.
+1. In Synapse Studio, browse to the **Manage** tab and under **External connections**, select **Linked services**.
2. Select **New** to add a new linked service.
3. Select **Azure Data Lake Storage Gen2** from the list and select **Continue**.
4. Enter your authentication credentials. Select **Create** when finished.
-Now that you have a linked service between your ADL Gen 2 storage and Synapse, youΓÇÖre ready to use Synapse SQL pools to load and analyze your FHIR data.
+Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
-### Decide between serverless and dedicated SQL pool
+#### Decide between serverless and dedicated SQL pool
Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture.

#### Using serverless SQL pool
-Since itΓÇÖs serverless, there's no infrastructure to setup or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
+Since it's serverless, there's no infrastructure to set up or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
For example, the following query can be used to transform selected fields from `Patient.ndjson` into a tabular structure:
OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}
Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
-The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
```sql
-- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
FIELDTERMINATOR = '0x00'
GO
```
-Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. HereΓÇÖs a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
```sql
SELECT RES.*
INTO Patient
FROM StagingPatient CROSS APPLY OPENJSON(Resource) WITH (
- ResourceId VARCHAR(64) '$.id',
- FullName VARCHAR(100) '$.name[0].text',
- FamilyName VARCHAR(50) '$.name[0].family',
- GivenName VARCHAR(50) '$.name[0].given[0]',
- Gender VARCHAR(20) '$.gender',
- DOB DATETIME2 '$.birthDate',
- MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
- LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
+ ResourceId VARCHAR(64) '$.id',
+ FullName VARCHAR(100) '$.name[0].text',
+ FamilyName VARCHAR(50) '$.name[0].family',
+ GivenName VARCHAR(50) '$.name[0].given[0]',
+ Gender VARCHAR(20) '$.gender',
+ DOB DATETIME2 '$.birthDate',
+ MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
+ LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
) AS RES
GO
```
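The JSON paths in the `OPENJSON` `WITH` clause above can be illustrated outside of SQL. The following Python sketch (using a hypothetical NDJSON row, not data from the article) extracts the same `Patient` fields:

```python
import json

def extract_patient_fields(resource_json):
    """Mirror the OPENJSON WITH clause paths for a FHIR Patient resource (sketch only)."""
    res = json.loads(resource_json)
    name = (res.get("name") or [{}])[0]
    return {
        "ResourceId": res.get("id"),                      # $.id
        "FullName": name.get("text"),                     # $.name[0].text
        "FamilyName": name.get("family"),                 # $.name[0].family
        "GivenName": (name.get("given") or [None])[0],    # $.name[0].given[0]
        "Gender": res.get("gender"),                      # $.gender
        "DOB": res.get("birthDate"),                      # $.birthDate
    }

# Hypothetical NDJSON row for illustration
row = ('{"resourceType": "Patient", "id": "p1", "gender": "female", '
       '"birthDate": "1980-01-02", '
       '"name": [{"text": "Jane Doe", "family": "Doe", "given": ["Jane"]}]}')
print(extract_patient_fields(row)["FamilyName"])  # Doe
```

Each dictionary key corresponds to a column in the `Patient` table created by the SQL query above.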
-## Use FHIR Analytics Pipelines OSS tools
--
-> [!Note]
-> [FHIR Analytics pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-
-### ADF pipeline for moving FHIR data into CDM folder
-
-Common Data Model (CDM) folder is a folder in a data lake that conforms to well-defined and standardized metadata structures and self-describing data. These folders facilitate metadata interoperability between data producers and data consumers. Before you copy FHIR data into CDM folder, you can transform your data into a table configuration.
-
-### Generating table configuration
-
-Clone the repo to get all the scripts and source code. Use `npm install` to install the dependencies. Run the following command from the `Configuration-Generator` folder to generate a table configuration folder using YAML format instructions:
-
-```bash
-Configuration-Generator> node .\generate_from_yaml.js -r {resource configuration file} -p {properties group file} -o {output folder}
-```
-
-You may use the sample `YAML` files, `resourcesConfig.yml` and `propertiesGroupConfig.yml` provided in the repo.
-
-### Generating ADF pipeline
-
-Now you can use the content of the generated table configuration and a few other configurations to generate an ADF pipeline. This ADF pipeline, when triggered, exports the data from the FHIR server using `$export` API and writes to a CDM folder along with associated CDM metadata.
-
-1. Create an Azure Active Directory (AD) application and service principal. The ADF pipeline uses an Azure batch service to do the transformation, and needs an Azure AD application for the batch service. Follow [Azure AD documentation](../../active-directory/develop/howto-create-service-principal-portal.md).
-2. Grant access for export storage location to the service principal. In the `Access Control` of the export storage, grant `Storage Blob Data Contributor` role to the Azure AD application.
-3. Deploy the egress pipeline. Use the template `fhirServiceToCdm.json` for a custom deployment on Azure. This step will create the following Azure resources:
- - An ADF pipeline with the name `{pipelinename}-df`.
- - A key vault with the name `{pipelinename}-kv` to store the client secret.
- - A batch account with the name `{pipelinename}batch` to run the transformation.
- - A storage account with the name `{pipelinename}storage`.
-4. Grant access to the Azure Data Factory. In the access control panel of your FHIR service, grant `FHIR data exporter` and `FHIR data reader` roles to the data factory, `{pipelinename}-df`.
-5. Upload the content of the table configuration folder to the configuration container.
-6. Go to `{pipelinename}-df`, and trigger the pipeline. You should see the exported data in the CDM folder on the storage account `{pipelinename}storage`. You should see one folder for each table having a CSV file.
-
-### From CDM folder to Synapse
-
-Once you have the data exported in a CDM format and stored in your ADL Gen 2 storage, you can now copy your data in the CDM folder to Synapse.
-
-You can create CDM to Synapse pipeline using a configuration file, which would look something like this:
-
-```json
-{
- "ResourceGroup": "",
- "TemplateFilePath": "../Templates/cdmToSynapse.json",
- "TemplateParameters": {
- "DataFactoryName": "",
- "SynapseWorkspace": "",
- "DedicatedSqlPool": "",
- "AdlsAccountForCdm": "",
- "CdmRootLocation": "cdm",
- "StagingContainer": "adfstaging",
- "Entities": ["LocalPatient", "LocalPatientAddress"]
- }
-}
-```
-
-Run this script with the configuration file above:
-
-```bash
-.\DeployCdmToSynapsePipeline.ps1 -Config: config.json
-```
-
-Add ADF Managed Identity as a SQL user into SQL database. Here's a sample SQL script to create a user and assign a role:
-
-```sql
-CREATE USER [datafactory-name] FROM EXTERNAL PROVIDER
-GO
-EXEC sp_addrolemember db_owner, [datafactory-name]
-GO
-```
- ## Next steps
-In this article, you learned two different ways to copy your FHIR data into Synapse: (1) using `$export` to copy data into ADL Gen 2 blob storage then loading the data into Synapse SQL pools, and (2) using ADF pipeline for moving FHIR data into CDM folder then into Synapse.
+In this article, you learned three different ways to copy your FHIR data into Synapse.
-Next, you can learn about anonymization of your FHIR data while copying data to Synapse to ensure your healthcare information is protected:
+Next, you can learn about how you can de-identify your FHIR data while exporting it to Synapse in order to protect PHI.
>[!div class="nextstepaction"] >[Exporting de-identified data](./de-identified-export.md)
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Previously updated : 03/15/2022 Last updated : 03/17/2022
Azure Health Data Services enables you to:
## What are the key differences between Azure Health Data Services and Azure API for FHIR?
-**Linked Services**
+**Linked services**
Azure Health Data Services now supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR, DICOM, and MedTech) that seamlessly work with one another. Services deployed within a workspace also share a compliance boundary and common configuration settings. The product scales automatically to meet the varying demands of your workloads, so you spend less time managing infrastructure and more time generating insights from health data.
Azure Health Data Services now supports multiple health data standards for the e
Azure Health Data Services now includes support for DICOM service. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
-**Incremental changes to the FHIR Service**
+**Incremental changes to the FHIR service**
For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
-* **Support for Transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
+* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
* [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained Search & Reverse Chained Search are no longer limited by 100 items per sub query.
* The $convert-data operation can now transform JSON objects to FHIR R4.
* Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
iot-develop Concepts Azure Rtos Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md
Microsoft recommends an approach based on the principle of *Zero Trust* when des
- **Least-privileged access** Devices should enforce least-privileged access control on local resources across workloads. For example, a firmware component that reports battery level shouldn't be able to access a camera component.

-- **Continual updates** A device should enable the Over-the-Air (OTA) feature, such as the [Device Update for IoT Hub](/azure/iot-hub-device-update/device-update-azure-real-time-operating-system) to push the firmware that contains the patches or bug fixes.
+- **Continual updates** A device should enable the Over-the-Air (OTA) feature, such as the [Device Update for IoT Hub](../iot-hub-device-update/device-update-azure-real-time-operating-system.md) to push the firmware that contains the patches or bug fixes.
-- **Security monitoring and responses** A device should be able to proactively report the security postures for the solution builder to monitor the potential threats for a large number of devices. The [Microsoft Defender for IoT](/azure/defender-for-iot/device-builders/concept-rtos-security-module) can be used for that purpose.
+- **Security monitoring and responses** A device should be able to proactively report the security postures for the solution builder to monitor the potential threats for a large number of devices. The [Microsoft Defender for IoT](../defender-for-iot/device-builders/concept-rtos-security-module.md) can be used for that purpose.
## Embedded security components - cryptography
Use cloud resources to record and analyze device failures remotely. Aggregate er
**Azure RTOS**: No specific Azure RTOS requirements but consider logging Azure RTOS API return codes to look for specific problems with lower-level protocols (for example, TLS alert causes, TCP failures) that may indicate problems.
-**Application**: Make use of logging libraries and your cloud service's client SDK to push error logs to the cloud where they can be stored and analyzed safely without using valuable device storage space. Integration with [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) would provide this functionality and more. Microsoft Defender for IoT provides agent-less monitoring of devices in an IoT solution. Monitoring can be enhanced by including the [Microsoft Defender for IOT micro-agent for Azure RTOS](/azure/defender-for-iot/device-builders/iot-security-azure-rtos) on your device. For more information, see the [Runtime security monitoring and threat detection](#runtime-security-monitoring-and-threat-detection) recommendation.
+**Application**: Make use of logging libraries and your cloud service's client SDK to push error logs to the cloud where they can be stored and analyzed safely without using valuable device storage space. Integration with [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) would provide this functionality and more. Microsoft Defender for IoT provides agent-less monitoring of devices in an IoT solution. Monitoring can be enhanced by including the [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) on your device. For more information, see the [Runtime security monitoring and threat detection](#runtime-security-monitoring-and-threat-detection) recommendation.
### Disable unused protocols and features
Connected IoT devices may not have the necessary resources to implement all secu
**Azure RTOS**: Azure RTOS supports [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/).
-**Application**: The [Microsoft Defender for IOT micro-agent for Azure RTOS](/azure/defender-for-iot/device-builders/iot-security-azure-rtos) provides a comprehensive security solution for Azure RTOS devices. The module provides security services via a small software agent that is built into your device's firmware and comes as part of Azure RTOS. The service includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that will help to improve the security hygiene of your devices. Whether you're using Azure RTOS in combination with Azure Sphere or not, the Microsoft Defender for IoT micro-agent provides an additional layer of security that is built right into the RTOS by default.
+**Application**: The [Microsoft Defender for IOT micro-agent for Azure RTOS](../defender-for-iot/device-builders/iot-security-azure-rtos.md) provides a comprehensive security solution for Azure RTOS devices. The module provides security services via a small software agent that is built into your device's firmware and comes as part of Azure RTOS. The service includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that will help to improve the security hygiene of your devices. Whether you're using Azure RTOS in combination with Azure Sphere or not, the Microsoft Defender for IoT micro-agent provides an additional layer of security that is built right into the RTOS by default.
## Azure RTOS IoT application security checklist
The SMM is a framework for building customized IoT security models, allowing IoT
ISO 27000 is a collection of standards regarding the management and security of information assets, providing baseline guarantees about the security of digital information in certified products.

- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final)
-FIPS 140 is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
+FIPS 140 is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
iot-dps How To Roll Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-roll-certificates.md
The mechanics of installing a new certificate on a device will often involve one
- You can trigger affected devices to send a new certificate signing request (CSR) to your PKI Certificate Authority (CA). In this case, each device will likely be able to download its new device certificate directly from the CA.

-- You can retain a CSR from each device and use that to get a new device certificate from the PKI CA. In this case, you'll need to push the new certificate to each device in a firmware update using a secure OTA update service like [Device Update for IoT Hub](/azure/iot-hub-device-update/).
+- You can retain a CSR from each device and use that to get a new device certificate from the PKI CA. In this case, you'll need to push the new certificate to each device in a firmware update using a secure OTA update service like [Device Update for IoT Hub](../iot-hub-device-update/index.yml).
## Roll the certificate in the IoT hub
Once a certificate is included as part of a disabled enrollment entry, any attem
- To learn more about X.509 certificates in the Device Provisioning Service, see [X.509 certificate attestation](concepts-x509-attestation.md)
- To learn about how to do proof-of-possession for X.509 CA certificates with the Azure IoT Hub Device Provisioning Service, see [How to verify certificates](how-to-verify-certificates.md)
-- To learn about how to use the portal to create an enrollment group, see [Managing device enrollments with Azure portal](how-to-manage-enrollments.md).
+- To learn about how to use the portal to create an enrollment group, see [Managing device enrollments with Azure portal](how-to-manage-enrollments.md).
iot-edge How To Configure Module Build Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-module-build-options.md
+
+ Title: Configure module build options
+description: Learn how to use the module.json file to configure build and deployment options for a module
+++ Last updated : 03/11/2022+++++
+# Configure IoT Edge module build options
++
+The *module.json* file controls how modules are built and deployed. IoT Edge module projects in Visual Studio
+and Visual Studio Code include the *module.json* file. The file contains IoT Edge module
+configuration details including the version and platform that is used when building an IoT Edge
+module.
+
+## *module.json* settings
+
+The *module.json* file includes the following settings:
+
+| Setting | Description |
+| --- | --- |
+| image.repository | The repository of the module. |
+| image.tag.version | The version of the module. |
+| image.tag.platforms | A list of supported platforms and their corresponding dockerfile. Each entry is a platform key and dockerfile pair `<platform key>:<dockerfile>`. |
+| image.buildOptions | The build arguments used when running `docker build`. |
+| image.contextPath | The context path used when running `docker build`. By default, it's the current folder of the *module.json* file. If your Docker build needs files not included in the current folder such as a reference to an external package or project, set the **contextPath** to the root path of all necessary files. Verify the files are copied in the dockerfile. |
+| language | The programming language of the module. |
+
+For example, the following *module.json* file is for a C# IoT Edge module:
+
+```json
+{
+ "$schema-version": "0.0.1",
+ "description": "",
+ "image": {
+ "repository": "localhost:5000/edgemodule",
+ "tag": {
+ "version": "0.0.1",
+ "platforms": {
+ "amd64": "./Dockerfile.amd64",
+ "amd64.debug": "./Dockerfile.amd64.debug",
+ "arm32v7": "./Dockerfile.arm32v7",
+ "arm32v7.debug": "./Dockerfile.arm32v7.debug",
+ "arm64v8": "./Dockerfile.arm64v8",
+ "arm64v8.debug": "./Dockerfile.arm64v8.debug",
+ "windows-amd64": "./Dockerfile.windows-amd64"
+ }
+ },
+ "buildOptions": ["--add-host=docker:10.180.0.1"],
+ "contextPath": "./"
+ },
+ "language": "csharp"
+}
+```
+
+Once the module is built, the final tag of the image is combined with both version and platform as
+`<repository>:<version>-<platform key>`. For this example, the image tag for `amd64.debug` is
+`localhost:5000/edgemodule:0.0.1-amd64.debug`.
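The tag construction described above can be sketched as a one-line rule (an illustration of the `<repository>:<version>-<platform key>` pattern, not the build tooling's actual source):

```python
def image_tag(repository, version, platform_key):
    # Final image tag: <repository>:<version>-<platform key>
    return f"{repository}:{version}-{platform_key}"

# Values taken from the module.json example above
print(image_tag("localhost:5000/edgemodule", "0.0.1", "amd64.debug"))
# localhost:5000/edgemodule:0.0.1-amd64.debug
```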
+
+## Next steps
+
+[Understand the requirements and tools for developing IoT Edge modules](module-development.md)
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-composition.md
Every module has a **settings** property that contains the module **image**, an
The edgeHub module and custom modules also have three properties that tell the IoT Edge agent how to manage them: * **Status**: Whether the module should be running or stopped when first deployed. Required.
-* **RestartPolicy**: When and if the IoT Edge agent should restart the module if it stops. Required.
+* **RestartPolicy**: When and if the IoT Edge agent should restart the module if it stops. If the module is stopped without any errors, it won't start automatically. For more information, see [Docker Docs - Start containers automatically](https://aka.ms/docker-container-restart-policy). Required.
* **StartupOrder**: *Introduced in IoT Edge version 1.0.10.* Which order the IoT Edge agent should start the modules when first deployed. The order is declared with integers, where a module given a startup value of 0 is started first and then higher numbers follow. The edgeAgent module doesn't have a startup value because it always starts first. Optional. The IoT Edge agent initiates the modules in order of the startup value, but does not wait for each module to finish starting before going to the next one.
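The startup ordering described above can be sketched as follows (hypothetical module names; a simplified illustration that assumes modules without a startup value start last, not the IoT Edge agent's implementation):

```python
def startup_sequence(modules):
    """Return module names in start order: edgeAgent first, then ascending startupOrder."""
    ordered = sorted(
        modules,
        key=lambda m: modules[m] if modules[m] is not None else float("inf"),
    )
    return ["edgeAgent"] + ordered

# Hypothetical deployment: edgeHub starts first, then the custom modules
print(startup_sequence({"edgeHub": 0, "tempSensor": 2, "filter": 1}))
# ['edgeAgent', 'edgeHub', 'filter', 'tempSensor']
```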
iot-hub Iot Hub Amqp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-amqp-support.md
To learn more about the AMQP Protocol, see the [AMQP v1.0 specification](https:/
To learn more about IoT Hub messaging, see: * [Cloud-to-device messages](./iot-hub-devguide-messages-c2d.md)
-* [Support for additional protocols](iot-hub-protocol-gateway.md)
+* [Support for additional protocols](../iot-edge/iot-edge-as-gateway.md)
* [Support for the Message Queuing Telemetry Transport (MQTT) Protocol](./iot-hub-mqtt-support.md)
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
For a device/module to connect to your hub, you must still add it to the IoT Hub
### Comparison with a custom gateway
-The token service pattern is the recommended way to implement a custom identity registry/authentication scheme with IoT Hub. This pattern is recommended because IoT Hub continues to handle most of the solution traffic. However, if the custom authentication scheme is so intertwined with the protocol, you may require a *custom gateway* to process all the traffic. An example of such a scenario is using [Transport Layer Security (TLS) and pre-shared keys (PSKs)](https://tools.ietf.org/html/rfc4279). For more information, see the [protocol gateway](iot-hub-protocol-gateway.md) article.
-
+The token service pattern is the recommended way to implement a custom identity registry/authentication scheme with IoT Hub. This pattern is recommended because IoT Hub continues to handle most of the solution traffic. However, if the custom authentication scheme is so intertwined with the protocol, you may require a *custom gateway* to process all the traffic. An example of such a scenario is using [Transport Layer Security (TLS) and pre-shared keys (PSKs)](https://tools.ietf.org/html/rfc4279). For more information, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
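A token service in this pattern typically mints IoT Hub SAS tokens on a device's behalf. The following Python sketch shows the standard SAS construction (HMAC-SHA256 over the URL-encoded resource URI and expiry, signed with the base64-decoded key); the hub name, device ID, and key below are illustrative values only:

```python
import base64
import hashlib
import hmac
import urllib.parse

def generate_sas_token(resource_uri, device_key_b64, expiry_epoch):
    """Build an IoT Hub SAS token: SharedAccessSignature sr=...&sig=...&se=... (sketch)."""
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry_epoch}".encode()
    key = base64.b64decode(device_key_b64)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest()).decode()
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote(sig, safe='')}"
        f"&se={expiry_epoch}"
    )

# Hypothetical hub, device, and key for illustration
token = generate_sas_token(
    "myhub.azure-devices.net/devices/device1",
    base64.b64encode(b"example-key").decode(),
    1700000000,
)
print(token)
```

In a real deployment, the token service would authenticate the caller against the custom identity registry before signing, and a policy name (`skn`) is appended when a shared access policy key is used instead of a device key.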
## Additional reference material
iot-hub Iot Hub Java Java Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-schedule-jobs.md
This tutorial shows you how to:
* Create a back-end app that creates a job to call the **lockDoor** direct method on multiple devices. Another job sends desired property updates to multiple devices.
-At the end of this tutorial, you have a java console device app and a java console back-end app:
+At the end of this tutorial, you have a Java console device app and a Java console back-end app:
**simulated-device** that connects to your IoT hub, implements the **lockDoor** direct method, and handles desired property changes.
iot-hub Iot Hub Message Enrichments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-message-enrichments-overview.md
Previously updated : 05/10/2019 Last updated : 03/16/2022
-#Customer intent: As a developer, I want to be able to add information to messages sent from a device to my IoT Hub, based on the destination endpoint.
+#Customer intent: As a developer, I want to be able to add information to messages sent from a device to my IoT hub, based on the destination endpoint.
# Message enrichments for device-to-cloud IoT Hub messages
-*Message enrichments* is the ability of the IoT Hub to *stamp* messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device telemetry messages with a device twin tag can reduce load on customers to make device twin API calls for this information.
+*Message enrichments* is the ability of an IoT hub to *stamp* messages with additional information before the messages are sent to the designated endpoint. One reason to use message enrichments is to include data that can be used to simplify downstream processing. For example, enriching device telemetry messages with a device twin tag can reduce load on customers to make device twin API calls for this information.
![Message enrichments flow](./media/iot-hub-message-enrichments-overview/message-enrichments-flow.png)
The messages can come from any data source supported by [IoT Hub message routing
* device twin change notifications -- changes in the device twin * device life-cycle events, such as when the device is created or deleted
-You can add enrichments to messages that are going to the built-in endpoint of an IoT Hub, or messages that are being routed to custom endpoints such as Azure Blob storage, a Service Bus queue, or a Service Bus topic.
+You can add enrichments to messages that are going to the built-in endpoint of an IoT hub, or to messages that are being routed to custom endpoints such as Azure Blob storage, a Service Bus queue, or a Service Bus topic.
You can also add enrichments to messages that are being published to Event Grid by first creating an Event Grid subscription with the device telemetry message type. Based on this subscription, a default route is created in Azure IoT Hub for the telemetry. This single route can handle all of your Event Grid subscriptions. You can then configure enrichments for the endpoint by using the **Enrich messages** tab of the IoT Hub **Message routing** section. For information about reacting to events by using Event Grid, see [IoT Hub and Event Grid](iot-hub-event-grid.md).

Enrichments are applied per endpoint. If you specify five enrichments to be stamped for a specific endpoint, all messages going to that endpoint are stamped with the same five enrichments.
-Enrichments can be configured using the the following methods:
+Enrichments can be configured using the following methods:
| **Method** | **Command** |
| -- | --|
To try out message enrichments, see the [message enrichments tutorial](tutorial-
## Limitations
-* You can add up to 10 enrichments per IoT Hub for those hubs in the standard or basic tier. For IoT Hubs in the free tier, you can add up to 2 enrichments.
+* You can add up to 10 enrichments per IoT hub for those hubs in the standard or basic tier. For IoT hubs in the free tier, you can add up to 2 enrichments.
-* In some cases, if you are applying an enrichment with a value set to a tag or property in the device twin, the value will be stamped as a string value. For example, if an enrichment value is set to $twin.tags.field, the messages will be stamped with the string "$twin.tags.field" rather than the value of that field from the twin. This happens in the following cases:
+* In some cases, if you're applying an enrichment with a value set to a tag or property in the device twin, the value will be stamped with the specified device twin path. For example, if an enrichment value is set to $twin.tags.field, the messages will be stamped with the string "$twin.tags.field", rather than the value of that field from the twin. This behavior happens in the following cases:
- * Your IoT Hub is in the basic tier. Basic tier IoT hubs do not support device twins.
+ * Your IoT hub is in the basic tier. Basic tier IoT hubs do not support device twins.
- * Your IoT Hub is in the standard tier, but the device sending the message has no device twin.
+ * Your IoT hub is in the standard tier, but the device sending the message has no device twin.
- * Your IoT Hub is in the standard tier, but the device twin path used for the value of the enrichment does not exist. For example, if the enrichment value is set to $twin.tags.location, and the device twin does not have a location property under tags, the message is stamped with the string "$twin.tags.location".
+ * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment does not exist. For example, if the enrichment value is set to $twin.tags.location, and the device twin does not have a location property under tags, the message is stamped with the string "$twin.tags.location".
+
+ * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment resolves to an object, rather than a simple property. For example, if the enrichment value is set to $twin.tags.location, and the location property under tags is an object that contains child properties like `{"building": 43, "room": 503}`, the message is stamped with the string "$twin.tags.location".
* Updates to a device twin can take up to five minutes to be reflected in the corresponding enrichment value.
-* The total message size, including the enrichments, can't exceed 256 KB. If a message size exceeds 256 KB, the IoT Hub will drop the message. You can use [IoT Hub metrics](monitor-iot-hub-reference.md#metrics) to identify and debug errors when messages are dropped. For example, you can monitor the *telemetry messages incompatible* (*d2c.telemetry.egress.invalid*) metric in the [routing metrics](monitor-iot-hub-reference.md#routing-metrics). To learn more, see [Monitor IoT Hub](monitor-iot-hub.md).
+* The total message size, including the enrichments, can't exceed 256 KB. If a message size exceeds 256 KB, the IoT hub will drop the message. You can use [IoT Hub metrics](monitor-iot-hub-reference.md#metrics) to identify and debug errors when messages are dropped. For example, you can monitor the *telemetry messages incompatible* (*d2c.telemetry.egress.invalid*) metric in the [routing metrics](monitor-iot-hub-reference.md#routing-metrics). To learn more, see [Monitor IoT Hub](monitor-iot-hub.md).
* Message enrichments don't apply to digital twin change events.
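The fallback behavior in the twin-path cases above can be sketched as follows (a simplified illustration of the documented stamping rules, not the actual IoT Hub implementation):

```python
def resolve_enrichment(value, twin):
    """Return the stamped value for an enrichment: the twin value when the path
    resolves to a simple property, otherwise the literal path string."""
    if not value.startswith("$twin."):
        return value                      # static values are stamped as-is
    if twin is None:
        return value                      # no device twin (e.g., basic tier)
    node = twin
    for part in value[len("$twin."):].split("."):
        if not isinstance(node, dict) or part not in node:
            return value                  # path doesn't exist: stamp the literal string
        node = node[part]
    if isinstance(node, dict):
        return value                      # path resolves to an object, not a simple property
    return str(node)

# Hypothetical twin for illustration
twin = {"tags": {"location": {"building": 43, "room": 503}, "customerId": "6ce345b8"}}
print(resolve_enrichment("$twin.tags.customerId", twin))  # 6ce345b8
print(resolve_enrichment("$twin.tags.location", twin))    # $twin.tags.location (object)
print(resolve_enrichment("$twin.tags.field", None))       # $twin.tags.field (no twin)
```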
To try out message enrichments, see the [message enrichments tutorial](tutorial-
## Pricing
-Message enrichments are available for no additional charge. Currently, you are charged when you send a message to an IoT Hub. You are only charged once for that message, even if the message goes to multiple endpoints.
+Message enrichments are available for no additional charge. Currently, you are charged when you send a message to an IoT hub. You are only charged once for that message, even if the message goes to multiple endpoints.
## Next steps
-Check out these articles for more information about routing messages to an IoT Hub:
+Check out these articles for more information about routing messages to an IoT hub:
* [Message enrichments tutorial](tutorial-message-enrichments.md)
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
To respond, the device sends a message with a valid JSON or empty body to the to
For more information, see the [Direct method developer's guide](iot-hub-devguide-direct-methods.md).
-## Additional considerations
-
-As a final consideration, if you need to customize the MQTT protocol behavior on the cloud side, you should review the [Azure IoT protocol gateway](iot-hub-protocol-gateway.md). This software enables you to deploy a high-performance custom protocol gateway that interfaces directly with IoT Hub. The Azure IoT protocol gateway enables you to customize the device protocol to accommodate brownfield MQTT deployments or other custom protocols. This approach does require, however, that you run and operate a custom protocol gateway.
- ## Next steps To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt.org/).
To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt
To learn more about planning your IoT Hub deployment, see: * [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/)
-* [Support additional protocols](iot-hub-protocol-gateway.md)
+* [Support additional protocols](../iot-edge/iot-edge-as-gateway.md)
* [Compare with Event Hubs](iot-hub-compare-event-hubs.md) * [Scaling, HA, and DR](iot-hub-scaling.md)
iot-hub Iot Hub Protocol Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-protocol-gateway.md
Previously updated : 08/19/2021 Last updated : 03/17/2022 + # Support additional protocols for IoT Hub Azure IoT Hub natively supports communication over the MQTT, AMQP, and HTTPS protocols. In some cases, devices or field gateways might not be able to use one of these standard protocols and require protocol adaptation. In such cases, you can use a custom gateway. A custom gateway enables protocol adaptation for IoT Hub endpoints by bridging the traffic to and from IoT Hub. You can use the [Azure IoT protocol gateway](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/README.md) as a custom gateway to enable protocol adaptation for IoT Hub.
+>[!NOTE]
+>The Azure IoT protocol gateway is no longer the recommended method for protocol adaptation. Instead, consider using Azure IoT Edge as a gateway.
+>
+>For more information, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
+ ## Azure IoT protocol gateway The Azure IoT protocol gateway is a framework for protocol adaptation that is designed for high-scale, bidirectional device communication with IoT Hub. The protocol gateway is a pass-through component that accepts device connections over a specific protocol. It bridges the traffic to IoT Hub over AMQP 1.0.
For flexibility, the Azure IoT protocol gateway and MQTT implementation are prov
To learn more about the Azure IoT protocol gateway and how to use and deploy it as part of your IoT solution, see:
-* [Azure IoT protocol gateway repository on GitHub](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/README.md)
-
-* [Azure IoT protocol gateway developer guide](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/docs/DeveloperGuide.md)
-
-To learn more about planning your IoT Hub deployment, see:
-
-* [Compare with Event Hubs](iot-hub-compare-event-hubs.md)
-
-* [Scaling, high availability, and disaster recovery](iot-hub-scaling.md)
-
-* [IoT Hub developer guide](iot-hub-devguide.md)
+* [Azure IoT protocol gateway repository on GitHub](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/README.md)
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
Previously updated : 12/20/2019 Last updated : 03/16/2022 # Customer intent: As a customer using Azure IoT Hub, I want to add information to the messages that come through my IoT hub and are sent to another endpoint. For example, I'd like to pass the IoT hub name to the application that reads the messages from the final endpoint, such as Azure Storage.
At this point, the resources are all set up and the message routing is configure
| DeviceLocation | $twin.tags.location (assumes that the device twin has a location tag) | AzureStorageContainers > ContosoStorageEndpointEnriched | |customerID | 6ce345b8-1e4a-411e-9398-d34587459a3a | AzureStorageContainers > ContosoStorageEndpointEnriched |
- > [!NOTE]
- > If your device doesn't have a twin, the value you put in here will be stamped as a string for the value in the message enrichments. To see the device twin information, go to your hub in the portal and select **IoT devices**. Select your device, and then select **Device twin** at the top of the page.
- >
- > You can edit the twin information to add tags, such as location, and set it to a specific value. For more information, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
- 3. When you're finished, your pane should look similar to this image: ![Table with all enrichments added](./media/tutorial-message-enrichments/all-message-enrichments.png)
At this point, the resources are all set up and the message routing is configure
4. Select **Apply** to save the changes. Skip to the [Test message enrichments](#test-message-enrichments) section. ## Create and configure by using a Resource Manager template+ You can use a Resource Manager template to create and configure the resources, message routing, and message enrichments. 1. Sign in to the Azure portal. Select **+ Create a Resource** to bring up a search box. Enter *template deployment*, and search for it. In the results pane, select **Template deployment (deploy using custom template)**.
You can use a Resource Manager template to create and configure the resources, m
1. Wait for the template to be fully deployed. Select the bell icon at the top of the screen to check on the progress. When it's finished, continue to the [Test message enrichments](#test-message-enrichments) section.
+## Add location tag to the device twin
+
+One of the message enrichments configured on your IoT hub specifies a key of DeviceLocation, with its value determined by the device twin path `$twin.tags.location`. If your device twin doesn't have a location tag, the literal twin path `$twin.tags.location` is stamped as a string for the DeviceLocation value in the message enrichments.
+
+Follow these steps to add a location tag to your device's twin with the portal.
+
+1. Go to your IoT hub by selecting **Resource groups**. Then select the resource group set up for this tutorial (**ContosoResourcesMsgEn**). Find the IoT hub in the list, and select it. Select **Devices** in the left pane of the IoT hub, then select your device (**Contoso-Test-Device**).
+
+1. Select the **Device twin** tab at the top of the device page and add the following line just before the closing brace at the bottom of the device twin. Then select **Save**.
+
+ ```json
+    , "tags": {"location": "Plant 43"}
+    ```
+
+ :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal":::
+
+1. Wait about five minutes before continuing to the next section. It can take up to that long for updates to the device twin to be reflected in message enrichment values.
+
+To learn more about how device twin paths are handled with message enrichments, see [Message enrichments limitations](iot-hub-message-enrichments-overview.md#limitations). To learn more about device twins, see [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md).
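As a purely local illustration of what the enrichment resolves to, you can pull `tags.location` out of a twin JSON document with standard shell tools. The `sed` expression below is just an illustration, not an Azure CLI feature, and the twin snippet mirrors the tag added above.

```shell
# Extract tags.location from a device twin JSON snippet, mimicking
# what the $twin.tags.location enrichment path resolves to.
TWIN='{"deviceId": "Contoso-Test-Device", "tags": {"location": "Plant 43"}}'
location=$(printf '%s' "$TWIN" | sed -n 's/.*"location": *"\([^"]*\)".*/\1/p')
echo "DeviceLocation will be stamped as: $location"
```

If the twin had no `tags.location`, the enrichment would instead stamp the literal string `$twin.tags.location`, as described above.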
+ ## Test message enrichments To view the message enrichments, select **Resource groups**. Then select the resource group you're using for this tutorial. Select the IoT hub from the list of resources, and go to **Messaging**. The message routing configuration and the configured enrichments appear.
The messages in the container called **enriched** have the message enrichments i
When you look at messages that have been enriched, you should see "myIotHub" with the hub name, "DeviceLocation" with the location, and "customerID" with the customer ID, like this: ```json
-{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage","my IoT Hub":"contosotesthubmsgen3276","devicelocation":"Plant 43","customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
+{"EnqueuedTimeUtc":"2019-05-10T06:06:32.7220000Z","Properties":{"level":"storage","myIotHub":"contosotesthubmsgen3276","DeviceLocation":"Plant 43","customerID":"6ce345b8-1e4a-411e-9398-d34587459a3a"},"SystemProperties":{"connectionDeviceId":"Contoso-Test-Device","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"636930642531278483","enqueuedTime":"2019-05-10T06:06:32.7220000Z"},"Body":"eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"}
``` Here's an unenriched message. Notice that "myIotHub," "DeviceLocation," and "customerID" don't show up here because these fields are added by the enrichments. This endpoint has no enrichments.
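The `Body` property in these samples is Base64 encoded. A quick local way to inspect the device telemetry inside a stored message is to decode that field; this sketch uses the `Body` value from the enriched sample above and assumes a shell with the standard `base64` utility.

```shell
# The Body of a stored message is Base64 encoded. Decoding the Body
# value from the enriched sample reveals the original
# device-to-cloud telemetry JSON.
BODY="eyJkZXZpY2VJZCI6IkNvbnRvc28tVGVzdC1EZXZpY2UiLCJ0ZW1wZXJhdHVyZSI6MjkuMjMyMDE2ODQ4MDQyNjE1LCJodW1pZGl0eSI6NjQuMzA1MzQ5NjkyODQ0NDg3LCJwb2ludEluZm8iOiJUaGlzIGlzIGEgc3RvcmFnZSBtZXNzYWdlLiJ9"
printf '%s' "$BODY" | base64 -d
```

This prints the telemetry JSON, beginning with `{"deviceId":"Contoso-Test-Device",...}` — note that the enrichments live in `Properties`, not in the encoded `Body`.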
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-go.md
ms.devlang: golang
In this quickstart, you'll learn to use the Azure SDK for Go to manage certificates in an Azure Key Vault.
-Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](/azure/key-vault/general/overview).
+Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
Follow this guide to learn how to use the [azcertificates](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azcertificates) package to manage your Azure Key Vault certificates using Go.
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Aliases: <your-key-vault-name>.vault.azure.net
## Limitations and Design Considerations
-**Limits**: See [Azure Private Link limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#private-link-limits)
+**Limits**: See [Azure Private Link limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits)
**Pricing**: See [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
Aliases: <your-key-vault-name>.vault.azure.net
## Next Steps - Learn more about [Azure Private Link](../../private-link/private-link-service-overview.md)-- Learn more about [Azure Key Vault](overview.md)
+- Learn more about [Azure Key Vault](overview.md)
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-go.md
ms.devlang: golang
In this quickstart, you'll learn to use the Azure SDK for Go to create, retrieve, update, list, and delete Azure Key Vault keys.
-Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](/azure/key-vault/general/overview).
+Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
Follow this guide to learn how to use the [azkeys](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azkeys) package to manage your Azure Key Vault keys using Go.
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
+
+ Title: 'Quickstart: Create a basic public load balancer - Azure CLI'
+
+description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart using the Azure CLI.
++++ Last updated : 03/16/2022+++
+# Quickstart: Create a basic public load balancer using the Azure CLI
+
+Get started with Azure Load Balancer by using the Azure CLI to create a basic public load balancer and two virtual machines.
+++
+- This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+>[!NOTE]
+>Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](../skus.md)**.
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [az group create](/cli/azure/group#az_group_create):
+
+```azurecli
+ az group create \
+ --name CreatePubLBQS-rg \
+ --location eastus
+```
+
+## Create a virtual network
+
+Before you deploy VMs and test your load balancer, create the supporting virtual network and subnet.
+
+Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create). The virtual network and subnet will contain the resources deployed later in this article.
+
+```azurecli
+ az network vnet create \
+ --resource-group CreatePubLBQS-rg \
+ --location eastus \
+ --name myVNet \
+ --address-prefixes 10.1.0.0/16 \
+ --subnet-name myBackendSubnet \
+ --subnet-prefixes 10.1.0.0/24
+```
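The subnet prefix must fall inside the virtual network's address space (here, 10.1.0.0/24 inside 10.1.0.0/16). The following is a small local sanity check of that containment written in plain shell; the function and variable names are illustrative, and this runs entirely offline.

```shell
# Sanity-check that a subnet prefix (10.1.0.0/24) is contained in the
# VNet address space (10.1.0.0/16) by masking both network addresses
# to the VNet prefix length.
ip_to_int() {
  IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
  echo $(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
}
vnet=10.1.0.0    vnet_len=16
subnet=10.1.0.0  subnet_len=24
mask=$(( (0xFFFFFFFF << (32 - vnet_len)) & 0xFFFFFFFF ))
if [ "$subnet_len" -ge "$vnet_len" ] &&
   [ $(( $(ip_to_int "$subnet") & mask )) -eq $(( $(ip_to_int "$vnet") & mask )) ]
then
  echo "subnet fits inside the VNet"
else
  echo "subnet is outside the VNet"
fi
```

The same check applies to the bastion subnet created later (10.1.1.0/27), which also sits inside 10.1.0.0/16.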
+
+## Create a public IP address
+
+To access your web app on the Internet, you need a public IP address for the load balancer.
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create the public IP for the load balancer frontend.
+
+```azurecli
+ az network public-ip create \
+ --resource-group CreatePubLBQS-rg \
+ --name myPublicIP \
+ --sku Basic
+```
+
+## Create a load balancer
+
+This section details how you can create and configure the following components of the load balancer:
+
+ * A frontend IP pool that receives the incoming network traffic on the load balancer
+
+ * A backend IP pool where the frontend pool sends the load balanced network traffic
+
+ * A health probe that determines health of the backend VM instances
+
+ * A load balancer rule that defines how traffic is distributed to the VMs
+
+### Create the load balancer resource
+
+Create a public load balancer with [az network lb create](/cli/azure/network/lb#az_network_lb_create):
+
+```azurecli
+ az network lb create \
+ --resource-group CreatePubLBQS-rg \
+ --name myLoadBalancer \
+ --sku Basic \
+ --public-ip-address myPublicIP \
+ --frontend-ip-name myFrontEnd \
+    --backend-pool-name myBackendPool
+```
+
+### Create the health probe
+
+A health probe checks all virtual machine instances to ensure they can send network traffic.
+
+A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
+
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az_network_lb_probe_create):
+
+```azurecli
+ az network lb probe create \
+ --resource-group CreatePubLBQS-rg \
+ --lb-name myLoadBalancer \
+ --name myHealthProbe \
+ --protocol tcp \
+ --port 80
+```
+
+### Create the load balancer rule
+
+A load balancer rule defines:
+
+* Frontend IP configuration for the incoming traffic
+
+* The backend IP pool to receive the traffic
+
+* The required source and destination port
+
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create):
+
+```azurecli
+ az network lb rule create \
+ --resource-group CreatePubLBQS-rg \
+ --lb-name myLoadBalancer \
+ --name myHTTPRule \
+ --protocol tcp \
+ --frontend-port 80 \
+ --backend-port 80 \
+ --frontend-ip-name myFrontEnd \
+    --backend-pool-name myBackendPool \
+ --probe-name myHealthProbe \
+ --idle-timeout 15
+```
+
+## Create a network security group
+
+For a standard load balancer, the VMs in the backend pool are required to have network interfaces that belong to a network security group.
+
+Use [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create) to create the network security group:
+
+```azurecli
+ az network nsg create \
+ --resource-group CreatePubLBQS-rg \
+ --name myNSG
+```
+
+### Create a network security group rule
+
+Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create):
+
+```azurecli
+ az network nsg rule create \
+ --resource-group CreatePubLBQS-rg \
+ --nsg-name myNSG \
+ --name myNSGRuleHTTP \
+ --protocol '*' \
+ --direction inbound \
+ --source-address-prefix '*' \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 80 \
+ --access allow \
+ --priority 200
+```
+
+## Create a bastion host
+
+In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
+
+### Create a public IP address
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources.
+
+```azurecli
+ az network public-ip create \
+ --resource-group CreatePubLBQS-rg \
+ --name myBastionIP \
+ --sku Standard \
+ --zone 1 2 3
+```
+### Create a bastion subnet
+
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a bastion subnet. The bastion subnet is used by the bastion host to access the virtual network.
+
+```azurecli
+ az network vnet subnet create \
+ --resource-group CreatePubLBQS-rg \
+ --name AzureBastionSubnet \
+ --vnet-name myVNet \
+ --address-prefixes 10.1.1.0/27
+```
+
+### Create bastion host
+
+Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a bastion host. The bastion host is used to connect securely to the virtual machine resources created later in this article.
+
+```azurecli
+ az network bastion create \
+ --resource-group CreatePubLBQS-rg \
+ --name myBastionHost \
+ --public-ip-address myBastionIP \
+ --vnet-name myVNet \
+ --location eastus
+```
+
+It can take a few minutes for the Azure Bastion host to deploy.
+
+## Create backend servers
+
+In this section, you create:
+
+* Two network interfaces for the virtual machines
+
+* Two virtual machines to be used as backend servers for the load balancer
+
+### Create network interfaces for the virtual machines
+
+Create two network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create):
+
+```azurecli
+ array=(myNicVM1 myNicVM2)
+ for vmnic in "${array[@]}"
+ do
+ az network nic create \
+ --resource-group CreatePubLBQS-rg \
+ --name $vmnic \
+ --vnet-name myVNet \
+            --subnet myBackendSubnet \
+ --network-security-group myNSG
+ done
+```
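The commands above iterate over a Bash array so both NICs are created with one loop. If the `${array[@]}` expansion is unfamiliar, here is the same pattern in isolation, with `echo` standing in for the `az` command so it can run anywhere.

```shell
# The quickstart creates both NICs with a single loop over a Bash array.
# echo stands in for "az network nic create" in this offline sketch.
array=(myNicVM1 myNicVM2)
for vmnic in "${array[@]}"
do
  echo "would create NIC: $vmnic"
done
```

The quoted `"${array[@]}"` form expands each element as its own word, which is why names with spaces would also survive the loop intact.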
+
+### Create availability set for virtual machines
+
+Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az_vm_availability_set_create):
+
+```azurecli
+ az vm availability-set create \
+ --name myAvSet \
+ --resource-group CreatePubLBQS-rg \
+ --location eastus
+```
+
+### Create virtual machines
+
+Create the virtual machines with [az vm create](/cli/azure/vm#az_vm_create):
+
+```azurecli
+ az vm create \
+ --resource-group CreatePubLBQS-rg \
+ --name myVM1 \
+ --nics myNicVM1 \
+ --image win2019datacenter \
+ --admin-username azureuser \
+ --availability-set myAvSet \
+ --no-wait
+```
+
+```azurecli
+ az vm create \
+ --resource-group CreatePubLBQS-rg \
+ --name myVM2 \
+ --nics myNicVM2 \
+ --image win2019datacenter \
+ --admin-username azureuser \
+ --availability-set myAvSet \
+ --no-wait
+```
+
+It may take a few minutes for the VMs to deploy. You can continue to the next steps while the VMs are being created.
++
+### Add virtual machines to load balancer backend pool
+
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add):
+
+```azurecli
+ array=(myNicVM1 myNicVM2)
+ for vmnic in "${array[@]}"
+ do
+ az network nic ip-config address-pool add \
+ --address-pool myBackendPool \
+ --ip-config-name ipconfig1 \
+ --nic-name $vmnic \
+ --resource-group CreatePubLBQS-rg \
+ --lb-name myLoadBalancer
+ done
+```
+
+## Install IIS
+
+Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install IIS on the virtual machines and set the default website to the computer name.
+
+```azurecli
+ array=(myVM1 myVM2)
+ for vm in "${array[@]}"
+ do
+ az vm extension set \
+ --publisher Microsoft.Compute \
+ --version 1.8 \
+ --name CustomScriptExtension \
+ --vm-name $vm \
+ --resource-group CreatePubLBQS-rg \
+ --settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
+ done
+```
+
+## Test the load balancer
+
+To get the public IP address of the load balancer, use [az network public-ip show](/cli/azure/network/public-ip#az_network_public_ip_show).
+
+Copy the public IP address, and then paste it into the address bar of your browser.
+
+```azurecli
+ az network public-ip show \
+ --resource-group CreatePubLBQS-rg \
+ --name myPublicIP \
+ --query ipAddress \
+ --output tsv
+```
+
+## Clean up resources
+
+When no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, load balancer, and all related resources.
+
+```azurecli
+ az group delete \
+ --name CreatePubLBQS-rg
+```
+
+## Next steps
+
+In this quickstart, you:
+
+* Created a basic public load balancer
+
+* Attached two virtual machines
+
+* Configured the load balancer traffic rule and health probe
+
+* Tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](../load-balancer-overview.md)
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
+
+ Title: 'Quickstart: Create a basic public load balancer - Azure portal'
+
+description: Learn how to create a public basic SKU Azure Load Balancer in this quickstart.
++++ Last updated : 03/15/2022+++
+# Quickstart: Create a basic public load balancer using the Azure portal
+
+Get started with Azure Load Balancer by using the Azure portal to create a basic public load balancer and two virtual machines.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+>[!NOTE]
+>Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](../skus.md)**.
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+
+## Create the virtual network
+
+In this section, you'll create a virtual network and subnet.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **CreatePubLBQS-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **West US 3** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**.
+
+7. In **Edit subnet**, enter the following information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
++
+11. Select the **Review + create** tab or select the **Review + create** button.
+
+12. Select **Create**.
+
+## Create load balancer
+
+In this section, you create a load balancer that load balances virtual machines.
+
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **+ Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePubLBQS-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **West US 3**. |
+ | SKU | Select **Basic**. |
+ | Type | Select **Public**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+
+6. Enter **myFrontend** in **Name**.
+
+7. Select **IPv4** or **IPv6** for the **IP version**.
+
+8. Select **Create new** in **Public IP address**.
+
+9. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+10. In **Assignment**, select **Static**.
+
+11. Select **OK**.
+
+12. Select **Add**.
+
+13. Select **Next: Backend pools** at the bottom of the page.
+
+14. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+15. Enter or select the following information in **Add backend pool**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myBackendPool**. |
+ | Virtual network | Select **myVNet (CreatePubLBQS-rg)**. |
+ | Associated to | Select **Virtual machines**. |
+ | IP version | Select **IPv4** or **IPv6**. |
+
+16. Select **Add**.
+
+17. Select the **Next: Inbound rules** button at the bottom of the page.
+
+18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+19. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | Floating IP | Select **Disabled**. |
+
+20. Select **Add**.
+
+21. Select the blue **Review + create** button at the bottom of the page.
+
+22. Select **Create**.
+
+## Create virtual machines
+
+In this section, you'll create two VMs (**myVM1** and **myVM2**).
+
+The two VMs will be added to an availability set named **myAvailabilitySet**.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+
+3. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreatePubLBQS-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **West US 3** |
+ | Availability Options | Select **Availability set** |
+ | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet** in **Name**. </br> Select **OK** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter - Gen2** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+5. In the Networking tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | Select **myVNet** |
+ | Subnet | Select **myBackendSubnet** |
+ | Public IP | Select **None** |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> In **Source port ranges**, enter **80**. </br> In **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | **Load balancing** | |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the box |
+ | **Load balancing settings** | |
+ | Load balancing options | Select **Azure Load Balancer**. |
+ | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a backend pool | Select **myBackendPool**. |
+
+6. Select **Review + create**.
+
+7. Review the settings, and then select **Create**.
+
+8. Follow steps 1 through 7 to create one more VM with the following values and all the other settings the same as **myVM1**:
+
+ | Setting | VM 2 |
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability set | Select **myAvailabilitySet** |
+ | Network security group | Select the existing **myNSG** |
++
+## Install IIS
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM1**.
+
+3. On the **Overview** page, select **Connect**, then **Bastion**.
+
+4. Select **Use Bastion**.
+
+5. Enter the username and password entered during VM creation.
+
+6. Select **Connect**.
+
+7. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
+
+8. In the PowerShell Window, run the following commands to:
+
+ * Install the IIS server
+
+ * Remove the default iisstart.htm file
+
+ * Add a new iisstart.htm file that displays the name of the VM:
+
+ ```powershell
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+ ```
+
+9. Close the bastion session with **myVM1**.
+
+10. Repeat steps 1 to 9 to install IIS and the updated iisstart.htm file on **myVM2**.
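If you'd rather script this step than repeat it interactively, the same IIS setup can be pushed to both VMs from a shell with `az vm run-command invoke` — a sketch, not part of the portal walkthrough, assuming the VM and resource group names used in this quickstart:

```azurecli
# Install IIS and write the per-VM iisstart.htm on both backend VMs
for vm in myVM1 myVM2; do
  az vm run-command invoke \
    --resource-group CreatePubLBQS-rg \
    --name $vm \
    --command-id RunPowerShellScript \
    --scripts "Install-WindowsFeature -Name Web-Server -IncludeManagementTools; Remove-Item C:\inetpub\wwwroot\iisstart.htm; Add-Content -Path C:\inetpub\wwwroot\iisstart.htm -Value ('Hello World from ' + \$env:COMPUTERNAME)"
done
```

`RunPowerShellScript` executes the script inside the guest OS, so no Bastion session is needed for this step.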
+
+## Test the load balancer
+
+1. In the search box at the top of the page, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Find the public IP address for the load balancer on the **Overview** page under **Public IP address**.
+
+3. Copy the public IP address, and then paste it into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
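As an alternative to the browser, you can exercise the frontend from a shell — a sketch, where `<load-balancer-public-IP>` stands in for the address copied in the previous step:

```azurecli
# Request the page repeatedly; responses typically come from both backend VMs,
# since each new connection is distributed by the load-balancing rule
for i in {1..4}; do
  curl --silent http://<load-balancer-public-IP>
  echo
done
```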
+
+## Clean up resources
+
+When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **CreatePubLBQS-rg** that contains the resources and then select **Delete**.
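If you prefer the command line, the same cleanup can be done with the Azure CLI — a sketch, assuming the resource group name from this quickstart:

```azurecli
# Delete the resource group and every resource it contains
az group delete \
  --name CreatePubLBQS-rg \
  --yes \
  --no-wait
```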
+
+## Next steps
+
+In this quickstart, you:
+
+* Created a basic public load balancer.
+
+* Attached two VMs to the load balancer.
+
+* Tested the load balancer.
+
+To learn more about Azure Load Balancer, continue to:
+
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](../load-balancer-overview.md)
+
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
For pricing see [Load Balancer pricing](https://azure.microsoft.com/pricing/deta
* Gateway Load Balancer doesn't work with the Global Load Balancer tier.
* Cross-tenant chaining is not supported through the Azure portal.
+* Gateway Load Balancer does not currently support IPv6.
## Next steps
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Title: "Quickstart: Create a public load balancer - Azure CLI" description: This quickstart shows how to create a public load balancer using the Azure CLI- -
-tags: azure-resource-manager
- Previously updated : 11/23/2020 Last updated : 03/16/2022 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs. + # Quickstart: Create a public load balancer to load balance VMs using Azure CLI
-Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and three virtual machines.
+Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and two virtual machines.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
An Azure resource group is a logical container into which Azure resources are de
Create a resource group with [az group create](/cli/azure/group#az_group_create):
-* Named **CreatePubLBQS-rg**.
-* In the **eastus** location.
-
-```azurecli-interactive
+```azurecli
az group create \
  --name CreatePubLBQS-rg \
  --location eastus
```
-# [**Standard SKU**](#tab/option-1-create-load-balancer-standard)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
-
-## Configure virtual network - Standard
+## Create a virtual network
-Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
+Before you deploy VMs and test your load balancer, create the supporting virtual network and subnet.
-### Create a virtual network
+Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create). The virtual network and subnet will contain the resources deployed later in this article.
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_createt):
-
-* Named **myVNet**.
-* Address prefix of **10.1.0.0/16**.
-* Subnet named **myBackendSubnet**.
-* Subnet prefix of **10.1.0.0/24**.
-* In the **CreatePubLBQS-rg** resource group.
-* Location of **eastus**.
-
-```azurecli-interactive
+```azurecli
az network vnet create \
  --resource-group CreatePubLBQS-rg \
  --location eastus \
Create a virtual network using [az network vnet create](/cli/azure/network/vnet#
  --subnet-name myBackendSubnet \
  --subnet-prefixes 10.1.0.0/24
```
-### Create a public IP address
-
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public ip address for the bastion host:
-
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CCreatePubLBQS-rg**.
-
-```azurecli-interactive
-az network public-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myBastionIP \
- --sku Standard
-```
-### Create a bastion subnet
-
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a bastion subnet:
-
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.1.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreatePubLBQS-rg**.
-
-```azurecli-interactive
-az network vnet subnet create \
- --resource-group CreatePubLBQS-rg \
- --name AzureBastionSubnet \
- --vnet-name myVNet \
- --address-prefixes 10.1.1.0/24
-```
-
-### Create bastion host
-
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a bastion host:
-
-* Named **myBastionHost**.
-* In **CreatePubLBQS-rg**.
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
-
-```azurecli-interactive
-az network bastion create \
- --resource-group CreatePubLBQS-rg \
- --name myBastionHost \
- --public-ip-address myBastionIP \
- --vnet-name myVNet \
- --location eastus
-```
-
-It can take a few minutes for the Azure Bastion host to deploy.
-
-### Create a network security group
-
-For a standard load balancer, the VMs in the backend address for are required to have network interfaces that belong to a network security group.
-
-Create a network security group using [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create):
-
-* Named **myNSG**.
-* In resource group **CreatePubLBQS-rg**.
-
-```azurecli-interactive
- az network nsg create \
- --resource-group CreatePubLBQS-rg \
- --name myNSG
-```
-
-### Create a network security group rule
-
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create):
-
-* Named **myNSGRuleHTTP**.
-* In the network security group you created in the previous step, **myNSG**.
-* In resource group **CreatePubLBQS-rg**.
-* Protocol **(*)**.
-* Direction **Inbound**.
-* Source **(*)**.
-* Destination **(*)**.
-* Destination port **Port 80**.
-* Access **Allow**.
-* Priority **200**.
-
-```azurecli-interactive
- az network nsg rule create \
- --resource-group CreatePubLBQS-rg \
- --nsg-name myNSG \
- --name myNSGRuleHTTP \
- --protocol '*' \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 200
-```
-
-## Create backend servers - Standard
-
-In this section, you create:
-
-* Three network interfaces for the virtual machines.
-* Three virtual machines to be used as backend servers for the load balancer.
-
-### Create network interfaces for the virtual machines
-
-Create three network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create):
-
-* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* In resource group **CreatePubLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
-
-```azurecli-interactive
- array=(myNicVM1 myNicVM2 myNicVM3)
- for vmnic in "${array[@]}"
- do
- az network nic create \
- --resource-group CreatePubLBQS-rg \
- --name $vmnic \
- --vnet-name myVNet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG
- done
-```
-
-### Create virtual machines
-
-Create the virtual machines with [az vm create](/cli/azure/vm#az_vm_create):
-
-### VM1
-* Named **myVM1**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM1**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 1**.
-
-```azurecli-interactive
- az vm create \
- --resource-group CreatePubLBQS-rg \
- --name myVM1 \
- --nics myNicVM1 \
- --image win2019datacenter \
- --admin-username azureuser \
- --zone 1 \
- --no-wait
-```
-#### VM2
-* Named **myVM2**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM2**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 2**.
-
-```azurecli-interactive
- az vm create \
- --resource-group CreatePubLBQS-rg \
- --name myVM2 \
- --nics myNicVM2 \
- --image win2019datacenter \
- --admin-username azureuser \
- --zone 2 \
- --no-wait
-```
-
-#### VM3
-* Named **myVM3**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM3**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 3**.
-
-```azurecli-interactive
- az vm create \
- --resource-group CreatePubLBQS-rg \
- --name myVM3 \
- --nics myNicVM3 \
- --image win2019datacenter \
- --admin-username azureuser \
- --zone 3 \
- --no-wait
-```
-It may take a few minutes for the VMs to deploy.
-
-## Create a public IP address - Standard
+## Create a public IP address
To access your web app on the Internet, you need a public IP address for the load balancer.
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to:
-
-* Create a standard zone redundant public IP address named **myPublicIP**.
-* In **CreatePubLBQS-rg**.
+Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create the public IP for the load balancer frontend.
-```azurecli-interactive
+```azurecli
az network public-ip create \
  --resource-group CreatePubLBQS-rg \
  --name myPublicIP \
- --sku Standard
+ --sku Standard \
+ --zone 1 2 3
```
-To create a zonal public IP address in Zone 1:
+To create a zonal public IP address in Zone 1, use the following command:
-```azurecli-interactive
+```azurecli
az network public-ip create \
  --resource-group CreatePubLBQS-rg \
  --name myPublicIP \
To create a zonal public IP address in Zone 1:
  --zone 1
```
-## Create standard load balancer
+## Create a load balancer
This section details how you can create and configure the following components of the load balancer:
- * A frontend IP pool that receives the incoming network traffic on the load balancer.
- * A backend IP pool where the frontend pool sends the load balanced network traffic.
- * A health probe that determines health of the backend VM instances.
- * A load balancer rule that defines how traffic is distributed to the VMs.
+ * A frontend IP pool that receives the incoming network traffic on the load balancer
+
+ * A backend IP pool where the frontend pool sends the load balanced network traffic
+
+ * A health probe that determines health of the backend VM instances
+
+ * A load balancer rule that defines how traffic is distributed to the VMs
### Create the load balancer resource

Create a public load balancer with [az network lb create](/cli/azure/network/lb#az_network_lb_create):
-* Named **myLoadBalancer**.
-* A frontend pool named **myFrontEnd**.
-* A backend pool named **myBackEndPool**.
-* Associated with the public IP address **myPublicIP** that you created in the preceding step.
-
-```azurecli-interactive
+```azurecli
az network lb create \
  --resource-group CreatePubLBQS-rg \
  --name myLoadBalancer \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
- --backend-pool-name myBackEndPool
+ --backend-pool-name myBackEndPool
```

### Create the health probe
A virtual machine with a failed probe check is removed from the load balancer. T
Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az_network_lb_probe_create):
-* Monitors the health of the virtual machines.
-* Named **myHealthProbe**.
-* Protocol **TCP**.
-* Monitoring **Port 80**.
-
-```azurecli-interactive
+```azurecli
az network lb probe create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
- --port 80
+ --port 80
```

### Create the load balancer rule

A load balancer rule defines:
-* Frontend IP configuration for the incoming traffic.
-* The backend IP pool to receive the traffic.
-* The required source and destination port.
+* Frontend IP configuration for the incoming traffic
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create):
+* The backend IP pool to receive the traffic
-* Named **myHTTPRule**
-* Listening on **Port 80** in the frontend pool **myFrontEnd**.
-* Sending load-balanced network traffic to the backend address pool **myBackEndPool** using **Port 80**.
-* Using health probe **myHealthProbe**.
-* Protocol **TCP**.
-* Idle timeout of **15 minutes**.
-* Enable TCP reset.
+* The required source and destination port
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create):
-```azurecli-interactive
+```azurecli
az network lb rule create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
Create a load balancer rule with [az network lb rule create](/cli/azure/network/
  --disable-outbound-snat true \
  --idle-timeout 15 \
  --enable-tcp-reset true
```
-### Add virtual machines to load balancer backend pool
-
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add):
-* In backend address pool **myBackEndPool**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with load balancer **myLoadBalancer**.
+## Create a network security group
-```azurecli-interactive
- array=(myNicVM1 myNicVM2 myNicVM3)
- for vmnic in "${array[@]}"
- do
- az network nic ip-config address-pool add \
- --address-pool myBackendPool \
- --ip-config-name ipconfig1 \
- --nic-name $vmnic \
- --resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer
- done
-```
-
-## Create outbound rule configuration
-Load balancer outbound rules configure outbound SNAT for VMs in the backend pool.
-
-For more information on outbound connections, see [Outbound connections in Azure](load-balancer-outbound-connections.md).
-
-A public IP or prefix can be used for the outbound configuration.
-
-### Public IP
-
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a single IP for the outbound connectivity.
-
-* Named **myPublicIPOutbound**.
-* In **CreatePubLBQS-rg**.
-
-```azurecli-interactive
- az network public-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myPublicIPOutbound \
- --sku Standard
-```
-
-To create a zonal redundant public IP address in Zone 1:
-
-```azurecli-interactive
- az network public-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myPublicIPOutbound \
- --sku Standard \
- --zone 1
-```
-
-### Public IP Prefix
-
-Use [az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create) to create a public IP prefix for the outbound connectivity.
-
-* Named **myPublicIPPrefixOutbound**.
-* In **CreatePubLBQS-rg**.
-* Prefix length of **28**.
-
-```azurecli-interactive
- az network public-ip prefix create \
- --resource-group CreatePubLBQS-rg \
- --name myPublicIPPrefixOutbound \
- --length 28
-```
-To create a zonal redundant public IP prefix in Zone 1:
-
-```azurecli-interactive
- az network public-ip prefix create \
- --resource-group CreatePubLBQS-rg \
- --name myPublicIPPrefixOutbound \
- --length 28 \
- --zone 1
-```
-
-For more information on scaling outbound NAT and outbound connectivity, see [Scale outbound NAT with multiple IP addresses](load-balancer-outbound-connections.md).
-
-### Create outbound frontend IP configuration
-
-Create a new frontend IP configuration with [az network lb frontend-ip create
-](/cli/azure/network/lb/frontend-ip#az_network_lb_frontend_ip_create):
-
-Select the public IP or public IP prefix commands based on decision in previous step.
-
-#### Public IP
-
-* Named **myFrontEndOutbound**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with public IP address **myPublicIPOutbound**.
-* Associated with load balancer **myLoadBalancer**.
-
-```azurecli-interactive
- az network lb frontend-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myFrontEndOutbound \
- --lb-name myLoadBalancer \
- --public-ip-address myPublicIPOutbound
-```
-
-#### Public IP prefix
-
-* Named **myFrontEndOutbound**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with public IP prefix **myPublicIPPrefixOutbound**.
-* Associated with load balancer **myLoadBalancer**.
-
-```azurecli-interactive
- az network lb frontend-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myFrontEndOutbound \
- --lb-name myLoadBalancer \
- --public-ip-prefix myPublicIPPrefixOutbound
-```
-
-### Create outbound pool
-
-Create a new outbound pool with [az network lb address-pool create](/cli/azure/network/lb/address-pool#az_network_lb_address_pool_create):
+For a standard load balancer, the VMs in the backend pool are required to have network interfaces that belong to a network security group.
-* Named **myBackEndPoolOutbound**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with load balancer **myLoadBalancer**.
+Use [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create) to create the network security group:
-```azurecli-interactive
- az network lb address-pool create \
+```azurecli
+ az network nsg create \
--resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer \
- --name myBackendPoolOutbound
+ --name myNSG
```
-### Create outbound rule
-Create a new outbound rule for the outbound backend pool with [az network lb outbound-rule create](/cli/azure/network/lb/outbound-rule#az_network_lb_outbound_rule_create):
+### Create a network security group rule
-* Named **myOutboundRule**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with load balancer **myLoadBalancer**
-* Associated with frontend **myFrontEndOutbound**.
-* Protocol **All**.
-* Idle timeout of **15**.
-* **10000** outbound ports.
-* Associated with backend pool **myBackEndPoolOutbound**.
+Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create):
-```azurecli-interactive
- az network lb outbound-rule create \
+```azurecli
+ az network nsg rule create \
--resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer \
- --name myOutboundRule \
- --frontend-ip-configs myFrontEndOutbound \
- --protocol All \
- --idle-timeout 15 \
- --outbound-ports 10000 \
- --address-pool myBackEndPoolOutbound
-```
-### Add virtual machines to outbound pool
-
-Add the virtual machines to the outbound pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add):
--
-* In backend address pool **myBackEndPoolOutbound**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with load balancer **myLoadBalancer**.
-
-```azurecli-interactive
- array=(myNicVM1 myNicVM2 myNicVM3)
- for vmnic in "${array[@]}"
- do
- az network nic ip-config address-pool add \
- --address-pool myBackendPoolOutbound \
- --ip-config-name ipconfig1 \
- --nic-name $vmnic \
- --resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer
- done
+ --nsg-name myNSG \
+ --name myNSGRuleHTTP \
+ --protocol '*' \
+ --direction inbound \
+ --source-address-prefix '*' \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 80 \
+ --access allow \
+ --priority 200
```
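To confirm the rule landed as intended, you can query it back with `az network nsg rule show` — a sketch, using the resource names above:

```azurecli
# Show the key fields of the rule just created
az network nsg rule show \
  --resource-group CreatePubLBQS-rg \
  --nsg-name myNSG \
  --name myNSGRuleHTTP \
  --query '{access:access, port:destinationPortRange, priority:priority}'
```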
-# [**Basic SKU**](#tab/option-1-create-load-balancer-basic)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
--
-## Configure virtual network - Basic
-
-Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
-
-### Create a virtual network
+## Create a bastion host
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create):
-
-* Named **myVNet**.
-* Address prefix of **10.1.0.0/16**.
-* Subnet named **myBackendSubnet**.
-* Subnet prefix of **10.1.0.0/24**.
-* In the **CreatePubLBQS-rg** resource group.
-* Location of **eastus**.
-
-```azurecli-interactive
- az network vnet create \
- --resource-group CreatePubLBQS-rg \
- --location eastus \
- --name myVNet \
- --address-prefixes 10.1.0.0/16 \
- --subnet-name myBackendSubnet \
- --subnet-prefixes 10.1.0.0/24
-```
+In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer.
### Create a public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public ip address for the bastion host:
-
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CreatePubLBQS-rg**.
+Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources.
-```azurecli-interactive
-az network public-ip create \
+```azurecli
+ az network public-ip create \
  --resource-group CreatePubLBQS-rg \
  --name myBastionIP \
- --sku Standard
+ --sku Standard \
+ --zone 1 2 3
```

### Create a bastion subnet
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a bastion subnet:
-
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.1.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreatePubLBQS-rg**.
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a bastion subnet. The bastion subnet is used by the bastion host to access the virtual network.
-```azurecli-interactive
-az network vnet subnet create \
+```azurecli
+ az network vnet subnet create \
  --resource-group CreatePubLBQS-rg \
  --name AzureBastionSubnet \
  --vnet-name myVNet \
- --address-prefixes 10.1.1.0/24
+ --address-prefixes 10.1.1.0/27
```

### Create bastion host
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a bastion host:
+Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a bastion host. The bastion host is used to connect securely to the virtual machine resources created later in this article.
-* Named **myBastionHost**.
-* In **CreatePubLBQS-rg**.
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
-
-```azurecli-interactive
-az network bastion create \
+```azurecli
+ az network bastion create \
--resource-group CreatePubLBQS-rg \ --name myBastionHost \ --public-ip-address myBastionIP \
az network bastion create \
It can take a few minutes for the Azure Bastion host to deploy.
-### Create a network security group
-
-For a standard load balancer, the VMs in the backend address for are required to have network interfaces that belong to a network security group.
-
-Create a network security group using [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create):
-
-* Named **myNSG**.
-* In resource group **CreatePubLBQS-rg**.
-
-```azurecli-interactive
- az network nsg create \
- --resource-group CreatePubLBQS-rg \
- --name myNSG
-```
-
-### Create a network security group rule
-
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create):
-
-* Named **myNSGRuleHTTP**.
-* In the network security group you created in the previous step, **myNSG**.
-* In resource group **CreatePubLBQS-rg**.
-* Protocol **(*)**.
-* Direction **Inbound**.
-* Source **(*)**.
-* Destination **(*)**.
-* Destination port **Port 80**.
-* Access **Allow**.
-* Priority **200**.
-
-```azurecli-interactive
- az network nsg rule create \
- --resource-group CreatePubLBQS-rg \
- --nsg-name myNSG \
- --name myNSGRuleHTTP \
- --protocol '*' \
- --direction inbound \
- --source-address-prefix '*' \
- --source-port-range '*' \
- --destination-address-prefix '*' \
- --destination-port-range 80 \
- --access allow \
- --priority 200
-```
-
-## Create backend servers - Basic
+## Create backend servers
In this section, you create:
-* Three network interfaces for the virtual machines.
-* Availability set for the virtual machines
-* Three virtual machines to be used as backend servers for the load balancer.
+* Two network interfaces for the virtual machines
+* Two virtual machines to be used as backend servers for the load balancer
### Create network interfaces for the virtual machines
-Create three network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create):
--
-* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
-* In resource group **CreatePubLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
+Create two network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create):
-```azurecli-interactive
- array=(myNicVM1 myNicVM2 myNicVM3)
+```azurecli
+ array=(myNicVM1 myNicVM2)
for vmnic in "${array[@]}"
do
  az network nic create \
Create three network interfaces with [az network nic create](/cli/azure/network/
    --network-security-group myNSG
done
```
-### Create availability set for virtual machines
-
-Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az_vm_availability_set_create):
-
-* Named **myAvSet**.
-* In resource group **CreatePubLBQS-rg**.
-* Location **eastus**.
-
-```azurecli-interactive
- az vm availability-set create \
- --name myAvSet \
- --resource-group CreatePubLBQS-rg \
- --location eastus
-
-```
### Create virtual machines

Create the virtual machines with [az vm create](/cli/azure/vm#az_vm_create):
-### VM1
-* Named **myVM1**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM1**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 1**.
-
-```azurecli-interactive
+```azurecli
az vm create \
  --resource-group CreatePubLBQS-rg \
  --name myVM1 \
  --nics myNicVM1 \
  --image win2019datacenter \
  --admin-username azureuser \
- --availability-set myAvSet \
+ --zone 1 \
  --no-wait
```
-#### VM2
-* Named **myVM2**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM2**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 2**.
-
-```azurecli-interactive
+
+```azurecli
az vm create \
  --resource-group CreatePubLBQS-rg \
  --name myVM2 \
  --nics myNicVM2 \
  --image win2019datacenter \
  --admin-username azureuser \
- --availability-set myAvSet \
+ --zone 2 \
  --no-wait
```
-#### VM3
-* Named **myVM3**.
-* In resource group **CreatePubLBQS-rg**.
-* Attached to network interface **myNicVM3**.
-* Virtual machine image **win2019datacenter**.
-* In **Zone 3**.
-
-```azurecli-interactive
- az vm create \
- --resource-group CreatePubLBQS-rg \
- --name myVM3 \
- --nics myNicVM3 \
- --image win2019datacenter \
- --admin-username azureuser \
- --availability-set myAvSet \
- --no-wait
-```
-It may take a few minutes for the VMs to deploy.
+It may take a few minutes for the VMs to deploy. You can continue with the next steps while the deployment completes.
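Because the VMs were created with `--no-wait`, you can check on them later by polling the provisioning state — a sketch:

```azurecli
# Show the provisioning state of the VMs in the resource group
az vm list \
  --resource-group CreatePubLBQS-rg \
  --query "[].{name:name, state:provisioningState}" \
  --output table
```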
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Create a public IP address - Basic
-
-To access your web app on the Internet, you need a public IP address for the load balancer.
-
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to:
+### Add virtual machines to load balancer backend pool
-* Create a standard zone redundant public IP address named **myPublicIP**.
-* In **CreatePubLBQS-rg**.
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add):
-```azurecli-interactive
- az network public-ip create \
- --resource-group CreatePubLBQS-rg \
- --name myPublicIP \
- --sku Basic
+```azurecli
+ array=(myNicVM1 myNicVM2)
+ for vmnic in "${array[@]}"
+ do
+ az network nic ip-config address-pool add \
+ --address-pool myBackendPool \
+ --ip-config-name ipconfig1 \
+ --nic-name $vmnic \
+ --resource-group CreatePubLBQS-rg \
+ --lb-name myLoadBalancer
+ done
```
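To verify the pool membership after the loop completes, you can list the NIC IP configurations now attached to the pool — a sketch, assuming the names used above:

```azurecli
# List the IP configuration IDs that are members of the backend pool
az network lb address-pool show \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myBackEndPool \
  --query 'backendIpConfigurations[].id' \
  --output tsv
```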
-## Create basic load balancer
+## Create NAT gateway
-This section details how you can create and configure the following components of the load balancer:
+To provide outbound internet access for resources in the backend pool, create a NAT gateway.
- * A frontend IP pool that receives the incoming network traffic on the load balancer.
- * A backend IP pool where the frontend pool sends the load balanced network traffic.
- * A health probe that determines health of the backend VM instances.
- * A load balancer rule that defines how traffic is distributed to the VMs.
+### Create public IP
-### Create the load balancer resource
-
-Create a public load balancer with [az network lb create](/cli/azure/network/lb#az_network_lb_create):
-
-* Named **myLoadBalancer**.
-* A frontend pool named **myFrontEnd**.
-* A backend pool named **myBackEndPool**.
-* Associated with the public IP address **myPublicIP** that you created in the preceding step.
+Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a single IP for the outbound connectivity.
-```azurecli-interactive
- az network lb create \
+```azurecli
+ az network public-ip create \
--resource-group CreatePubLBQS-rg \
- --name myLoadBalancer \
- --sku Basic \
- --public-ip-address myPublicIP \
- --frontend-ip-name myFrontEnd \
- --backend-pool-name myBackEndPool
+ --name myNATgatewayIP \
+ --sku Standard \
+ --zone 1 2 3
```
-### Create the health probe
-
-A health probe checks all virtual machine instances to ensure they can send network traffic.
-
-A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az_network_lb_probe_create):
-
-* Monitors the health of the virtual machines.
-* Named **myHealthProbe**.
-* Protocol **TCP**.
-* Monitoring **Port 80**.
+To create a zonal public IP address in Zone 1:
-```azurecli-interactive
- az network lb probe create \
+```azurecli
+ az network public-ip create \
--resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer \
- --name myHealthProbe \
- --protocol tcp \
- --port 80
+ --name myNATgatewayIP \
+ --sku Standard \
+ --zone 1
```
-### Create the load balancer rule
+### Create NAT gateway resource
-A load balancer rule defines:
+Use [az network nat gateway create](/cli/azure/network/nat#az_network_nat_gateway_create) to create the NAT gateway resource. The public IP created in the previous step is associated with the NAT gateway.
-* Frontend IP configuration for the incoming traffic.
-* The backend IP pool to receive the traffic.
-* The required source and destination port.
-
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create):
-
-* Named **myHTTPRule**
-* Listening on **Port 80** in the frontend pool **myFrontEnd**.
-* Sending load-balanced network traffic to the backend address pool **myBackEndPool** using **Port 80**.
-* Using health probe **myHealthProbe**.
-* Protocol **TCP**.
-* Idle timeout of **15 minutes**.
-
-```azurecli-interactive
- az network lb rule create \
+```azurecli
+ az network nat gateway create \
--resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer \
- --name myHTTPRule \
- --protocol tcp \
- --frontend-port 80 \
- --backend-port 80 \
- --frontend-ip-name myFrontEnd \
- --backend-pool-name myBackEndPool \
- --probe-name myHealthProbe \
- --idle-timeout 15
+ --name myNATgateway \
+ --public-ip-addresses myNATgatewayIP \
+ --idle-timeout 10
```
-### Add virtual machines to load balancer backend pool
-
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add):
+### Associate NAT gateway with subnet
-* In backend address pool **myBackEndPool**.
-* In resource group **CreatePubLBQS-rg**.
-* Associated with load balancer **myLoadBalancer**.
+Configure the source subnet in the virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update).
-```azurecli-interactive
- array=(myNicVM1 myNicVM2 myNicVM3)
- for vmnic in "${array[@]}"
- do
- az network nic ip-config address-pool add \
- --address-pool myBackendPool \
- --ip-config-name ipconfig1 \
- --nic-name $vmnic \
- --resource-group CreatePubLBQS-rg \
- --lb-name myLoadBalancer
- done
+```azurecli
+ az network vnet subnet update \
+ --resource-group CreatePubLBQS-rg \
+ --vnet-name myVNet \
+ --name myBackendSubnet \
+ --nat-gateway myNATgateway
```

## Install IIS

Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install IIS on the virtual machines and set the default website to the computer name.
-```azurecli-interactive
- array=(myVM1 myVM2 myVM3)
+```azurecli
+ array=(myVM1 myVM2)
 for vm in "${array[@]}"
 do
 az vm extension set \
Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to instal
 --resource-group CreatePubLBQS-rg \
 --settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
 done
```

## Test the load balancer
To get the public IP address of the load balancer, use [az network public-ip sho
Copy the public IP address, and then paste it into the address bar of your browser.
-```azurecli-interactive
+```azurecli
 az network public-ip show \
 --resource-group CreatePubLBQS-rg \
 --name myPublicIP \
Copy the public IP address, and then paste it into the address bar of your brows
When no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, load balancer, and all related resources.
-```azurecli-interactive
+```azurecli
 az group delete \
 --name CreatePubLBQS-rg
```

## Next steps
-In this quickstart
+In this quickstart:
+
+* You created a standard public load balancer
+
+* Attached two virtual machines
+
+* Configured the load balancer traffic rule and health probe
-* You created a standard or public load balancer
-* Attached virtual machines.
-* Configured the load balancer traffic rule and health probe.
-* Tested the load balancer.
+* Tested the load balancer
To learn more about Azure Load Balancer, continue to:

> [!div class="nextstepaction"]
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
Title: Scheduling recurring tasks and workflows
-description: An overview about scheduling recurring automated tasks, processes, and workflows with Azure Logic Apps.
+ Title: Schedules for recurring triggers in workflows
+description: An overview about scheduling recurring automated workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 01/24/2022 Last updated : 03/17/2022
-# Schedule and run recurring automated tasks, processes, and workflows with Azure Logic Apps
+# Schedules for recurring triggers in Azure Logic Apps workflows
-Logic Apps helps you create and run automated recurring tasks and processes on a schedule. By creating a logic app workflow that starts with a built-in Recurrence trigger or Sliding Window trigger, which are Schedule-type triggers, you can run tasks immediately, at a later time, or on a recurring interval. You can call services inside and outside Azure, such as HTTP or HTTPS endpoints, post messages to Azure services such as Azure Storage and Azure Service Bus, or get files uploaded to a file share. With the Recurrence trigger, you can also set up complex schedules and advanced recurrences for running tasks. To learn more about the built-in Schedule triggers and actions, see [Schedule triggers](#schedule-triggers) and [Schedule actions](#schedule-actions).
+Azure Logic Apps helps you create and run automated recurring workflows on a schedule. By creating a logic app workflow that starts with a built-in Recurrence trigger or Sliding Window trigger, which are Schedule-type triggers, you can run tasks immediately, at a later time, or on a recurring interval. You can call services inside and outside Azure, such as HTTP or HTTPS endpoints, post messages to Azure services such as Azure Storage and Azure Service Bus, or get files uploaded to a file share. With the Recurrence trigger, you can also set up complex schedules and advanced recurrences for running tasks. To learn more about the built-in Schedule triggers and actions, see [Schedule triggers](#schedule-triggers) and [Schedule actions](#schedule-actions).
> [!NOTE]
> You can schedule and run recurring workloads without creating a separate logic app for each scheduled job and running into the [limit on workflows per region and subscription](../logic-apps/logic-apps-limits-and-config.md#definition-limits). Instead, you can use the logic app pattern that's created by the [Azure QuickStart template: Logic Apps job scheduler](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logicapps-jobscheduler/).
So, no matter how far in the past you specify the start time, for example, 2017-
## Recurrence for daylight saving time and standard time
-Recurring built-in triggers honor the schedule that you set, including any time zone that you specify. If you don't select a time zone, daylight saving time (DST) might affect when triggers run, for example, shifting the start time one hour forward when DST starts and one hour backward when DST ends. When scheduling jobs, Azure Logic Apps puts the message for processing into the queue and specifies when that message becomes available, based on the UTC time when the last job ran and the UTC time when the next job is scheduled to run.
+To schedule jobs, Azure Logic Apps puts the message for processing into the queue and specifies when that message becomes available, based on the UTC time when the last job ran and the UTC time when the next job is scheduled to run. If you specify a start time with your recurrence, *make sure that you select a time zone* so that your logic app workflow runs at the specified start time. That way, the UTC time for your logic app also shifts to counter the seasonal time change. Recurring triggers honor the schedule that you set, including any time zone that you specify.
-To avoid this shift so that your logic app runs at your specified start time, make sure that you select a time zone. That way, the UTC time for your logic app also shifts to counter the seasonal time change.
+Otherwise, if you don't select a time zone, daylight saving time (DST) events might affect when triggers run. For example, the start time shifts one hour forward when DST starts and one hour backward when DST ends.
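The shift described above can be sketched with standard Python tooling. This is an illustrative example only (not Logic Apps code); it shows how the same 9:00 AM Eastern wall-clock time maps to different UTC instants across the US DST change on March 13, 2022:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# A 9:00 AM job defined in Eastern time corresponds to different UTC
# instants before and after the DST transition, because the zone's
# UTC offset changes from -05:00 (EST) to -04:00 (EDT).
eastern = ZoneInfo("America/New_York")
before_dst = datetime(2022, 3, 12, 9, 0, tzinfo=eastern)  # EST, UTC-05:00
after_dst = datetime(2022, 3, 14, 9, 0, tzinfo=eastern)   # EDT, UTC-04:00

print(before_dst.astimezone(ZoneInfo("UTC")).hour)  # 14
print(after_dst.astimezone(ZoneInfo("UTC")).hour)   # 13
```

When a time zone is selected, the underlying UTC schedule shifts in the same way, so the local start time stays fixed.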
<a name="dst-window"></a>
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Previously updated : 10/21/2021 Last updated : 03/15/2022
These settings allow you to review and control your experiment runs and its chil
|**Get guardrails**| ✓|✓|
|**Pause & resume runs**| ✓| |
-## When to use AutoML: classification, regression, forecasting & computer vision
+## When to use AutoML: classification, regression, forecasting, computer vision & NLP
Apply automated ML when you want Azure Machine Learning to train and tune a model for you using the target metric you specify. Automated ML democratizes the machine learning model development process, and empowers its users, no matter their data science expertise, to identify an end-to-end machine learning pipeline for any problem.
-Data scientists, analysts, and developers across industries can use automated ML to:
+ML professionals and developers across industries can use automated ML to:
+ Implement ML solutions without extensive programming knowledge
+ Save time and resources
+ Leverage data science best practices
See examples of regression and automated machine learning for predictions in the
> [!IMPORTANT] > This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Automated ML for images (preview) adds support for computer vision tasks, which allows you to easily generate models trained on image data for scenarios like image classification and object detection.
+Support for computer vision tasks allows you to easily generate models trained on image data for scenarios like image classification and object detection.
With this capability you can:
Multi-label image classification | Tasks where an image could have one or more l
Object detection| Tasks to identify objects in an image and locate each object with a bounding box e.g. locate all dogs and cats in an image and draw a bounding box around each.
Instance segmentation | Tasks to identify objects in an image at the pixel level, drawing a polygon around each object in the image.
+<a name="nlp"></a>
+
+### Natural language processing: NLP (preview)
++
+Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+
+The NLP capability supports:
+
+* End-to-end deep neural network NLP training with the latest pre-trained BERT models
+* Seamless integration with [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md)
+* Use labeled data for generating NLP models
+* Multi-lingual support with 104 languages
+* Distributed training with Horovod
+
+Learn how to [set up AutoML training for NLP models](how-to-auto-train-nlp-models.md).
+
## How automated ML works

During training, Azure Machine Learning creates a number of pipelines in parallel that try different algorithms and parameters for you. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to "fit" your data. It will stop once it hits the exit criteria defined in the experiment.
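As a toy illustration of this iterate-and-score loop (deliberately simplified; the real service trains full pipelines in parallel and supports several exit criteria):

```python
import random

def toy_automl(candidate_pipelines, score_fn, target_score, max_iterations=10):
    """Toy sketch of the iterate-and-score loop: try candidates, keep the
    best-scoring one, and stop when an exit criterion is hit. Not the
    actual service logic."""
    best_pipeline, best_score = None, float("-inf")
    for _ in range(max_iterations):
        pipeline = random.choice(candidate_pipelines)
        score = score_fn(pipeline)
        if score > best_score:
            best_pipeline, best_score = pipeline, score
        if best_score >= target_score:  # exit criterion reached
            break
    return best_pipeline, best_score
```

For example, `toy_automl(["logistic", "forest"], {"logistic": 0.7, "forest": 0.9}.get, target_score=0.9)` keeps sampling until the forest candidate's score meets the target or the iteration budget runs out.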
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
+
+ Title: Set up AutoML for NLP
+
+description: Set up Azure Machine Learning automated ML to train natural language processing models with the Azure Machine Learning Python SDK.
+++++++ Last updated : 03/15/2022+
+# Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
++
+# Set up AutoML to train a natural language processing model with Python (preview)
++
+In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+
+Automated ML supports NLP, which allows ML professionals and data scientists to bring their own text data and build custom models for tasks such as multi-class text classification, multi-label text classification, and named entity recognition (NER).
+
+You can seamlessly integrate with the [Azure Machine Learning data labeling](how-to-create-text-labeling-projects.md) capability to label your text data or bring your existing labeled data. Automated ML provides the option to use distributed training on multi-GPU compute clusters for faster model training. The resulting model can be operationalized at scale by leveraging Azure ML's MLOps capabilities.
+
+## Prerequisites
+
+* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details about the GPU instances provided by Azure.
+
+ > [!Warning]
+ > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-English datasets and longer-range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
+
+* The Azure Machine Learning Python SDK installed.
+
+ To install the SDK, you can either:
+ * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md) for more information.
+
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+
+ [!INCLUDE [automl-sdk-version](../../includes/machine-learning-automl-sdk-version.md)]
+
+ > [!WARNING]
+ > Python 3.8 is not compatible with `automl`.
+
+* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
+
+## Select your NLP task
+
+Determine what NLP task you want to accomplish. Currently, automated ML supports the following deep neural network NLP tasks.
+
+Task |AutoMLConfig syntax| Description
+-|-|
+Multi-class text classification | `task = 'text-classification'`| There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. <br> <br> For example, classifying a movie script as "Comedy" or "Romantic".
+Multi-label text classification | `task = 'text-classification-multilabel'`| There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample.<br> <br> For example, classifying a movie script as "Comedy", "Romantic", or "Comedy and Romantic".
+Named Entity Recognition (NER)| `task = 'text-ner'`| There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. <br> <br> For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents.
+
+## Preparing data
+
+For NLP experiments in automated ML, you can bring an Azure Machine Learning dataset with `.csv` format for multi-class and multi-label classification tasks. For NER tasks, two-column `.txt` files that use a space as the separator and adhere to the CoNLL format are supported. The following sections provide additional detail for the data format accepted for each task.
+
+### Multi-class
+
+For multi-class classification, the dataset can contain several text columns and exactly one label column. The following example has only one text column.
+
+```python
+
+text,labels
+"I love watching Chicago Bulls games.","NBA"
+"Tom Brady is a great player.","NFL"
+"There is a game between Yankees and Orioles tonight","NFL"
+"Stephen Curry made the most number of 3-Pointers","NBA"
+```
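As a quick sanity check before registering the data as an Azure Machine Learning dataset, the sample above can be loaded with pandas (an illustrative sketch; the column names come from the example):

```python
import io

import pandas as pd

# Load the sample multi-class CSV shown above into a dataframe and
# inspect the label column.
csv_text = '''text,labels
"I love watching Chicago Bulls games.","NBA"
"Tom Brady is a great player.","NFL"
"There is a game between Yankees and Orioles tonight","NFL"
"Stephen Curry made the most number of 3-Pointers","NBA"
'''

df = pd.read_csv(io.StringIO(csv_text))
print(sorted(df["labels"].unique()))  # ['NBA', 'NFL']
```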
+
+### Multi-label
+
+For multi-label classification, the dataset columns are the same as for multi-class; however, there are special format requirements for data in the label column. The two accepted formats and examples are in the following table.
+
+|Label column format options |Multiple labels| One label | No labels
+||||
+|Plain text|`"label1, label2, label3"`| `"label1"`| `""`
+|Python list with quotes| `"['label1','label2','label3']"`| `"['label1']"`|`"[]"`
+
+> [!IMPORTANT]
+> Different parsers are used to read labels for these formats. If you are using the plain text format, use only alphabetical characters, numerical characters, and `'_'` in your labels. All other characters are recognized as the separator of labels.
+>
+> For example, if your label is `"cs.AI"`, it's read as `"cs"` and `"AI"`. Whereas with the Python list format, the label would be `"['cs.AI']"`, which is read as `"cs.AI"` .
++
+Example data for multi-label in plain text format.
+
+```python
+text,labels
+"I love watching Chicago Bulls games.","basketball"
+"The four most popular leagues are NFL, MLB, NBA and NHL","football,baseball,basketball,hockey"
+"I like drinking beer.",""
+```
+
+Example data for multi-label in Python list with quotes format.
+
+``` python
+text,labels
+"I love watching Chicago Bulls games.","['basketball']"
+"The four most popular leagues are NFL, MLB, NBA and NHL","['football','baseball','basketball','hockey']"
+"I like drinking beer.","[]"
+```
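The parsing difference called out in the note above can be sketched as follows. This is an illustration of the documented behavior, not the parser automated ML actually uses:

```python
import ast
import re

def parse_plain_text(field: str) -> list:
    # In the plain text format, any character other than letters,
    # digits, and '_' acts as a label separator.
    return [part for part in re.split(r"[^A-Za-z0-9_]+", field) if part]

def parse_python_list(field: str) -> list:
    # In the quoted Python list format, the field is a literal list,
    # so labels like 'cs.AI' survive intact.
    return ast.literal_eval(field)

print(parse_plain_text("cs.AI"))         # ['cs', 'AI']
print(parse_python_list("['cs.AI']"))    # ['cs.AI']
```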
+
+### Named entity recognition (NER)
+
+Unlike multi-class or multi-label, which take `.csv` format datasets, named entity recognition requires [CoNLL](https://www.clips.uantwerpen.be/conll2003/ner/) format. The file must contain exactly two columns and, in each row, the token and the label are separated by a single space.
+
+For example,
+
+``` python
+Hudson B-loc
+Square I-loc
+is O
+a O
+famous O
+place O
+in O
+New B-loc
+York I-loc
+City I-loc
+
+Stephen B-per
+Curry I-per
+got O
+three O
+championship O
+rings O
+```
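A minimal reader for this format might look like the following sketch (not the loader automated ML uses), grouping token/label lines into samples at empty lines:

```python
def read_conll(lines):
    """Group space-separated token/label lines into (tokens, labels)
    samples, splitting on empty lines. Illustrative sketch only."""
    samples, tokens, labels = [], [], []
    for line in lines:
        if line.strip() == "":
            if tokens:
                samples.append((tokens, labels))
                tokens, labels = [], []
            continue
        token, label = line.split(" ")
        tokens.append(token)
        labels.append(label)
    if tokens:
        samples.append((tokens, labels))
    return samples

sample_lines = ["Hudson B-loc", "Square I-loc", "is O", "",
                "Stephen B-per", "Curry I-per"]
print(len(read_conll(sample_lines)))  # 2
```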
+
+### Data validation
+
+Before training, automated ML applies data validation checks on the input data to ensure that the data can be preprocessed correctly. If any of these checks fail, the run fails with the relevant error message. The following are the requirements to pass data validation checks for each task.
+
+> [!Note]
+> Some data validation checks are applicable to both the training and the validation set, whereas others are applicable only to the training set. If the test dataset doesn't pass data validation, automated ML can't capture it, and model inference might fail or model performance might decline.
+
+Task | Data validation check
+|
+All tasks | At least 50 training samples are required
+Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**)
+Multi-class only | None
+Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You should not have both label `1` and label `'1'`
+NER only | - The file should not start with an empty line <br> - Each line must be an empty line, or follow format `{token} {label}`, where there is exactly one space between the token and the label and no white space after the label <br> - All labels must start with `I-`, `B-`, or be exactly `O`. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file
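The NER file checks above can be approximated with a short validator. This is a sketch of the documented rules, not the service's actual validation code:

```python
import re

# One token, exactly one space, then a label that is 'O' or starts
# with 'B-'/'I-', with no trailing whitespace.
TOKEN_LINE = re.compile(r"^\S+ (O|[BI]-\S+)$")

def validate_ner_lines(lines):
    """Approximate the documented NER data validation checks."""
    if not lines or lines[0] == "":
        raise ValueError("file must not start with an empty line")
    if lines[-1] != "":
        raise ValueError("file must end with exactly one empty line")
    previous_empty = False
    for line in lines:
        if line == "":
            if previous_empty:
                raise ValueError("at most one empty line between samples")
            previous_empty = True
        else:
            previous_empty = False
            if not TOKEN_LINE.match(line):
                raise ValueError(f"line does not follow '{{token}} {{label}}': {line!r}")

validate_ner_lines(["Hudson B-loc", "Square I-loc", "is O", ""])  # passes
```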
+
+## Configure experiment
+
+Automated ML's NLP capability is triggered through `AutoMLConfig`, which is the same workflow for submitting automated ML experiments for classification, regression and forecasting tasks. You would set most of the parameters as you would for those experiments, such as `task`, `compute_target` and data inputs.
+
+However, key differences include:
+* You can ignore `primary_metric`, as it is only for reporting purposes. Currently, automated ML only trains one model per run for NLP and there is no model selection.
+* The `label_column_name` parameter is only required for multi-class and multi-label text classification tasks.
++
+```python
+automl_settings = {
+ "verbosity": logging.INFO,
+}
+
+automl_config = AutoMLConfig(
+ task="text-classification",
+ debug_log="automl_errors.log",
+ compute_target=compute_target,
+ training_data=train_dataset,
+ validation_data=val_dataset,
+ label_column_name=target_column_name,
+ **automl_settings
+)
+```
+
+### Language settings
+
+As part of the NLP functionality, automated ML supports 104 languages, leveraging language-specific and multilingual pre-trained text DNN models, such as the BERT family of models. Currently, language selection defaults to English.
+
+ The following table summarizes what model is applied based on task type and language. See the full list of [supported languages and their codes](/python/api/azureml-automl-core/azureml.automl.core.constants.textdnnlanguages#azureml-automl-core-constants-textdnnlanguages-supported).
+
+ Task type |Syntax for `dataset_language` | Text model algorithm
+-|-|
+Multi-label text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[uncased](https://huggingface.co/bert-base-uncased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Multi-class text classification| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
+Named entity recognition (NER)| `'eng'` <br> `'deu'` <br> `'mul'`| English&nbsp;BERT&nbsp;[cased](https://huggingface.co/bert-base-cased) <br> [German BERT](https://huggingface.co/bert-base-german-cased)<br> [Multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) <br><br>For all other languages, automated ML applies multilingual BERT
++
+You can specify your dataset language in your `FeaturizationConfig`. BERT is also used in the featurization process of automated ML experiment training. Learn more about [BERT integration and featurization in automated ML](how-to-configure-auto-features.md#bert-integration-in-automated-ml).
+
+```python
+from azureml.automl.core.featurization import FeaturizationConfig
+
+featurization_config = FeaturizationConfig(dataset_language='{your language code}')
+automl_config = AutoMLConfig(featurization=featurization_config)
+```
+
+## Distributed training
+
+You can also run your NLP experiments with distributed training on an Azure ML compute cluster. This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup.
+
+```python
+max_concurrent_iterations = number_of_vms
+enable_distributed_dnn_training = True
+```
+
+Doing so schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The maximum number of virtual machines allowed is 32. The training is scheduled with a number of virtual machines that is a power of two.
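The power-of-two scheduling can be illustrated with a small helper. This is a hypothetical interpretation of the documented behavior (largest power of two within the requested count, capped at 32), not the service's code:

```python
def scheduled_vm_count(max_concurrent_iterations: int) -> int:
    """Largest power of two no greater than the requested VM count,
    capped at the documented maximum of 32. Illustrative only."""
    capped = min(max_concurrent_iterations, 32)
    power = 1
    while power * 2 <= capped:
        power *= 2
    return power

print(scheduled_vm_count(6))   # 4
print(scheduled_vm_count(32))  # 32
```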
+
+## Example notebooks
+
+See the sample notebooks for detailed code examples for each NLP task.
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+* [Multi-label text classification](
+https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+
+## Next steps
+ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
++ [Troubleshoot automated ML experiments](how-to-troubleshoot-auto-ml.md).
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
If the underlying model does not support the `predict_proba()` function or the f
## BERT integration in automated ML
-[BERT](https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/ba-p/1194657) is used in the featurization layer of AutoML. In this layer, if a column contains free text or other types of data like timestamps or simple numbers, then featurization is applied accordingly.
+[BERT](https://techcommunity.microsoft.com/t5/azure-ai/how-bert-is-integrated-into-azure-automated-machine-learning/ba-p/1194657) is used in the featurization layer of automated ML. In this layer, if a column contains free text or other types of data like timestamps or simple numbers, then featurization is applied accordingly.
For BERT, the model is fine-tuned and trained utilizing the user-provided labels. From here, document embeddings are output as features alongside others, like timestamp-based features, day of week.
+Learn how to [set up natural language processing (NLP) experiments that also use BERT with automated ML](how-to-auto-train-nlp-models.md).
### Steps to invoke BERT In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use a GPU compute (`vm_size = "STANDARD_NC6"` or a higher GPU). If a CPU compute is used, then instead of BERT, AutoML enables the BiLSTM DNN featurizer.
-AutoML takes the following steps for BERT.
+Automated ML takes the following steps for BERT.
1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
For this article you need,
## Select your experiment type
-Before you begin your experiment, you should determine the kind of machine learning problem you are solving. Automated machine learning supports task types of `classification`, `regression`, and `forecasting`. Learn more about [task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting--computer-vision).
+Before you begin your experiment, you should determine the kind of machine learning problem you are solving. Automated machine learning supports task types of `classification`, `regression`, and `forecasting`. Learn more about [task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp).
>[!NOTE]
-> Support for computer vision tasks: image classification (multi-class and multi-label), object detection, and instance segmentation is available in public preview. [Learn more about computer vision tasks in automated ML](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting--computer-vision).<br><br> This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Support for computer vision tasks: image classification (multi-class and multi-label), object detection, and instance segmentation is available in public preview. [Learn more about computer vision tasks in automated ML](concept-automated-ml.md#computer-vision-preview).
+>
+>Support for natural language processing (NLP) tasks: image classification (multi-class and multi-label) and named entity recognition is available in public preview. [Learn more about NLP tasks in automated ML](concept-automated-ml.md#nlp).
+>
+> These preview capabilities are provided without a service-level agreement. Certain features might not be supported or might have constrained functionality. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The following code uses the `task` parameter in the `AutoMLConfig` constructor to specify the experiment type as `classification`.
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
The `ml` CLI extension (sometimes called 'CLI v2') for Azure Machine Learning se
You can increase the security of CLI communications with Azure Resource Manager by using Azure Private Link. The following links provide information on using a Private Link for managing Azure resources: 1. [Secure your Azure Machine Learning workspace inside a virtual network using a private endpoint](how-to-configure-private-link.md).
-2. [Create a Private Link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
-3. [Create a private endpoint](/azure/azure-resource-manager/management/create-private-link-access-portal#create-private-endpoint) for the Private Link created in the previous step.
+2. [Create a Private Link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+3. [Create a private endpoint](../azure-resource-manager/management/create-private-link-access-portal.md#create-private-endpoint) for the Private Link created in the previous step.
> [!IMPORTANT]
-> To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
+> To configure the private link for Azure Resource Manager, you must be the _subscription owner_ for the Azure subscription, and an _owner_ or _contributor_ of the root management group. For more information, see [Create a private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
## Next steps - [Train models using CLI (v2)](how-to-train-cli.md) - [Set up the Visual Studio Code Azure Machine Learning extension](how-to-setup-vs-code.md) - [Train an image classification TensorFlow model using the Azure Machine Learning Visual Studio Code extension](tutorial-train-deploy-image-classification-model-vscode.md)-- [Explore Azure Machine Learning with examples](samples-notebooks.md)
+- [Explore Azure Machine Learning with examples](samples-notebooks.md)
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
* When uploading a file through the notebook's file explorer, you are limited to files smaller than 5 TB. If you need to upload a larger file, we recommend that you use one of the following methods:
- * Use the SDK to upload the data to a datastore. For more information, see the [Upload the data](/azure/machine-learning/tutorial-1st-experiment-bring-data#upload) section of the tutorial.
+ * Use the SDK to upload the data to a datastore. For more information, see the [Upload the data](./tutorial-1st-experiment-bring-data.md#upload) section of the tutorial.
* Use [Azure Data Factory](how-to-data-ingest-adf.md) to create a data ingestion pipeline. ## Next steps * [Run your first experiment](tutorial-1st-experiment-sdk-train.md) * [Backup your file storage with snapshots](../storage/files/storage-snapshots-files.md)
-* [Working in secure environments](./how-to-secure-training-vnet.md#compute-cluster)
+* [Working in secure environments](./how-to-secure-training-vnet.md#compute-cluster)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
> [!IMPORTANT] > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked, deletion of the compute cluster/instance will fail. The load balancer cannot be deleted until the compute cluster/instance is deleted. Also ensure there is no Azure Policy assignment that prohibits creation of network security groups.
-* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
+* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. You can't use a compute cluster with the no public IP address configuration. For more information, see [Enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
When you enable **No public IP**, your compute cluster doesn't use a public IP f
A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877**. > [!IMPORTANT]
-> When creating a compute instance with no public IP, the managed identity for your workspace must be assigned the __Owner__ role on the virtual network. For more information on assigning roles, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
+> When creating a compute instance with no public IP, the managed identity for your workspace must be assigned the __Owner__ role on the virtual network. For more information on assigning roles, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
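The two subnet parameters named above can be set with the Azure CLI. A sketch with hypothetical resource names follows; the command is only assembled and echoed here, since running it requires an authenticated `az login` session:

```shell
# Hypothetical resource group, VNet, and subnet names.
# Uncomment the eval line to actually apply the change.
cmd="az network vnet subnet update \
  --resource-group myRg \
  --vnet-name myVnet \
  --name mySubnet \
  --disable-private-endpoint-network-policies true \
  --disable-private-link-service-network-policies true"
echo "$cmd"
# eval "$cmd"
```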
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 03/01/2022 Last updated : 03/09/2022
When ACR is behind a virtual network, Azure Machine Learning can't use it to d
> The compute cluster used to build Docker images needs to be able to access the package repositories that are used to train and deploy your models. You may need to add network security rules that allow access to public repos, [use private Python packages](how-to-use-private-python-packages.md), or use [custom Docker images](how-to-train-with-custom-image.md) that already include the packages. > [!WARNING]
-> If your Azure Container Registry uses a private endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. To use a managed identity with a compute cluster, use a service endpoint with the Azure Container Registry for the workspace.
+> If your Azure Container Registry uses a private endpoint or service endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster.
### Azure Monitor > [!WARNING]
-> Azure Monitor supports using Azure Private Link to connect to a VNet. However, you must use the open Private Link mode in Azure Monitor. For more information, see [Private Link access modes: Private only vs. Open](/azure/azure-monitor/logs/private-link-security#private-link-access-modes-private-only-vs-open).
+> Azure Monitor supports using Azure Private Link to connect to a VNet. However, you must use the open Private Link mode in Azure Monitor. For more information, see [Private Link access modes: Private only vs. Open](../azure-monitor/logs/private-link-security.md#private-link-access-modes-private-only-vs-open).
## Required public internet access
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
By default the logs are pulled from the inference server. Logs include the conso
You can also get logs from the storage initializer container by passing `--container storage-initializer`. These logs contain information on whether code and model data were successfully downloaded to the container.
-Add `--help` and/or `--debug` to commands to see more information. Include the `x-ms-client-request-id` header to help with troubleshooting.
+Add `--help` and/or `--debug` to commands to see more information.
+
+## Request tracing
+
+There are three supported tracing headers:
+
+- `x-request-id` is reserved for server tracing. We override this header to ensure it's a valid GUID.
+
+ > [!Note]
+ > When you create a support ticket for a failed request, attach the failed request ID to expedite investigation.
+
+- `x-ms-request-id` and `x-ms-client-request-id` are available for client tracing scenarios. We sanitize these headers to remove non-alphanumeric symbols. These headers are truncated to 72 characters.
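The sanitization and GUID rules described above can be sketched in a few lines. The service's actual implementation isn't published, so treat this as an illustrative approximation of the documented behavior:

```python
import re
import uuid

def sanitize_client_header(value: str, max_len: int = 72) -> str:
    """Drop non-alphanumeric characters, then truncate to max_len,
    mirroring the documented handling of x-ms-request-id and
    x-ms-client-request-id."""
    return re.sub(r"[^A-Za-z0-9]", "", value)[:max_len]

def is_valid_request_id(value: str) -> bool:
    """x-request-id must parse as a GUID, or the server overrides it."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

print(sanitize_client_header("trace/abc-123!"))  # prints "traceabc123"
print(is_valid_request_id("not-a-guid"))         # prints False
```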
## Common deployment errors
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Next**.
-1. On the **Task type and settings** form, select the task type: classification, regression, or forecasting. See [supported task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting--computer-vision) for more information.
+1. On the **Task type and settings** form, select the task type: classification, regression, or forecasting. See [supported task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp) for more information.
1. For **classification**, you can also enable deep learning.
managed-instance-apache-cassandra Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/faq.md
The [Prometheus](https://prometheus.io/docs/introduction/overview/) server is ho
### Does Azure Managed Instance for Apache Cassandra provide full backups?
-Yes, it provides full backups to Azure Storage and restores to a new cluster
+Yes, it provides full backups to Azure Storage and restores to a new cluster. For more information, see [here](management-operations.md#backup-and-restore).
### How can I migrate data from my existing Apache Cassandra cluster to Azure Managed Instance for Apache Cassandra?
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There is no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. > [!WARNING]
-> Backups are restored to new clusters only. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
+> Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
## Security
Azure Managed Instance for Apache Cassandra provides many built-in explicit secu
* Active virus scanning. * Secure coding practices.
+For more information on security features, see our article [here](security.md).
+ ## Hybrid support When a [hybrid](configure-hybrid-cluster.md) cluster is configured, automated reaper operations running in the service will benefit the whole cluster. This includes data centers that are not provisioned by the service. Outside this, it is your responsibility to maintain your on-premises or externally hosted data center.
managed-instance-apache-cassandra Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/materialized-views.md
Like most NoSQL stores, Apache Cassandra is not designed to have a normalized da
- BATCH guarantees that all statements in the batch are committed or none. - All the statements have the same quorum and commit semantics.
-If your workload truly needs a normalized data model, consider a scalable relational store like Azure's [Hyperscale PostgreSQL](/azure/postgresql/hyperscale/).
+If your workload truly needs a normalized data model, consider a scalable relational store like Azure's [Hyperscale PostgreSQL](../postgresql/hyperscale/index.yml).
## How to enable materialized views You need to set `enable_materialized_views: true` in the `rawUserConfig` field of your Cassandra data center. To do so, use the following Azure CLI command to update each data center in your cluster:
az managed-cassandra datacenter update \
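The command above is truncated in this digest; a fuller sketch follows, with hypothetical resource names and assuming the `--base64-encoded-cassandra-yaml-fragment` parameter of the `az managed-cassandra` extension. The command is only assembled and echoed here, since running it requires an authenticated Azure CLI session:

```shell
# Encode the YAML fragment that enables materialized views, then pass it
# to the datacenter update command (hypothetical names throughout).
fragment=$(printf '%s' 'enable_materialized_views: true' | base64)
cmd="az managed-cassandra datacenter update \
  --resource-group myRg \
  --cluster-name myCluster \
  --data-center-name dc1 \
  --base64-encoded-cassandra-yaml-fragment $fragment"
echo "$cmd"
# eval "$cmd"
```

Repeat the command for each data center in the cluster, as the excerpt notes.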
* [Create a managed instance cluster from the Azure portal](create-cluster-portal.md) * [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
-* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
+* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
marketplace Azure App Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-plans.md
Title: Create plans for an Azure application offer
-description: Create plans for an Azure application offer in Partner Center (Azure Marketplace).
+description: Create plans for an Azure application offer in Partner Center | Azure Marketplace.
Previously updated : 06/01/2021 Last updated : 03/17/2022 # Create plans for an Azure application offer
Offers sold through the Microsoft commercial marketplace must have at least one
1. Near the top of the **Plan overview** tab, select **+ Create new plan**. 1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. This ID will be visible to customers in the product URL. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**.
-1. In the **Plan name** box, enter a unique name for this plan. Customers will see this name when deciding which plan to select within your offer. Use a maximum of 50 characters.
+1. In the **Plan name** box, enter a unique name for this plan. Customers will see this name when deciding which plan to select within your offer. Use a maximum of 2,000 characters.
1. Select **Create**. ## Define the plan setup
Select **Save draft** before continuing to the next tab: Plan listing.
The **Plan listing** tab is where you configure listing details of the plan. This tab displays specific information that shows the difference between plans in the same offer. You can define the plan name, summary, and description as you want them to appear in the commercial marketplace.
-1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan and is limited to 100 characters.
+1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan and is limited to 200 characters.
1. In the **Plan summary** box, provide a short summary of your plan (not the offer). This summary is limited to 100 characters.
-1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. Don't describe the offer, just the plan. This description may contain up to 2,000 characters.
+1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. Don't describe the offer, just the plan. This description may contain up to 3,000 characters.
1. Select **Save draft** before continuing. ## Next steps
marketplace Create Consulting Service Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-offer-listing.md
Previously updated : 11/30/2021 Last updated : 03/17/2022 # Configure consulting service offer listing details
On the **Offer listing** page, provide the information described below. To learn
## Offer details
-1. The **Name** box is pre-filled with the name you entered earlier in the **New offer** dialog box, but you can change it at any time. This name will appear as the title of your offer listing on the online store.
+1. The **Name** box is pre-filled with the name you entered earlier in the **New offer** dialog box, but you can change it at any time. This name will appear as the title of your offer listing on the online store.
> [!IMPORTANT] > The offer name must be in the format *Name: Duration + type*. For more information, see [offer listing details](./plan-consulting-service-offer.md#offer-listing-details). 2. In the **Search results summary** box, describe the purpose or goal of your offer in 200 characters or less.
-3. In the **Description** field, describe your consulting service offer. You can use HTML tags to format your description. You can enter up to 2,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the offer descriptions](./supported-html-tags.md).
+3. In the **Description** field, describe your consulting service offer. You can use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the offer descriptions](./supported-html-tags.md).
4. You have the option to enter up to three **search keywords**. These keywords will help customers find your offer in the online store. You don't need to include the offer name and description. 5. Enter the expected duration of your consulting service in the **Duration** drop-down lists. The duration you select must match the duration you mentioned in the offer name.
marketplace Create New Saas Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-listing.md
Previously updated : 12/08/2021 Last updated : 03/16/2022 # Configure SaaS offer listing details
On the **Offer listing** page, under **Marketplace details**, complete the follo
1. The **Name** box is prefilled with the name you entered earlier in the **New offer** dialog box. You can change the name at any time. 1. In the **Search results summary** box, enter up to 50 characters of text. This summary is used in the marketplace listing search results.
-1. In the **Description** box, enter a description for your offer. This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 3,000 characters of text in this box, which includes HTML markup and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
+1. In the **Description** box, enter a description for your offer. This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, which includes HTML markup and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
1. In the **Getting started instructions** box, provide instructions to help customers connect to your SaaS offer. You can add up to 3,000 characters of text and links to more detailed online documentation. 1. (Optional) In the **Search keywords** boxes, enter up to three search keywords that customers can use to find your offer in the commercial marketplace. You don't need to include the offer **Name** and **Description**: that text is automatically included in search. 1. In the **Privacy policy link** box, enter a link (starting with https) to your organization's privacy policy. You're responsible to ensure your app complies with privacy laws and regulations, and for providing a valid privacy policy.
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
Previously updated : 10/15/2021 Last updated : 03/16/2022 # Create plans for a SaaS offer
Offers sold through the Microsoft commercial marketplace must have at least one
1. Near the top of the **Plan overview** tab, select **+ Create new plan**. 1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**.
-1. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 50 characters.
+1. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 200 characters.
1. Select **Create**. ## Define the plan listing
Offers sold through the Microsoft commercial marketplace must have at least one
On the **Plan listing** tab, you can define the plan name and description as you want them to appear in the commercial marketplace. 1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan.
-1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. This description may contain up to 500 characters.
+1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. This description may contain up to 3,000 characters.
1. Select **Save draft** before continuing to the next tab: **Pricing and availability**. ## Define markets, pricing, and availability
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
Previously updated : 12/03/2021 Last updated : 03/17/2022 # Plan a Microsoft Dynamics 365 offer
Here's an example of how offer information appears in Microsoft AppSource (any l
To help create your offer more easily, prepare these items ahead of time. All are required except where noted. -- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 50 characters.
+- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 200 characters.
- **Search results summary** – The purpose or function of your offer as a single sentence with no line breaks in 100 characters or less. This is used in the commercial marketplace listing(s) search results. - **Description** – This description displays in the commercial marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, any required disclosures, and a link to learn more. This text box has rich text editor controls to make your description more engaging. Optionally, use HTML tags for formatting. - **Search keywords** (optional) – Up to three search keywords that customers can use to find your offer. Don't include the offer **Name** and **Description**; that text is automatically included in search.
marketplace Marketplace Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-iot-edge.md
Previously updated : 04/30/2021 Last updated : 03/16/2022 # Plan an IoT Edge module offer
You can choose to provide your own terms and conditions, instead of the standard
To help create your offer more easily, prepare these items ahead of time. All are required except where noted. -- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 50 characters.
+- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 200 characters.
- **Search results summary** – The purpose or function of your offer as a single sentence with no line breaks in 100 characters or less. This is used in the commercial marketplace listing(s) search results. - **Short description** – Details of the purpose or function of the offer, written in plain text with no line breaks. This will appear on your offer's details page. - **Description** – This description displays in the commercial marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, any required disclosures, and a link to learn more. This text box has rich text editor controls to make your description more engaging. Optionally, use HTML tags for formatting.
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
The following example shows an offer listing in the Azure portal.
To help create your offer more easily, prepare some of these items ahead of time. The following items are required unless otherwise noted. -- **Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and must be limited to 50 characters.
+- **Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and must be limited to 200 characters.
- **Search results summary**: Describe the purpose or function of your offer as a single sentence with no line breaks in 100 characters or less. This summary is used in the commercial marketplace listing(s) search results. - **Description**: This description will be displayed in the commercial marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, any required disclosures, and a link to learn more.
- This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 3,000 characters of text in this box, including HTML markup. For additional tips, see [Write a great app description](/windows/uwp/publish/write-a-great-app-description).
+ This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, including HTML markup. For additional tips, see [Write a great app description](/windows/uwp/publish/write-a-great-app-description).
- **Getting Started instructions**: If you choose to sell your offer through Microsoft (transactable offer), this field is required. These instructions help customers connect to your SaaS offer. You can add up to 3,000 characters of text and links to more detailed online documentation. - **Search keywords** (optional): Provide up to three search keywords that customers can use to find your offer in the online stores. You don't need to include the offer **Name** and **Description**: that text is automatically included in search.
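A small pre-submission check against the character limits listed above can save a rejected draft. This sketch uses the limits as stated in the excerpt (name ≤ 200, search results summary ≤ 100, description ≤ 5,000):

```python
# Character limits from the offer-listing excerpt above.
LIMITS = {"name": 200, "summary": 100, "description": 5000}

def over_limit(listing: dict) -> list:
    """Return the names of fields whose values exceed their limit."""
    return [f for f, cap in LIMITS.items() if len(listing.get(f, "")) > cap]

print(over_limit({"name": "Contoso SaaS", "summary": "x" * 101}))  # prints ['summary']
```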
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
-| Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell it to their customers. When their partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](/azure/marketplace/isv-csp-reseller) and [the FAQs](/azure/marketplace/isv-csp-faq). | 2022-02-15 |
-| Analytics | We added a new [Customer Retention Dashboard](/azure/marketplace/customer-retention-dashboard) that provides vital insights into customer retention and engagement. See the [FAQ article](/azure/marketplace/analytics-faq). | 2022-02-15 |
-| Analytics | We added a Quality of Service (QoS) report query to the [List of system queries](/azure/marketplace/analytics-system-queries) used in the Create Report API. | 2022-01-27 |
+| Offers | An ISV can now specify time-bound margins for CSP partners to incentivize them to sell it to their customers. When their partner makes a sale to a customer, Microsoft will pay the ISV the wholesale price. See [ISV to CSP Partner private offers](./isv-csp-reseller.md) and [the FAQs](./isv-csp-faq.yml). | 2022-02-15 |
+| Analytics | We added a new [Customer Retention Dashboard](./customer-retention-dashboard.md) that provides vital insights into customer retention and engagement. See the [FAQ article](./analytics-faq.yml). | 2022-02-15 |
+| Analytics | We added a Quality of Service (QoS) report query to the [List of system queries](./analytics-system-queries.md) used in the Create Report API. | 2022-01-27 |
| Offers | Added a [Revenue Dashboard](revenue-dashboard.md) to Partner Center, including a revenue report, [sample queries](analytics-sample-queries.md#revenue-report-queries), and [FAQs](/azure/marketplace/analytics-faq#revenue) page. | 2021-12-08 | | Offers | Container and container apps offers can now use the Microsoft [Standard Contract](standard-contract.md). | 2021-11-02 | | Offers | Private plans for [SaaS offers](plan-saas-offer.md) are now available on AppSource. | 2021-10-06 |
Learn about important updates in the commercial marketplace program of Partner C
| Offers | Setup and maintenance of Power BI Visuals is migrating from the Office Store to the commercial marketplace this month. [This FAQ](power-bi-visual-faq.yml) provides a summary of improvements to the offer submission process. To start, see [Plan a Power BI visual offer](marketplace-power-bi-visual.md).| 2021-09-21 | | Offers | While [private plans](private-plans.md) were previously only available on the Azure portal, they are now also available on Microsoft AppSource. | 2021-09-10 | | Analytics | Publishers of Azure application offers can view offer deployment health in the Quality of service (QoS) reports. QoS helps publishers understand the reasons for offer deployment failures and provides actionable insights for their remediation. For details, see [Quality of service (QoS) dashboard](quality-of-service-dashboard.md). | 2021-09-07 |
-| Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](/azure/marketplace/marketplace-faq-publisher-guide) for all offers. | 2021-09-01 |
+| Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](./marketplace-faq-publisher-guide.yml) for all offers. | 2021-09-01 |
| ## Tax updates
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | - | - | | Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 |
-| Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](/azure/marketplace/analytics-faq), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 |
+| Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](./analytics-faq.yml), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 |
| Offers | Added a new article, [Troubleshooting Private Plans in the commercial marketplace](azure-private-plan-troubleshooting.md). | 2021-12-13 | | Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#licensing-options) offer types: <br><br> - Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps** <br> - Dynamics 365 for operations is now **Dynamics 365 Operations Apps** <br> - Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 | | Policy | We've created an [FAQ topic](/legal/marketplace/mpa-faq) to answer publisher questions about the Microsoft Publisher Agreement. | 2021-09-27 |
Learn about important updates in the commercial marketplace program of Partner C
| Offers | We moved the list of categories and industries from our [Marketing Best Practices](gtm-offer-listing-best-practices.md) topic to their [own page](marketplace-categories-industries.md). | 2021-08-20 | | Offers | The [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md) topic now includes a flowchart to help you determine the appropriate transactable offer type and pricing plan to sell your software in the commercial marketplace. | 2021-08-18 | | Policy | Updated [certification](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context) policy; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-08-06 |
-|
+|
media-services Encode Recommended On Premises Live Encoders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-recommended-on-premises-live-encoders.md
To play back content, both an audio and video stream must be present. Playback o
- Whenever possible, use a hardwired internet connection. - When you're determining bandwidth requirements, double the streaming bitrates. Although not mandatory, this simple rule helps to mitigate the impact of network congestion. - When using software-based encoders, close out any unnecessary programs.-- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable. If you change your encoder configuration, you need to reset [Live Events](https://docs.microsoft.com/rest/api/media/live-events/reset) and restart the live event in order for the change to take place. If you stop and start the live event without resetting it, the live event will preserve the previous configuration.
+- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable. If you change your encoder configuration, you need to reset [Live Events](/rest/api/media/live-events/reset) and restart the live event in order for the change to take place. If you stop and start the live event without resetting it, the live event will preserve the previous configuration.
- Always test and validate newer versions of encoder software for continued compatibility with Azure Media Services. Microsoft does not re-validate encoders on this list, and most validations are done by the software vendors directly as a "self-certification." - Ensure that you give yourself ample time to set up your event. For high-scale events, we recommend starting the setup an hour before your event. - Use the H.264 video and AAC-LC audio codec output.
To play back content, both an audio and video stream must be present. Playback o
> [!IMPORTANT] > Watch the physical condition of the machine (CPU / Memory / etc) as uploading fragments to cloud involves CPU and IO operations.
-> If you change any encoder configurations, reset [Live Events](https://docs.microsoft.com/rest/api/media/live-events/reset) the channels and the live event for the change to take place. If you stop and start the live event without resetting it, the live event will preserve the previous configuration.
+> If you change any encoder configurations, [reset](/rest/api/media/live-events/reset) and restart the live event for the change to take place. If you stop and start the live event without resetting it, the live event will preserve the previous configuration.
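The reset-then-restart sequence described above can be sketched with the Azure CLI's `az ams live-event` commands. The resource names below are hypothetical placeholders, and this is an illustrative sketch rather than the article's own example:

```shell
# Stop, reset, then restart the live event so changed encoder settings take effect.
# "myResourceGroup", "myMediaAccount", and "myLiveEvent" are placeholder names.
RG="myResourceGroup"
ACCOUNT="myMediaAccount"
EVENT="myLiveEvent"

az ams live-event stop  --resource-group "$RG" --account-name "$ACCOUNT" --name "$EVENT"
az ams live-event reset --resource-group "$RG" --account-name "$ACCOUNT" --name "$EVENT"
az ams live-event start --resource-group "$RG" --account-name "$ACCOUNT" --name "$EVENT"
```

Stopping and starting without the `reset` step preserves the previous configuration, which is why the reset is required.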
## See also
To play back content, both an audio and video stream must be present. Playback o
## Next steps
-[How to verify your encoder](encode-on-premises-encoder-partner.md)
+[How to verify your encoder](encode-on-premises-encoder-partner.md)
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/limits-quotas-constraints-reference.md
This article lists some of the most common Microsoft Azure Media Services limits
## Storage limits
-Azure Storage block blob limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-blob-storage-limits).
+Azure Storage block blob limits apply to storage accounts used with Media Services. See [Azure Blob Storage limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-blob-storage-limits).
This limit includes the total storage size of the files that you upload for encoding and the file sizes of the encoded files. The limit on the size of an individual file for encoding is separate. See [File size for encoding](#file-size-for-encoding-limit).
For resources that are not fixed, you may ask for the quotas to be raised, by op
## Next steps
-[Overview](media-services-overview.md)
+[Overview](media-services-overview.md)
media-services Video On Demand Simple Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/video-on-demand-simple-portal-quickstart.md
This article shows you how to do the basic steps for delivering a basic video on
- [Create a Media Services account](account-create-how-to.md). When you set up the Media Services account, a storage account, a user managed identity, and a default streaming endpoint will also be created. - One MP4 video to use for this exercise. - Create a GitHub account if you don't have one already, and stay logged in.-- Create an Azure [Static Web App](/azure/static-web-apps/get-started-portal?tabs=vanilla-javascript).
+- Create an Azure [Static Web App](../../static-web-apps/get-started-portal.md?tabs=vanilla-javascript).
> [!NOTE] > You will be switching between several browser tabs or windows during this process. The below steps assume that you have your browser set to open tabs. Keep them all open.
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-java-connection-pooling.md
Last updated 02/28/2018
[!INCLUDE[applies-to-mysql-single-server](includes/applies-to-mysql-single-server.md)]
-The below sample code illustrates connection pooling in java.
+The below sample code illustrates connection pooling in Java.
```java import java.sql.Connection;
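The Java sample above is truncated in this change listing. As a rough sketch of the pooling pattern it illustrates (not the article's actual code; `SimplePool` and `PoolDemo` are hypothetical names), a minimal pool can hand out pre-created objects from a `BlockingQueue` and take them back for reuse:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal pool: pre-creates a fixed number of objects and recycles them.
class SimplePool<T> {
    private final BlockingQueue<T> idle;

    SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());
        }
    }

    // Blocks until an object is free, so callers never exceed the pool size.
    T borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    // Hand the object back so another caller can reuse it.
    void release(T obj) {
        idle.offer(obj);
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(2, StringBuilder::new);
        StringBuilder a = pool.borrow();
        StringBuilder b = pool.borrow();
        pool.release(a);
        StringBuilder c = pool.borrow(); // reuses the released object
        System.out.println(a == c);      // prints "true"
    }
}
```

A production connection pool (for example, HikariCP) layers connection validation, timeouts, and eviction on top of this same borrow/release cycle.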
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
While the logs that Suricata produces contain valuable information about what's
#### Install Elasticsearch
-1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
+1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have Java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
1. Download the correct binary package for your system:
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
By connecting NSG flow logs with the Elastic Stack, we can create a Kibana dashb
#### Install Elasticsearch
-1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
+1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have Java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
2. Download the correct binary package for your system: ```bash
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/application-best-practices.md
Title: App development best practices - Azure Database for PostgreSQL description: Learn about best practices for building an app by using Azure Database for PostgreSQL.-- + ++ Last updated 12/10/2020
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concept-reserved-pricing.md
Title: Reserved compute pricing - Azure Database for PostgreSQL description: Prepay for Azure Database for PostgreSQL compute resources with reserved capacity-- + ++ Last updated 10/06/2021
postgresql Concepts Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aad-authentication.md
Title: Active Directory authentication - Azure Database for PostgreSQL - Single Server description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for PostgreSQL - Single Server-- ++ Last updated 07/23/2020
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aks.md
Title: Connect to Azure Kubernetes Service - Azure Database for PostgreSQL - Single Server description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Single Server-- Previously updated : 07/14/2020 ++ Last updated : 07/14/2020 # Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-audit.md
Title: Audit logging in Azure Database for PostgreSQL - Single Server description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 01/28/2020
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL description: Learn about Azure Advisor recommendations for PostgreSQL.-- + ++ Last updated 04/08/2021 # Azure Advisor for PostgreSQL
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-backup.md
Title: Backup and restore - Azure Database for PostgreSQL - Single Server description: Learn about automatic backups and restoring your Azure Database for PostgreSQL server - Single Server.-- + ++ Last updated 11/08/2021
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-business-continuity.md
Title: Business continuity - Azure Database for PostgreSQL - Single Server description: This article describes business continuity (point in time restore, data center outage, geo-restore, replicas) when using Azure Database for PostgreSQL.-- + ++ Last updated 08/07/2020
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-certificate-rotation.md
Title: Certificate rotation for Azure Database for PostgreSQL Single server description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for PostgreSQL Single server-- + ++ Last updated 09/02/2020
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-connection-libraries.md
Title: Connection libraries - Azure Database for PostgreSQL - Single Server description: This article describes several libraries and drivers that you can use when coding applications to connect and query Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 5/6/2019
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for PostgreSQL - Single Server description: Describes the connectivity architecture of your Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 10/15/2021
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-connectivity.md
Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Single Server description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Single Server.
-keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
-- + ++ Last updated 5/6/2019
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-access-and-security-private-link.md
Title: Private Link - Azure Database for PostgreSQL - Single server description: Learn how Private link works for Azure Database for PostgreSQL - Single server.-- + ++ Last updated 03/10/2020
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-access-and-security-vnet.md
Title: Virtual network rules - Azure Database for PostgreSQL - Single Server description: Learn how to use virtual network (vnet) service endpoints to connect to Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 07/17/2020
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-encryption-postgresql.md
Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Single server description: Azure Database for PostgreSQL Single server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.-- + ++ Last updated 01/13/2020
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Single Server description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Single Server-- + ++ Last updated 03/25/2021 # PostgreSQL extensions in Azure Database for PostgreSQL - Single Server
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for PostgreSQL - Single Server description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 07/17/2020
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-high-availability.md
 Title: High availability - Azure Database for PostgreSQL - Single Server description: This article provides information on high availability in Azure Database for PostgreSQL - Single Server-- + ++ Last updated 6/15/2020 + # High availability in Azure Database for PostgreSQL – Single Server The Azure Database for PostgreSQL – Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events such as user-initiated compute scale operations, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-infrastructure-double-encryption.md
Title: Infrastructure double encryption - Azure Database for PostgreSQL description: Learn about using Infrastructure double encryption to add a second layer of encryption with a service-managed keys.-- + ++ Last updated 6/30/2020
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-known-issues-limitations.md
 Title: Known issues and limitations for Azure Database for PostgreSQL - Single Server and Flexible Server description: Lists the known issues that customers should be aware of.-- + ++ Last updated 11/30/2021 + # Azure Database for PostgreSQL - Known issues and limitations This page provides a list of known issues in Azure Database for PostgreSQL that could impact your application. It also lists mitigations and recommendations to work around these issues.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-limits.md
Title: Limits - Azure Database for PostgreSQL - Single Server description: This article describes limits in Azure Database for PostgreSQL - Single Server, such as number of connection and storage engine options.-- + ++ Last updated 01/28/2020
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-logical.md
Title: Logical decoding - Azure Database for PostgreSQL - Single Server description: Describes logical decoding and wal2json for change data capture in Azure Database for PostgreSQL - Single Server-- + ++ Last updated 12/09/2020
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-monitoring.md
Title: Monitor and tune - Azure Database for PostgreSQL - Single Server description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 10/21/2020
postgresql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-performance-recommendations.md
Title: Performance Recommendations - Azure Database for PostgreSQL - Single Server description: This article describes the Performance Recommendation feature in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 08/21/2019 # Performance Recommendations in Azure Database for PostgreSQL - Single Server
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for PostgreSQL - Single Server description: This article describes the Planned maintenance notification feature in Azure Database for PostgreSQL - Single Server-- + ++ Last updated 2/17/2022 + # Planned maintenance notification in Azure Database for PostgreSQL - Single Server Learn how to prepare for planned maintenance events on your Azure Database for PostgreSQL.
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-pricing-tiers.md
Title: Pricing tiers - Azure Database for PostgreSQL - Single Server description: This article describes the compute and storage options in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 10/14/2020
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for PostgreSQL - Single Server description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 08/21/2019
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-store-best-practices.md
Title: Query Store best practices in Azure Database for PostgreSQL - Single Server description: This article describes best practices for the Query Store in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 5/6/2019
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-store-scenarios.md
Title: Query Store scenarios - Azure Database for PostgreSQL - Single Server description: This article describes some scenarios for the Query Store in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 5/6/2019 # Usage scenarios for Query Store
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-query-store.md
Title: Query Store - Azure Database for PostgreSQL - Single Server description: This article describes the Query Store feature in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 07/01/2020
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-read-replicas.md
Title: Read replicas - Azure Database for PostgreSQL - Single Server description: This article describes the read replica feature in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 05/29/2021
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-security.md
Title: Security in Azure Database for PostgreSQL - Single Server description: An overview of the security features in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 11/22/2019
postgresql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-server-logs.md
Title: Logs - Azure Database for PostgreSQL - Single Server description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Single Server-- + ++ Last updated 06/25/2020
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-servers.md
 Title: Servers - Azure Database for PostgreSQL - Single Server description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 5/6/2019 # Azure Database for PostgreSQL - Single Server
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-ssl-connection-security.md
Title: SSL/TLS - Azure Database for PostgreSQL - Single Server description: Instructions and information on how to configure TLS connectivity for Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 07/08/2020 # Configure TLS connectivity in Azure Database for PostgreSQL - Single Server
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-supported-versions.md
Title: Supported versions - Azure Database for PostgreSQL - Single Server description: Describes the supported Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 02/17/2021
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-version-policy.md
Title: Versioning policy - Azure Database for PostgreSQL - Single Server and Flexible Server (Preview) description: Describes the policy around Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.-- + ++ Last updated 12/14/2021
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Single Server' description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server."-- +++ ms.devlang: csharp
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-go.md
Title: 'Quickstart: Connect with Go - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Go programming language sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- +++ ms.devlang: golang
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL' description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL.-- -+ ++ ms.devlang: java+ Last updated 08/17/2020
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-nodejs.md
Title: 'Quickstart: Use Node.js to connect to Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Node.js code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- +++ ms.devlang: javascript
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-php.md
Title: 'Quickstart: Connect with PHP - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a PHP code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- +++ ms.devlang: php
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-python.md
Title: 'Quickstart: Connect with Python - Azure Database for PostgreSQL - Single Server' description: This quickstart provides Python code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- +++ ms.devlang: python
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-ruby.md
Title: 'Quickstart: Connect with Ruby - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Ruby code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- +++ ms.devlang: ruby
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-rust.md
Title: 'Quickstart: Connect with Rust - Azure Database for PostgreSQL - Single Server' description: This quickstart provides Rust code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-- + Previously updated : 03/26/2021++
+ms.devlang: rust
Last updated : 03/26/2021 # Quickstart: Use Rust to connect and query data in Azure Database for PostgreSQL - Single Server
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL
- + Last updated 11/30/2021
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
description: Learn about the concepts of backup and restore with Azure Database
- + Last updated 11/30/2021
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
Title: Overview of business continuity with Azure Database for PostgreSQL - Flexible Server description: Learn about the concepts of business continuity with Azure Database for PostgreSQL - Flexible Server- +
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Title: Compare Azure Database for PostgreSQL - Single Server and Flexible Server description: Detailed comparison of features and capabilities between Azure Database for PostgreSQL Single Server and Flexible Server- +
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
Title: Compute and Storage Options - Azure Database for PostgreSQL - Flexible Server description: This article describes the compute and storage options in Azure Database for PostgreSQL - Flexible Server.- +
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Flexible Server description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server- +
Using the [Azure portal](https://portal.azure.com):
4. Select extensions you wish to allow-list. :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text=" Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation ":::
-Using [Azure CLI](https://docs.microsoft.com/cli/azure/):
+Using [Azure CLI](/cli/azure/):
You can allow-list extensions via the CLI [parameter set command](https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
Using [Azure CLI](https://docs.microsoft.com/cli/azure/):
```azurecli
az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value <extension name>,<extension name>
```
- Using [ARM Template](https://docs.microsoft.com/azure/azure-resource-manager/templates/):
+ Using [ARM Template](../../azure-resource-manager/templates/index.yml):
The example below allow-lists the extensions dblink, dict_xsyn, and pg_buffercache on the server mypostgreserver.
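The JSON sample that followed here was truncated in this listing. As a rough sketch only (not the article's original sample): allow-listing extensions in an ARM template means setting the `azure.extensions` server parameter through a `configurations` child resource of the flexible server. The `apiVersion` and `source` values below are assumptions:

```json
{
  "type": "Microsoft.DBforPostgreSQL/flexibleServers/configurations",
  "apiVersion": "2021-06-01",
  "name": "mypostgreserver/azure.extensions",
  "properties": {
    "value": "DBLINK,DICT_XSYN,PG_BUFFERCACHE",
    "source": "user-override"
  }
}
```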
Using the [Azure portal](https://portal.azure.com):
:::image type="content" source="./media/concepts-extensions/shared-libraries.png" alt-text="Screenshot showing Azure Database for PostgreSQL - setting the shared preload libraries parameter for extension installation.":::
-Using [Azure CLI](https://docs.microsoft.com/cli/azure/):
+Using [Azure CLI](/cli/azure/):
You can set `shared_preload_libraries` via the CLI [parameter set command](https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
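By analogy with the allow-list command shown earlier, `shared_preload_libraries` can be set with the same `az postgres flexible-server parameter set` command; the placeholder values here are illustrative:

```azurecli
az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name shared_preload_libraries --value <library name>,<library name>
```

Note that `shared_preload_libraries` is a static server parameter in PostgreSQL, so the change takes effect only after a server restart.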
For more details on the restore method with a Timescale-enabled database, see [Timesca
## Next steps
-If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
Title: Overview of zone redundant high availability with Azure Database for PostgreSQL - Flexible Server description: Learn about the concepts of zone redundant high availability with Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
Title: Limits - Azure Database for PostgreSQL - Flexible Server description: This article describes limits in Azure Database for PostgreSQL - Flexible Server, such as the number of connections and storage engine options.
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
Title: Logs - Azure Database for PostgreSQL - Flexible Server description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
Title: Logical replication and logical decoding - Azure Database for PostgreSQL - Flexible Server description: Learn about using logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
Title: Scheduled maintenance - Azure Database for PostgreSQL - Flexible server description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Flexible server. Last updated 11/30/2021
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Title: Monitoring and metrics - Azure Database for PostgreSQL - Flexible Server description: This article describes monitoring and metrics features in Azure Database for PostgreSQL - Flexible Server.
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Title: Networking overview - Azure Database for PostgreSQL - Flexible Server description: Learn about connectivity and networking options in the Flexible Server deployment option for Azure Database for PostgreSQL. Last updated 11/30/2021
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Title: PgBouncer - Azure Database for PostgreSQL - Flexible Server description: This article provides an overview of the built-in PgBouncer extension.
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for PostgreSQL - Flexible Server description: Describes the server parameters in Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-servers.md
Title: Servers in Azure Database for PostgreSQL - Flexible Server description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Flexible Server.
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Title: Supported versions - Azure Database for PostgreSQL - Flexible Server description: Describes the supported PostgreSQL major and minor versions in Azure Database for PostgreSQL - Flexible Server.
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
Title: 'Quickstart: Connect using Azure CLI - Azure Database for PostgreSQL - Flexible Server' description: This quickstart provides several ways to connect with Azure CLI with Azure Database for PostgreSQL - Flexible Server. Last updated 11/30/2021
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Flexible Server' description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server." ms.devlang: csharp
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexible Server' description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL Flexible server. ms.devlang: java
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for PostgreSQL - Flexible Server' description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server.
ms.devlang: python
Last updated 11/30/2021
postgresql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-high-availability-cli.md
Title: Manage zone redundant high availability - Azure CLI - Azure Database for PostgreSQL Flexible Server description: This article describes how to configure zone redundant high availability in Azure Database for PostgreSQL Flexible Server with the Azure CLI. Last updated 11/30/2021
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
Last updated 11/30/2021
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
Title: Azure Database for PostgreSQL - Flexible Server - Scheduled maintenance - Azure portal description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Flexible server from the Azure portal.
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Title: Manage zone redundant high availability - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to enable or disable zone redundant high availability in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL - Flexible Server description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure CLI. Last updated 11/30/2021
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
Title: 'Manage server - Azure portal - Azure Database for PostgreSQL - Flexible Server' description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure portal. Last updated 11/30/2021
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
Title: Restart - Azure portal - Azure Database for PostgreSQL Flexible Server description: This article describes how to restart operations in Azure Database for PostgreSQL through the Azure CLI. Last updated 11/30/2021
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Title: Restart - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to perform restart operations in Azure Database for PostgreSQL through the Azure portal.
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for PostgreSQL - Flexible Server with Azure CLI description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure CLI. Last updated 11/30/2021
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Restore - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure portal.
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Title: Scale operations - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to perform scale operations in Azure Database for PostgreSQL through the Azure portal.
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
Title: Stop/start - Azure CLI - Azure Database for PostgreSQL Flexible Server description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure CLI. Last updated 11/30/2021
postgresql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-portal.md
Title: Stop/start - Azure portal - Azure Database for PostgreSQL Flexible Server description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure portal.
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors description: This topic gives guidance on troubleshooting common issues with Azure CLI when using PostgreSQL Flexible Server. Last updated 11/30/2021
postgresql Howto Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-alert-on-metrics.md
Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Flexible Server description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Flexible Server from the Azure portal.
postgresql Howto Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-and-access-logs.md
Title: Configure and Access Logs - Flexible Server - Azure Database for PostgreSQL description: How to access database logs for Azure Database for PostgreSQL - Flexible Server
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-cli.md
Title: Configure parameters - Azure Database for PostgreSQL - Flexible Server description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Flexible Server using the Azure CLI. ms.devlang: azurecli
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Title: Azure Database for PostgreSQL - Flexible Server description: Provides an overview of Azure Database for PostgreSQL - Flexible Server.
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
Title: 'Connect to Azure Database for PostgreSQL flexible server with private access in the Azure portal' description: This article shows how to create and connect to Azure Database for PostgreSQL flexible server with private access or virtual network using Azure portal. Last updated 11/30/2021
postgresql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for PostgreSQL - Flexible Server' description: This quickstart describes how to use the Azure CLI to create an Azure Database for PostgreSQL Flexible Server in an Azure resource group. ms.devlang: azurecli
postgresql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-portal.md
Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - Flexible Server' description: Quickstart guide to creating and managing an Azure Database for PostgreSQL - Flexible Server by using the Azure portal user interface.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Azure Database for PostgreSQL - Flexible Server Release notes description: Release notes of Azure Database for PostgreSQL - Flexible Server.
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server b
description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server. Last updated 11/30/2021
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
Title: Tutorial on how to Deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server in virtual network description: Deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server in virtual network. ms.devlang: azurecli Last updated 11/30/2021
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
Title: 'Tutorial: Create Azure Database for PostgreSQL - Flexible Server and Azure App Service Web App in same virtual network' description: Quickstart guide to create Azure Database for PostgreSQL - Flexible Server with Web App in a virtual network. ms.devlang: azurecli Last updated 11/30/2021
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-connect-query-guide.md
Title: Connect and query - Single Server PostgreSQL description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Single Server and run queries. Last updated 09/21/2020
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-deploy-github-action.md
Title: 'Quickstart: Connect to Azure PostgreSQL with GitHub Actions' description: Use Azure PostgreSQL from a GitHub Actions workflow. Last updated: 10/12/2020 # Quickstart: Use GitHub Actions to connect to Azure PostgreSQL
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL description: Learn how to manage an Azure Database for PostgreSQL server from the Azure CLI. Last updated 9/22/2020
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-upgrade-using-dump-and-restore.md
Title: Upgrade using dump and restore - Azure Database for PostgreSQL description: Describes offline upgrade methods using dump and restore databases to migrate to a higher version of Azure Database for PostgreSQL. Last updated 11/30/2021
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-alert-on-metric.md
Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Single Server from the Azure portal. Last updated 5/6/2019
postgresql Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-cli.md
Title: Auto-grow storage - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure CLI in Azure Database for PostgreSQL - Single Server. Last updated: 8/7/2019 # Auto-grow Azure Database for PostgreSQL storage - Single Server using the Azure CLI This article describes how you can configure an Azure Database for PostgreSQL server's storage to grow without impacting the workload.
postgresql Howto Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure portal in Azure Database for PostgreSQL - Single Server. Last updated 5/29/2019 # Auto grow storage using the Azure portal in Azure Database for PostgreSQL - Single Server
postgresql Howto Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for PostgreSQL description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for PostgreSQL. Last updated 06/08/2020
postgresql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-cli.md
Title: Private Link - Azure CLI - Azure Database for PostgreSQL - Single server description: Learn how to configure private link for Azure Database for PostgreSQL - Single server from Azure CLI. Last updated 01/09/2020
postgresql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-portal.md
Title: Private Link - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to configure private link for Azure Database for PostgreSQL - Single server from Azure portal. Last updated 01/09/2020
postgresql Howto Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-logs-in-portal.md
Title: Manage logs - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server from the Azure portal. Last updated 5/6/2019
postgresql Howto Configure Server Logs Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-logs-using-cli.md
Title: Manage logs - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
ms.devlang: azurecli
Last updated: 5/6/2019 # Configure and access server logs by using Azure CLI You can download the PostgreSQL server error logs by using the command-line interface (Azure CLI). However, access to transaction logs isn't supported.
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-cli.md
Title: Configure parameters - Azure Database for PostgreSQL - Single Server description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Single Server using the Azure CLI.
ms.devlang: azurecli
Last updated 06/19/2019
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-portal.md
Title: Configure server parameters - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL through the Azure portal. Last updated 02/28/2018
postgresql Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-server-parameters-using-powershell.md
Title: Configure server parameters - Azure PowerShell - Azure Database for PostgreSQL description: This article describes how to configure the service parameters in Azure Database for PostgreSQL using PowerShell.
ms.devlang: azurepowershell
Last updated: 06/08/2020 # Customize Azure Database for PostgreSQL server parameters using PowerShell
postgresql Howto Configure Sign In Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-sign-in-aad-authentication.md
Title: Use Azure Active Directory - Azure Database for PostgreSQL - Single Server description: Learn how to set up Azure Active Directory (AAD) for authentication with Azure Database for PostgreSQL - Single Server. Last updated 05/26/2021
postgresql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-connect-with-managed-identity.md
Title: Connect with Managed Identity - Azure Database for PostgreSQL - Single Server description: Learn how to connect and authenticate using Managed Identity with Azure Database for PostgreSQL. Last updated: 05/19/2020 # Connect with Managed Identity to Azure Database for PostgreSQL
postgresql Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for PostgreSQL description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for PostgreSQL. Last updated 8/6/2020
postgresql Howto Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-create-manage-server-portal.md
Title: Manage Azure Database for PostgreSQL - Azure portal description: Learn how to manage an Azure Database for PostgreSQL server from the Azure portal. Last updated 11/20/2019
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-create-users.md
Title: Create users - Azure Database for PostgreSQL - Single Server description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Single Server. Last updated 09/22/2019
postgresql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-cli.md
Title: Data encryption - Azure CLI - for Azure Database for PostgreSQL - Single server description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure CLI. Last updated 03/30/2020
postgresql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-portal.md
Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Single server description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal. Last updated 01/13/2020
postgresql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-troubleshoot.md
Title: Troubleshoot data encryption - Azure Database for PostgreSQL - Single Server description: Learn how to troubleshoot the data encryption on your Azure Database for PostgreSQL - Single Server. Last updated 02/13/2020
postgresql Howto Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-validation.md
Title: How to ensure validation of the Azure Database for PostgreSQL - Data encryption description: Learn how to validate the encryption of the Azure Database for PostgreSQL - Data encryption using the customer-managed key. Last updated 04/28/2020
postgresql Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-deny-public-network-access.md
Title: Deny Public Network Access - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for PostgreSQL Single server. Last updated 03/10/2020
postgresql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-double-encryption.md
Title: Infrastructure double encryption - Azure portal - Azure Database for PostgreSQL description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for PostgreSQL. Last updated 03/14/2021
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-firewall-using-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Single Server description: Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal. Last updated: 5/6/2019 # Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal Server-level firewall rules can be used to manage access to an Azure Database for PostgreSQL Server from a specified IP address or range of IP addresses.
postgresql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-vnet-using-cli.md
Title: Use virtual network rules - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how to create and manage VNet service endpoints and rules for Azure Database for PostgreSQL using the Azure CLI command line.
ms.devlang: azurecli
Last updated: 01/26/2022 # Create and manage VNet service endpoints for Azure Database for PostgreSQL - Single Server using Azure CLI Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL.
postgresql Howto Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-vnet-using-portal.md
Title: Use virtual network rules - Azure portal - Azure Database for PostgreSQL - Single Server description: Create and manage VNet service endpoints and rules for Azure Database for PostgreSQL - Single Server using the Azure portal. Last updated 5/6/2019 # Create and manage VNet service endpoints and VNet rules in Azure Database for PostgreSQL - Single Server by using the Azure portal
postgresql Howto Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-from-oracle.md
Title: "Oracle to Azure Database for PostgreSQL: Migration guide" description: This guide helps you to migrate your Oracle schema to Azure Database for PostgreSQL. Last updated 03/18/2021
postgresql Howto Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-online.md
Title: Minimal-downtime migration to Azure Database for PostgreSQL - Single Server description: This article describes how to perform a minimal-downtime migration of a PostgreSQL database to Azure Database for PostgreSQL - Single Server by using the Azure Database Migration Service. Last updated 5/6/2019
postgresql Howto Migrate Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-using-dump-and-restore.md
Title: Dump and restore - Azure Database for PostgreSQL - Single Server description: You can extract a PostgreSQL database into a dump file. Then, you can restore from a file created by pg_dump in Azure Database for PostgreSQL Single Server. Last updated 09/22/2020
postgresql Howto Migrate Using Export And Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-migrate-using-export-and-import.md
Title: Migrate a database - Azure Database for PostgreSQL - Single Server description: Describes how to extract a PostgreSQL database into a script file and import the data into the target database from that file. Last updated 09/22/2020 # Migrate your PostgreSQL database using export and import
postgresql Howto Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for PostgreSQL - Single Server description: Move an Azure Database for PostgreSQL server from one Azure region to another using a read replica and the Azure portal. Last updated 06/29/2020 #Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region
postgresql Howto Optimize Autovacuum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-optimize-autovacuum.md
Title: Optimize autovacuum - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize autovacuum on an Azure Database for PostgreSQL - Single Server. Last updated 07/09/2020
postgresql Howto Optimize Bulk Inserts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-optimize-bulk-inserts.md
Title: Optimize bulk inserts - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize bulk insert operations on an Azure Database for PostgreSQL - Single Server. Last updated 5/6/2019
postgresql Howto Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-optimize-query-stats-collection.md
Title: Optimize query stats collection - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize query stats collection on an Azure Database for PostgreSQL - Single Server. Last updated 5/6/2019
postgresql Howto Optimize Query Time With Toast Table Storage Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-optimize-query-time-with-toast-table-storage-strategy.md
Title: Optimize query time by using the TOAST table storage strategy in Azure Database for PostgreSQL - Single Server description: This article describes how to optimize query time with the TOAST table storage strategy on an Azure Database for PostgreSQL - Single Server. Last updated 5/6/2019
postgresql Howto Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-cli.md
Title: Manage read replicas - Azure CLI, REST API - Azure Database for PostgreSQL - Single Server description: Learn how to manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure CLI and REST API. Last updated 12/17/2020
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Single Server description: Learn how to manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure portal. Last updated 11/05/2020
postgresql Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-read-replicas-powershell.md
Title: Manage read replicas - Azure PowerShell - Azure Database for PostgreSQL description: Learn how to set up and manage read replicas in Azure Database for PostgreSQL using PowerShell. Last updated 06/08/2020
postgresql Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure CLI. Last updated 5/6/2019
postgresql Howto Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure portal. Last updated 12/20/2020
postgresql Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restart-server-powershell.md
Title: Restart server - Azure PowerShell - Azure Database for PostgreSQL description: This article describes how you can restart an Azure Database for PostgreSQL server using PowerShell. Last updated 06/08/2020
postgresql Howto Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-dropped-server.md
Title: Restore a dropped Azure Database for PostgreSQL server description: This article describes how to restore a dropped server in Azure Database for PostgreSQL using the Azure portal. Last updated 04/26/2021 # Restore a dropped Azure Database for PostgreSQL server
postgresql Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-cli.md
Title: Backup and restore - Azure CLI - Azure Database for PostgreSQL - Single Server description: Learn how to set backup configurations and restore a server in Azure Database for PostgreSQL - Single Server by using the Azure CLI.
+ms.devlang: azurecli
Last updated : 10/25/2019 # How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure CLI
postgresql Howto Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-portal.md
Title: Backup and restore - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal. Last updated 6/30/2020
postgresql Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-restore-server-powershell.md
Title: Backup and restore - Azure PowerShell - Azure Database for PostgreSQL description: Learn how to back up and restore a server in Azure Database for PostgreSQL by using Azure PowerShell.
+ms.devlang: azurepowershell
Last updated : 06/08/2020 # How to back up and restore an Azure Database for PostgreSQL server using PowerShell
postgresql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-tls-configurations.md
Title: TLS configuration - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to set TLS configuration using Azure portal for your Azure Database for PostgreSQL Single server. Last updated 06/02/2020
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-troubleshoot-common-connection-issues.md
Title: Troubleshoot connections - Azure Database for PostgreSQL - Single Server description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Single Server.
-keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
Last updated 5/6/2019
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-audit.md
Title: Audit logging - Azure Database for PostgreSQL - Hyperscale (Citus) description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-backup.md
Title: Backup and restore – Hyperscale (Citus) - Azure Database for PostgreSQL description: Protecting data from accidental corruption or deletion
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-colocation.md
Title: Table colocation - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to store related information together for faster queries
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-columnar.md
Title: Columnar table storage - Hyperscale (Citus) - Azure Database for PostgreSQL description: Compressing data using columnar storage
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
Title: Connection pooling – Hyperscale (Citus) - Azure Database for PostgreSQL description: Scaling client database connections
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
Title: Distributed data – Hyperscale (Citus) - Azure Database for PostgreSQL description: Learn about distributed tables, reference tables, local tables, and shards in Azure Database for PostgreSQL.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-firewall-rules.md
Title: Public access - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes public access for Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-high-availability.md
Title: High availability – Hyperscale (Citus) - Azure Database for PostgreSQL description: High availability and disaster recovery concepts
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-maintenance.md
Title: Scheduled maintenance - Azure Database for PostgreSQL - Hyperscale (Citus) description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
Title: Monitor and tune - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-nodes.md
Title: Nodes – Hyperscale (Citus) - Azure Database for PostgreSQL description: Learn about the types of nodes and tables in a server group in Azure Database for PostgreSQL.
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-private-access.md
Title: Private access - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes private access for Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-read-replicas.md
Title: Read replicas - Azure Database for PostgreSQL - Hyperscale (Citus) description: This article describes the read replica feature in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-security-overview.md
Title: Security overview - Hyperscale (Citus) - Azure Database for PostgreSQL description: Information protection and network security for Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-server-group.md
Title: Server group - Hyperscale (Citus) - Azure Database for PostgreSQL description: What is a server group in Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-alert-on-metric.md
Title: Configure alerts - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Howto App Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-type.md
Title: Determine application type - Hyperscale (Citus) - Azure Database for PostgreSQL description: Identify your application for effective distributed data modeling
postgresql Howto Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-choose-distribution-column.md
Title: Choose distribution columns – Hyperscale (Citus) - Azure Database for PostgreSQL description: Learn how to choose distribution columns in common scenarios in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-compute-quota.md
Title: Change compute quotas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus) description: Learn how to increase vCore quotas per region in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-create-users.md
Title: Create users - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-high-availability.md
Title: Configure high availability - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to enable or disable high availability
postgresql Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-logging.md
Title: Logs - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to access database logs for Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-maintenance.md
Title: Azure Database for PostgreSQL - Hyperscale (Citus) - Scheduled maintenance - Azure portal description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-manage-firewall-using-portal.md
Title: Manage firewall rules - Hyperscale (Citus) - Azure Database for PostgreSQL description: Create and manage firewall rules for Azure Database for PostgreSQL - Hyperscale (Citus) using the Azure portal
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
Title: Modify distributed tables - Hyperscale (Citus) - Azure Database for PostgreSQL description: SQL commands to create and modify distributed tables - Hyperscale (Citus) using the Azure portal
postgresql Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-monitoring.md
Title: How to view metrics - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to access database metrics for Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-private-access.md
Title: Enable private access - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to set up private link in a server group for Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus) description: Learn how to manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restart.md
Title: Restart server - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to restart the database in Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-restore-portal.md
Title: Restore - Hyperscale (Citus) - Azure Database for PostgreSQL - Azure portal description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Hyperscale (Citus) through the Azure portal.
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-grow.md
Title: Scale server group - Hyperscale (Citus) - Azure Database for PostgreSQL description: Adjust server group memory, disk, and CPU resources to deal with increased load
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-initial.md
Title: Initial server group size - Hyperscale (Citus) - Azure Database for PostgreSQL description: Pick the right initial size for your use case
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-scale-rebalance.md
Title: Rebalance shards - Hyperscale (Citus) - Azure Database for PostgreSQL description: Distribute shards evenly across servers for better performance
postgresql Howto Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ssl-connection-security.md
Title: Transport Layer Security (TLS) - Hyperscale (Citus) - Azure Database for PostgreSQL description: Instructions and information to configure Azure Database for PostgreSQL - Hyperscale (Citus) and associated applications to properly use TLS connections.
postgresql Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-table-size.md
Title: Determine table size - Hyperscale (Citus) - Azure Database for PostgreSQL description: How to find the true size of distributed tables in a Hyperscale (Citus) server group
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-common-connection-issues.md
Title: Troubleshoot connections - Hyperscale (Citus) - Azure Database for PostgreSQL description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus) keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
postgresql Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-troubleshoot-read-only.md
Title: Troubleshoot read-only access - Hyperscale (Citus) - Azure Database for PostgreSQL description: Learn why a Hyperscale (Citus) server group can become read-only, and what to do keywords: postgresql connection,read only
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-upgrade.md
Title: Upgrade server group - Hyperscale (Citus) - Azure Database for PostgreSQL description: This article describes how you can upgrade PostgreSQL and Citus in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
Title: Useful diagnostic queries - Hyperscale (Citus) - Azure Database for PostgreSQL description: Queries to learn about distributed data and more
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/overview.md
Title: Overview of Azure Database for PostgreSQL - Hyperscale (Citus) description: Provides an overview of the Hyperscale (Citus) deployment option
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/product-updates.md
Title: Product updates for Azure Database for PostgreSQL - Hyperscale (Citus) description: New features and features in preview
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
Title: 'Quickstart: connect to a server group with psql - Hyperscale (Citus) - Azure Database for PostgreSQL' description: Quickstart to connect psql to Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
Title: 'Quickstart: create a server group - Hyperscale (Citus) - Azure Database for PostgreSQL' description: Quickstart to create and query distributed tables on Azure Database for PostgreSQL Hyperscale (Citus).
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
Title: 'Quickstart: distribute tables - Hyperscale (Citus) - Azure Database for PostgreSQL' description: Quickstart to distribute table data across nodes in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
Title: 'Quickstart: Run queries - Hyperscale (Citus) - Azure Database for PostgreSQL' description: Quickstart to run queries on table data in Azure Database for PostgreSQL - Hyperscale (Citus).
postgresql Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-extensions.md
Title: Extensions – Hyperscale (Citus) - Azure Database for PostgreSQL description: Describes the ability to extend the functionality of your database by using extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
Title: SQL functions – Hyperscale (Citus) - Azure Database for PostgreSQL description: Functions in the Hyperscale (Citus) SQL API
postgresql Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-limits.md
Title: Limits and limitations – Hyperscale (Citus) - Azure Database for PostgreSQL description: Current limits for Hyperscale (Citus) server groups
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-metadata.md
Title: System tables – Hyperscale (Citus) - Azure Database for PostgreSQL description: Metadata for distributed query execution
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
Title: Reference – Hyperscale (Citus) - Azure Database for PostgreSQL description: Overview of the Hyperscale (Citus) SQL API
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
Title: Server parameters – Hyperscale (Citus) - Azure Database for PostgreSQL description: Parameters in the Hyperscale (Citus) SQL API
postgresql Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-versions.md
Title: Supported versions – Hyperscale (Citus) - Azure Database for PostgreSQL description: PostgreSQL versions available in Azure Database for PostgreSQL - Hyperscale (Citus)
PostgreSQL - Hyperscale (Citus).
Depending on which version of PostgreSQL is running in a server group, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, Postgres versions 12-14 come with
-Citus 10, and earlier Postgres versions come with Citus 9.5.
+will be installed as well. In particular, PostgreSQL versions 12-14 come with
+Citus 10, and earlier PostgreSQL versions come with Citus 9.5.
## Next steps
postgresql Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-compute.md
Title: Compute and storage – Hyperscale (Citus) - Azure Database for PostgreSQL description: Options for a Hyperscale (Citus) server group, including node compute and storage
postgresql Resources Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-pricing.md
Title: Pricing – Hyperscale (Citus) - Azure Database for PostgreSQL description: Pricing and how to save with Hyperscale (Citus)
postgresql Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/resources-regions.md
Title: Regional availability – Hyperscale (Citus) - Azure Database for PostgreSQL description: Where you can run a Hyperscale (Citus) server group
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
Title: 'Tutorial: Design a multi-tenant database - Hyperscale (Citus) - Azure Database for PostgreSQL' description: This tutorial shows how to power a scalable multi-tenant application with Azure Database for PostgreSQL Hyperscale (Citus).
postgresql Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-design-database-realtime.md
Title: 'Tutorial: Design a real-time dashboard - Hyperscale (Citus) - Azure Database for PostgreSQL' description: This tutorial shows how to parallelize real-time dashboard queries with Azure Database for PostgreSQL Hyperscale (Citus).
postgresql Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-private-access.md
Title: Create server group with private access - Hyperscale (Citus) - Azure Database for PostgreSQL description: Connect a VM to a server group private endpoint
postgresql Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/tutorial-shard.md
Title: 'Tutorial: Shard data on worker nodes - Hyperscale (Citus) - Azure Database for PostgreSQL' description: This tutorial shows how to create distributed tables and visualize their data distribution with Azure Database for PostgreSQL Hyperscale (Citus).
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview-postgres-choose-server-options.md
Title: Choose the right PostgreSQL server option in Azure description: Provides guidelines for choosing the right PostgreSQL server option for your deployments. Last updated 12/01/2021 # Choose the right PostgreSQL server option in Azure
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview-single-server.md
Title: Azure Database for PostgreSQL Single Server description: Provides an overview of Azure Database for PostgreSQL Single Server. Last updated 11/30/2021 # Azure Database for PostgreSQL Single Server
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview.md
Title: What is Azure Database for PostgreSQL description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of flexible server. Last updated 01/24/2022
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/partners-migration-postgresql.md
Title: Azure Database for PostgreSQL migration partners description: Lists of third-party migration partners with solutions that support Azure Database for PostgreSQL. Last updated 08/07/2018
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated : 03/08/2022 # Azure Policy built-in definitions for Azure Database for PostgreSQL
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-postgresql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure DB for PostgreSQL - ARM template' description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server by using an Azure Resource Manager template. Last updated 02/11/2021
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-postgresql-server-database-using-azure-powershell.md
Title: 'Quickstart: Create server - Azure PowerShell - Azure Database for PostgreSQL - Single Server' description: Quickstart guide to create an Azure Database for PostgreSQL - Single Server using Azure PowerShell.
+ms.devlang: azurepowershell
Last updated : 06/08/2020 # Quickstart: Create an Azure Database for PostgreSQL - Single Server using PowerShell
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-database-azure-cli.md
Title: 'Quickstart: Create server - Azure CLI - Azure Database for PostgreSQL - single server' description: In this quickstart guide, you'll create an Azure Database for PostgreSQL server by using the Azure CLI.
+ms.devlang: azurecli
Last updated : 01/26/2022 # Quickstart: Create an Azure Database for PostgreSQL server by using the Azure CLI This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create a single Azure Database for PostgreSQL server in five minutes.
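The core of that quickstart can be sketched in two commands. This is a hedged sketch, not the article's exact script; the resource group, server name, region, credentials, and SKU below are placeholder values you would replace with your own.

```shell
# Placeholder values -- substitute your own names, region, and credentials.
az group create --name myresourcegroup --location westus

# Create a Single Server instance; GP_Gen5_2 is a general-purpose,
# Gen5, 2-vCore SKU used here purely as an example.
az postgres server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location westus \
  --admin-user myadmin \
  --admin-password '<server_admin_password>' \
  --sku-name GP_Gen5_2 \
  --version 11
```

Both commands require an authenticated Azure CLI session (`az login`) or Azure Cloud Shell, and the server name must be globally unique.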
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-database-portal.md
Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - single server' description: In this quickstart guide, you'll create and manage an Azure Database for PostgreSQL server by using the Azure portal. Last updated 10/18/2020
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-up-azure-cli.md
Title: 'Quickstart: Create server - az postgres up - Azure Database for PostgreSQL - Single Server' description: Quickstart guide to create Azure Database for PostgreSQL - Single Server using Azure CLI (command-line interface) up command.
+ms.devlang: azurecli
Last updated : 01/25/2022 # Quickstart: Use the az postgres up command to create an Azure Database for PostgreSQL - Single Server
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/sample-scripts-azure-cli.md
Title: Azure CLI samples - Azure Database for PostgreSQL - Single Server | Microsoft Docs description: This article lists several Azure CLI code samples available for interacting with Azure Database for PostgreSQL - Single Server.
+ms.devlang: azurecli
Last updated 09/17/2021
-keywords: azure cli samples, azure cli code samples, azure cli script samples
# Azure CLI samples for Azure Database for PostgreSQL - Single Server
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
Title: Azure CLI script - Change server configurations (PostgreSQL) description: This sample CLI script lists all available server configuration options and updates the value of one of the options. ms.devlang: azurecli
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
Title: Azure CLI Script - Create an Azure Database for PostgreSQL description: Azure CLI Script Sample - Creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. ms.devlang: azurecli
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
Title: Azure CLI script - Restore an Azure Database for PostgreSQL server description: This sample Azure CLI script shows how to restore an Azure Database for PostgreSQL server and its databases to a previous point in time. ms.devlang: azurecli
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
Title: Azure CLI script - Scale and monitor Azure Database for PostgreSQL description: Azure CLI Script Sample - Scale Azure Database for PostgreSQL server to a different performance level after querying the metrics. ms.devlang: azurecli
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
Title: Azure CLI script - Download server logs in Azure Database for PostgreSQL description: This sample Azure CLI script shows how to enable and download the server logs of an Azure Database for PostgreSQL server. ms.devlang: azurecli
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated : 03/10/2022 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md)
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-azure-cli.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure CLI' description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure CLI.
+ms.devlang: azurecli
Last updated 01/26/2022
Create a server with the [az postgres server create](/cli/azure/postgres/server#
## Configure a server-based firewall rule
-Create a firewall rule with the [az postgres server firewall-rule create](/azure/postgresql/concepts-firewall-rules) command to give your local environment access to connect to the server.
+Create a firewall rule with the [az postgres server firewall-rule create](./concepts-firewall-rules.md) command to give your local environment access to connect to the server.
:::code language="azurecli" source="~/azure_cli_scripts/postgresql/create-postgresql-server-and-firewall-rule/create-postgresql-server-and-firewall-rule.sh" id="CreateFirewallRule":::
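For readers without access to the referenced sample script, the same call can be sketched directly (the server name, rule name, and IP address below are placeholders, not values from this article):

```shell
# Sketch: allow a single client IP through the server-level firewall.
az postgres server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```

Setting both the start and end address to the same value opens the firewall for exactly one client IP.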
In this tutorial, you learned how to use Azure CLI (command-line interface) and
> * Restore data > [!div class="nextstepaction"]
-> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
+> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
postgresql Tutorial Design Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-azure-portal.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure portal' description: This tutorial shows how to design your first Azure Database for PostgreSQL - Single Server using the Azure portal. Last updated 06/25/2019 # Tutorial: Design an Azure Database for PostgreSQL - Single Server using the Azure portal
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-powershell.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure PowerShell' description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure PowerShell.
+ms.devlang: azurepowershell
Last updated : 06/08/2020 # Tutorial: Design an Azure Database for PostgreSQL - Single Server using PowerShell
postgresql Tutorial Monitor And Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-monitor-and-tune.md
Title: 'Tutorial: Monitor and tune - Azure Database for PostgreSQL - Single Server' description: This tutorial walks through monitoring and tuning in Azure Database for PostgreSQL - Single Server. Last updated 5/6/2019
postgresql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/videos.md
Title: Azure Database for PostgreSQL Videos description: This page lists video content relevant for learning Azure Database for PostgreSQL. Last updated 07/30/2020 # Azure Database for PostgreSQL videos
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
Contact your support representative and ask them to register your Azure subscription for access to Azure Private 5G Core.
-Once your support representative has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
+Once your support representative has confirmed your access, register the Mobile Network resource provider (Microsoft.MobileNetwork) for your subscription, as described in [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
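The registration step above can be sketched with the Azure CLI (the provider namespace comes from this article; the polling query is a standard pattern):

```shell
# Register the Mobile Network resource provider on the current subscription.
az provider register --namespace Microsoft.MobileNetwork

# Registration is asynchronous; check until the state reads "Registered".
az provider show --namespace Microsoft.MobileNetwork \
  --query registrationState --output tsv
```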
## Allocate subnets and IP addresses
You must do the following for each site you want to add to your private mobile n
| Step No. | Description | Detailed instructions | |--|--|--|
-| 1. | Order and prepare your Azure Stack Edge Pro device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-prep?tabs=azure-portal) |
-| 2. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install) |
-| 3. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect) |
-| 4. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy) |
-| 5. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time) |
-| 6. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates) |
-| 7. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-activate) |
-| 8. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 2.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](/azure/databox-online/azure-stack-edge-gpu-troubleshoot) |
+| 1. | Order and prepare your Azure Stack Edge Pro device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md?tabs=azure-portal) |
+| 2. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data network</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md) |
+| 3. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md) |
+| 4. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md) |
+| 5. | Configure a name, Domain Name System (DNS) name, and (optionally) time settings. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
+| 6. | Configure certificates for your Azure Stack Edge Pro device. | [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md) |
+| 7. | Activate your Azure Stack Edge Pro device. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
+| 8. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 2.</br></br>For all other ports, you can ignore the warning.</br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
| 9. | Deploy an Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on your Azure Stack Edge Pro device. At the end of this step, the Kubernetes cluster will be connected to Azure Arc and ready to host a packet core instance. During this step, you'll need to use the information you collected in [Allocate subnets and IP addresses](#allocate-subnets-and-ip-addresses). | Contact your support representative for detailed instructions. |
You must do the following for each site you want to add to your private mobile n
You can now collect the information you'll need to deploy your own private mobile network. -- [Collect the required information to deploy your own private mobile network](collect-required-information-for-private-mobile-network.md)
+- [Collect the required information to deploy your own private mobile network](collect-required-information-for-private-mobile-network.md)
public-multi-access-edge-compute-mec Considerations For Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/considerations-for-deployment.md
Applications you deploy in the Azure public MEC can be made available and resili
- [Deploy resources in active/standby](/azure/architecture/example-scenario/hybrid/multi-access-edge-compute-ha), with primary resources in the Azure public MEC and standby resources in the parent Azure region. If there's a failure in the Azure public MEC, the resources in the parent region become active. -- Use the [Azure backup and disaster recovery solution](/azure/architecture/framework/resiliency/backup-and-recovery), which provides [Azure Site Recovery](/azure/site-recovery/site-recovery-overview) and Azure Backup features. This solution:
+- Use the [Azure backup and disaster recovery solution](/azure/architecture/framework/resiliency/backup-and-recovery), which provides [Azure Site Recovery](../site-recovery/site-recovery-overview.md) and Azure Backup features. This solution:
- Actively replicates VMs from the Azure public MEC to the parent region and makes them available to fail over and fail back if there's an outage. - Backs up VMs to prevent data corruption or lost data.
A trade-off exists between availability and latency. Although failing over the a
To deploy a virtual machine in Azure public MEC using an Azure Resource Manager (ARM) template, advance to the following article: > [!div class="nextstepaction"]
-> [Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template](quickstart-create-vm-azure-resource-manager-template.md)
+> [Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template](quickstart-create-vm-azure-resource-manager-template.md)
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
Azure public MEC supports creating Standard SSD managed disks only. All other Az
### Default outbound access
-Because Azure public MEC doesn't support [default outbound access](/azure/virtual-network/ip-services/default-outbound-access), manage your outbound connectivity by using one of the following methods:
+Because Azure public MEC doesn't support [default outbound access](../virtual-network/ip-services/default-outbound-access.md), manage your outbound connectivity by using one of the following methods:
- Use the frontend IP addresses of an Azure Load Balancer for outbound via outbound rules. - Assign an Azure public IP to the VM.
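As a hedged sketch of the second option (the resource group, NIC, and IP configuration names are placeholders), assigning a public IP to an existing VM's NIC looks like:

```shell
# Sketch: give a VM explicit outbound connectivity via a Standard public IP.
az network public-ip create \
  --resource-group myMecResourceGroup \
  --name myVmPublicIP \
  --sku Standard

# Attach the public IP to the VM's primary NIC IP configuration.
az network nic ip-config update \
  --resource-group myMecResourceGroup \
  --nic-name myVmNic \
  --name ipconfig1 \
  --public-ip-address myVmPublicIP
```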
By default, all services running in the Azure public MEC use the DNS infrastruct
To learn about considerations for deployment in the Azure public MEC, advance to the following article: > [!div class="nextstepaction"]
-> [Considerations for deployment in the Azure public MEC](considerations-for-deployment.md)
+> [Considerations for deployment in the Azure public MEC](considerations-for-deployment.md)
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Last updated : 03/17/2022 # Customer intent: As an Azure Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Azure Purview account.
Currently, the following data sources are supported to have a managed private en
Additionally, you can deploy managed private endpoints for your Azure Key Vault resources if you need to run scans using authentication options other than Managed Identities, such as SQL Authentication or Account Key.
+> [!IMPORTANT]
+> If you plan to scan Azure Synapse workspaces using Managed Virtual Network, you must also [configure Azure Synapse workspace firewall access](register-scan-synapse-workspace.md#set-up-azure-synapse-workspace-firewall-access) to enable **Allow Azure services and resources to access this workspace**. Currently, Azure Purview Studio doesn't support setting up scans for an Azure Synapse workspace unless **Allow Azure services and resources to access this workspace** is enabled. If you cannot enable the firewall:
+> - You can use [Azure Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
+> - You must use **SQL Authentication** as authentication mechanism.
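As a rough sketch of the REST fallback described above (the endpoint shape, `api-version`, and token resource URI are assumptions to verify against the linked Scans - Create Or Update reference; the scan body itself is deliberately left to an external file):

```shell
# Sketch only: account, data source, and scan names are placeholders.
ACCOUNT=mypurviewaccount
DATASOURCE=mySynapseSource
SCAN=myScan

# Acquire a token for the Purview data plane (resource URI is an assumption).
TOKEN=$(az account get-access-token \
  --resource https://purview.azure.net --query accessToken -o tsv)

# PUT the scan definition; fill in scan-definition.json per the REST reference,
# using SQL Authentication credentials as this article requires.
curl -X PUT \
  "https://${ACCOUNT}.purview.azure.com/scan/datasources/${DATASOURCE}/scans/${SCAN}?api-version=2022-02-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @scan-definition.json
```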
+ ### Managed Virtual Network A Managed Virtual Network in Azure Purview is a virtual network that is deployed and managed by Azure in the same region as the Azure Purview account. It allows Azure data sources to be scanned inside a managed network, without the customer having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
Integrate Azure Purview with Microsoft Sentinel to gain visibility into where on
Customize the Azure Purview workbook and analytics rules to best suit the needs of your organization, and combine Azure Purview logs with data ingested from other sources to create enriched insights within Microsoft Sentinel.
-For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](/azure/sentinel/purview-solution).
+For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](../sentinel/purview-solution.md).
## Next steps-- [Experiences in Microsoft Defender for Cloud enriched using sensitivity from Azure Purview](../security-center/information-protection.md)
+- [Experiences in Microsoft Defender for Cloud enriched using sensitivity from Azure Purview](../security-center/information-protection.md)
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
When setting up scan, you can choose to scan an entire Google BigQuery project,
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download and install BigQuery's JDBC driver on the machine where your self-hosted integration runtime is running. You can find the driver [here](https://cloud.google.com/bigquery/providers/simba-drivers).
+* Download and unzip BigQuery's JDBC driver on the machine where your self-hosted integration runtime is running. You can find the driver [here](https://cloud.google.com/bigquery/providers/simba-drivers).
> [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > The driver should be accessible to all accounts on the machine. Don't put it in a path under a user account.
## Register
Follow the steps below to scan a Google BigQuery project to automatically identi
* Select **Basic Authentication** as the Authentication method * Provide the email ID of the service account in the User name field. For example, `xyz\@developer.gserviceaccount.com`
- * Follow below steps to generate the private key, copy the JSON then store it as the value of a Key Vault secret.
+ * Follow the steps below to generate the private key, copy the entire JSON key file, and then store it as the value of a Key Vault secret.
To create a new private key from Google's cloud platform: 1. In the navigation menu, select IAM & Admin -\> Service Accounts -\> Select a project -\>
Follow the steps below to scan a Google BigQuery project to automatically identi
1. **Driver location**: Specify the path to the JDBC driver location on the machine where your self-hosted integration runtime is running. This should be the path to a valid JAR folder location. > [!Note]
- > The driver should be accessible to all accounts in the VM.Please do not install in a user account.
+ > The driver should be accessible to all accounts on the machine. Don't put it in a path under a user account.
1. **Dataset**: Specify a list of BigQuery datasets to import. For example, dataset1; dataset2. When the list is empty, all available datasets are imported.
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
> [!IMPORTANT] > Currently, we do not support setting up scans for an Azure Synapse workspace from Azure Purview Studio, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. In this case:
-> - You can use [Azure Purview Rest API - Scans - Create Or Update](/api/purview/scanningdataplane/scans/create-or-update) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
+> - You can use [Azure Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
> - You must use **SQL Auth** as authentication mechanism. ### Create and run scan
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
Last updated 03/15/2022
This article lists prerequisites that help you get started quickly on Azure Purview planning and deployment.
-|No. |Prerequisite / Action |Required Permission |Additional guidance and recommendations |
+|No. |Prerequisite / Action |Required permission |Additional guidance and recommendations |
|:|:|:|:| |1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required if you plan to [extend Microsoft 365 Sensitivity Labels to Azure Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> | |2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Azure Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. |
-|3 |Define whether you plan to deploy an Azure Purview with managed Event Hub | N/A |A managed Event Hub is created as part of Azure Purview account creation, see Azure Purview account creation. You can publish messages to the Event Hub kafka topic ATLAS_HOOK and Azure Purview will consume and process it. Azure Purview will notify entity changes to Event Hub kafka topic ATLAS_ENTITIES and user can consume and process it.This quickstart uses the new Azure.Messaging.EventHubs library. |
+|3 |Define whether you plan to deploy an Azure Purview with managed Event Hub | N/A |A managed Event Hub is created as part of Azure Purview account creation, see Azure Purview account creation. You can publish messages to the Event Hub kafka topic ATLAS_HOOK and Azure Purview will consume and process it. Azure Purview will notify entity changes to Event Hub kafka topic ATLAS_ENTITIES and user can consume and process it. |
|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure subscription that is designated for the Azure Purview account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). | |5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Azure Purview</li><li>Azure Storage</li><li>Azure Event Hub (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Azure Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Azure Purview accounts. | |6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Azure Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Azure Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> |
-|7 |An Azure Virtual Network and Subnet(s) for Azure Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to set up[private endpoint connectivity with Azure Purview](catalog-private-link.md): <ul><li>Private endpoints for **ingestion**.</li><li>Private endpoint for Azure Purview **Account**.</li><li>Private endpoint for Azure Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need to. |
-|8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up Private endpoints for each data source. |perform this step if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). |
+|7 |An Azure Virtual Network and Subnet(s) for Azure Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to deploy [private endpoint connectivity with Azure Purview](catalog-private-link.md): <ul><li>Private endpoints for **Ingestion**.</li><li>Private endpoint for Azure Purview **Account**.</li><li>Private endpoint for Azure Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need one. |
+|8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up private endpoints for each data source. |Perform this step, if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). |
|9 |Define whether to deploy new or use existing Azure Private DNS Zones. |Required [Azure Private DNS Zones](catalog-private-link-name-resolution.md) can be created automatically during Purview Account deployment using Subscription Owner / Contributor role |Use this step if you're planning to use Private Endpoint connectivity with Azure Purview. Required DNS Zones for Private Endpoint: <ul><li>privatelink.purview.azure.com</li><li>privatelink.purviewstudio.azure.com</li><li>privatelink.blob.core.windows.net</li><li>privatelink.queue.core.windows.net</li><li>privatelink.servicebus.windows.net</li></ul> |
-|10 |A management machine in your CorpNet or inside Azure VNet to launch Azure Purview Studio. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on you Azure Purview Account. |
+|10 |A management machine in your CorpNet or inside Azure VNet to launch Azure Purview Studio. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on your Azure Purview Account. |
|11 |Deploy an Azure Purview Account |Subscription Owner / Contributor |Purview account is deployed with 1 Capacity Unit and will scale up [on demand](concept-elastic-data-map.md). | |12 |Deploy a Managed Integration Runtime and Managed private endpoints for Azure data sources. |*Data source admin* to set up Managed VNet inside Azure Purview. <br> *Network Contributor* to approve managed private endpoint for each Azure data source. |Perform this step if you're planning to use [Managed VNet](catalog-managed-vnet.md) within your Azure Purview account for scanning purposes. |
-|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using Self-hosted Integration Runtime. |
+|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using [Self-hosted Integration Runtime](manage-integration-runtimes.md). |
|14 |Create a Self-hosted integration runtime inside Azure Purview. |Data curator <br> VM Administrator or application owner |Use this step if you're planning to use Self-hosted Integration Runtime instead of Managed Integration Runtime or Azure Integration Runtime. <br><br> [download](https://www.microsoft.com/en-us/download/details.aspx?id=39717) | |15 |Register your Self-hosted integration runtime | Virtual machine administrator |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you are using **Private Endpoint** to scan **any** data sources. |
-|16 |Grant Azure RBAC **Reader** role to **Azure Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register **multiple** or **any** of the following data sources: <ul><li>Azure Blob Storage</li><li>Azure Data Lake Storage Gen1</li><li>Azure Data Lake Storage Gen2</li><li>Azure SQL Database</li><li>Azure SQL Database Managed Instance</li><li>Azure Synapse Analytics</li></ul> |
-|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Azure Purview MSI** at data sources Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you are using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>Azure Blob Storage</li><li>Azure Data Lake Storage Gen1</li></ul> |
+|16 |Grant Azure RBAC **Reader** role to **Azure Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register [multiple](register-scan-azure-multiple-sources.md) or **any** of the following data sources: <ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md)</li><li>[Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</li><li>[Azure SQL Database](register-scan-azure-sql-database.md)</li><li>[Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)</li><li>[Azure Synapse Analytics](register-scan-synapse-workspace.md)</li></ul> |
+|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Azure Purview MSI** at data sources' Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you are using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li></ul> |
|18 |Enable network connectivity to allow AzureServices to access data sources: <br> e.g. Enable "**Allow trusted Microsoft services to access this storage account**". |*Owner* or *Contributor* at Data source |Use this step if **Service Endpoint** is used in your data sources. (Don't use this step if Private Endpoint is used) | |19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Database Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. | |20 |Grant **Azure Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Database Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
This article lists prerequisites that help you get started quickly on Azure Purv
|22 |Grant Azure RBAC **Reader** role to **Azure Purview MSI** at **Synapse workspace** resources |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. | |23 |Grant Azure **Purview MSI account** with **db_datareader** role |Azure SQL Administrator |Use this step if you have **Azure Synapse Analytics (Dedicated SQL databases)**. <br> **Skip** this step if you are using **Private Endpoint** to connect to data sources. | |24 |Grant **Azure Purview MSI** account with **sysadmin** role |Azure SQL Administrator |Use this step if you have Azure Synapse Analytics (Serverless SQL databases). **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|25 |Create an app registration or service principal inside your Azure Active Directory tenant | Azure Active Directory *Global Administrator* or *Application Administrator* | Use this step if you're planning to perform an scan on a data source using Delegated Auth or [Service Principal](create-service-principal-azure.md).|
+|25 |Create an app registration or service principal inside your Azure Active Directory tenant | Azure Active Directory *Global Administrator* or *Application Administrator* | Use this step if you're planning to perform a scan on a data source using Delegated Auth or [Service Principal](create-service-principal-azure.md).|
|26 |Create an **Azure Key Vault** and a **Secret** to save data source credentials or service principal secret. |*Contributor* or *Key Vault Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you are using **ingestion private endpoints** to scan a data source. | |27 |Grant Key **Vault Access Policy** to Azure Purview MSI: **Secret: get/list** |*Key Vault Administrator* |Use this step if you have **on-premises** / **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Vault Access Policy](../key-vault/general/assign-access-policy.md). | |28 |Grant **Key Vault RBAC role** Key Vault Secrets User to Azure Purview MSI. | *Owner* or *User Access Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Azure role-based access control](../key-vault/general/rbac-guide.md). |
-|29 | Create a new connection to Azure Key Vault from Azure Purview Studio | *Data source admin* | Use this step if you are planing to use any of the following authentication options to scan a data source in Azure Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
-|30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as data source and your Azure Purview Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). |
+|29 | Create a new connection to Azure Key Vault from Azure Purview Studio | *Data source admin* | Use this step if you are planning to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Azure Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
+|30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as data source and your Azure Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). |
|31 |Connect Azure Data Factory to Azure Purview from Azure Data Factory Portal. **Manage** -> **Azure Purview**. Select **Connect to a Purview account**. <br> Validate if Azure resource tag **catalogUri** exists in ADF Azure resource. |Azure Data Factory Contributor / Data curator |Use this step if you have **Azure Data Factory**. |
-|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Azure Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning in extending **Sensitivity Labels from Microsoft 365 to Azure Purview** <br> |
+|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Azure Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning on extending **Sensitivity Labels from Microsoft 365 to Azure Purview**. <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Azure Purview](sensitivity-labels-frequently-asked-questions.yml). |
|33 |Consent "**Extend labeling to assets in Azure Purview**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you are interested in extending **Sensitivity Labels** from Microsoft 365 to Azure Purview. | |34 |Create new collections and assign roles in Azure Purview |*Collection admin* | [Create a collection and assign permissions in Azure Purview](quickstart-create-collection.md). | |36 |Register and scan Data Sources in Azure Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md). | |35 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Azure Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Azure Purview](catalog-permissions.md). | ## Next steps-- [Review Azure Purview deployment best practices](./deployment-best-practices.md)
+- [Review Azure Purview deployment best practices](./deployment-best-practices.md)
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The limit for Azure Purview policies that can be enforced by Storage accounts is
Check blog, demo and related tutorials * [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
-* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+* [Demo of data owner access policies for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
This section contains a reference of how actions in Azure Purview data policies
Check blog, demo and related tutorials * [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-* [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954) * [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
-* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
+* [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
Previously updated : 12/14/2020 Last updated : 03/16/2022 # Simple query syntax in Azure Cognitive Search
-Azure Cognitive Search implements two Lucene-based query languages: [Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) and the [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html). The simple parser is more flexible and will attempt to interpret a request even if it's not perfectly composed. Because of this flexibility, it is the default for queries in Azure Cognitive Search.
+Azure Cognitive Search implements two Lucene-based query languages: [Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) and the [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html). The simple parser is more flexible and will attempt to interpret a request even if it's not perfectly composed. Because it's flexible, it's the default for queries in Azure Cognitive Search.
-The simple syntax is used for query expressions passed in the **`search`** parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request, not to be confused with the [OData syntax](query-odata-filter-orderby-syntax.md) used for the [**`$filter`**](search-filters.md) and [**`$orderby`**](search-query-odata-orderby.md) expressions in the same request. OData parameters have different syntax and rules for constructing queries, escaping strings, and so on.
+The simple syntax is used for query expressions passed in the "search" parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request, not to be confused with the [OData syntax](query-odata-filter-orderby-syntax.md) used for the ["$filter"](search-filters.md) and ["$orderby"](search-query-odata-orderby.md) expressions in the same request. OData parameters have different syntax and rules for constructing queries, escaping strings, and so on.
-Although the simple parser is based on the [Apache Lucene Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) class, the implementation in Cognitive Search excludes fuzzy search. If you need [fuzzy search](search-query-fuzzy.md), consider the alternative [full Lucene query syntax](query-lucene-syntax.md) instead.
+Although the simple parser is based on the [Apache Lucene Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) class, its implementation in Cognitive Search excludes fuzzy search. If you need [fuzzy search](search-query-fuzzy.md), consider the alternative [full Lucene query syntax](query-lucene-syntax.md) instead.
## Example (simple syntax)
-Although **`queryType`** is set below, it's the default and can be omitted unless you are reverting from an alternative type. The following example is a search over independent terms, with a requirement that all matching documents include "pool".
+This example shows a simple query, distinguished by `"queryType": "simple"` and valid syntax. Although query type is set below, it's the default and can be omitted unless you are reverting from an alternative type. The following example is a search over independent terms, with a requirement that all matching documents include "pool".
```http POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs
} ```
-The **`searchMode`** parameter is relevant in this example. Whenever boolean operators are on the query, you should generally set `searchMode=all` to ensure that *all* of the criteria is matched. Otherwise, you can use the default `searchMode=any` that favors recall over precision.
+The "searchMode" parameter is relevant in this example. Whenever boolean operators are on the query, you should generally set `"searchMode=all"` to ensure that *all* of the criteria is matched. Otherwise, you can use the default `"searchMode=any"` that favors recall over precision.
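As a sketch of the difference (reusing the hotel-rooms-sample index from the example above), the same boolean query can be sent with "searchMode" set explicitly; under `all`, both criteria must be satisfied, while the default `any` matches either one:

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
{
    "search": "restaurant +pool",
    "queryType": "simple",
    "searchMode": "all"
}
```

Changing `"searchMode"` to `"any"` in the same request body typically broadens the result set, because a document then only needs to match one of the terms.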
For additional examples, see [Simple query syntax examples](search-query-simple-examples.md). For details about the query request and parameters, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents). ## Keyword search on terms and phrases
-Strings passed to the **`search`** parameter can include terms or phrases in any supported language, boolean operators, precedence operators, wildcard or prefix characters for "starts with" queries, escape characters, and URL encoding characters. The **`search`** parameter is optional. Unspecified, search (`search=*` or `search=" "`) returns the top 50 documents in arbitrary (unranked) order.
+Strings passed to the "search" parameter can include terms or phrases in any supported language, boolean operators, precedence operators, wildcard or prefix characters for "starts with" queries, escape characters, and URL encoding characters. The "search" parameter is optional. Unspecified, search (`search=*` or `search=" "`) returns the top 50 documents in arbitrary (unranked) order.
+ A *term search* is a query of one or more terms, where any of the terms are considered a match.
Strings passed to the **`search`** parameter can include terms or phrases in any
Depending on your search client, you might need to escape the quotation marks in a phrase search. For example, in Postman in a POST request, a phrase search on `"Roach Motel"` in the request body would be specified as `"\"Roach Motel\""`.
-By default, all terms or phrases passed in the **`search`** parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you are using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time.
+By default, all strings passed in the "search" parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you are using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time. You can [test tokenization on specific strings](/rest/api/searchservice/test-analyzer) to confirm the output.
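For instance, a request to the Analyze Text API like the following (a sketch; the index name is reused from the earlier example and the analyzer choice is illustrative) returns the tokens the analyzer produces for a string, which you can compare against your query terms:

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/analyze?api-version=2020-06-30
{
    "text": "Roach Motel",
    "analyzer": "standard"
}
```

The response lists each token with its position and offsets, making it easier to see why a given query term did or didn't match.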
-Any text with one or more terms is considered a valid starting point for query execution. Azure Cognitive Search will match documents containing any or all of the terms, including any variations found during analysis of the text.
+Any text input with one or more terms is considered a valid starting point for query execution. Azure Cognitive Search will match documents containing any or all of the terms, including any variations found during analysis of the text.
-As straightforward as this sounds, there is one aspect of query execution in Azure Cognitive Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a **`searchMode`** parameter setting that determines how NOT is interpreted in terms of AND or OR behaviors. For more information, see the NOT operator under [Boolean operators](#boolean-operators).
+As straightforward as this sounds, there is one aspect of query execution in Azure Cognitive Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a "searchMode" parameter setting that determines how NOT is interpreted in terms of AND or OR behaviors. For more information, see the NOT operator under [Boolean operators](#boolean-operators).
## Boolean operators
You can embed Boolean operators in a query string to improve the precision of a
|-- |--|-| | `+` | `pool + ocean` | An AND operation. For example, `pool + ocean` stipulates that a document must contain both terms.| | `|` | `pool | ocean` | An OR operation finds a match when either term is found. In the example, the query engine will return match on documents containing either `pool` or `ocean` or both. Because OR is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
| `-` | `pool - ocean` | A NOT operation returns matches on documents that exclude the term. <br/><br/>To get the expected behavior on a NOT expression, consider setting **`searchMode=all`** on the request. Otherwise, under the default of **`searchMode=any`**, you will get matches on `pool`, plus matches on all documents that do not contain `ocean`, which could be a lot of documents. The **`searchMode`** parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there is no `+` or `|` operator on the other terms). Using **`searchMode=all`** increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". <br/><br/>When deciding on a **`searchMode`** setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| `-` | `pool - ocean` | A NOT operation returns matches on documents that exclude the term. </p>To get the expected behavior on a NOT expression, consider setting `"searchMode=all"` on the request. Otherwise, under the default of `"searchMode=any"`, you will get matches on `pool`, plus matches on all documents that do not contain `ocean`, which could be a lot of documents. The "searchMode" parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there is no `+` or `|` operator on the other terms). Using `"searchMode=all"` increases the precision of queries by including fewer results, and by default - will be interpreted as "AND NOT". </p>When deciding on a "searchMode" setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
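The NOT behavior in the table above can be sketched as a request (index name reused from the earlier example); with searchMode set to `all`, the expression is interpreted as "pool AND NOT ocean" rather than "pool OR (NOT ocean)":

```http
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs/search?api-version=2020-06-30
{
    "search": "pool - ocean",
    "queryType": "simple",
    "searchMode": "all"
}
```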
<a name="prefix-search"></a>
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
A second service is not required for high availability. High availability for qu
Cognitive Search restricts the [number of resources](search-limits-quotas-capacity.md#subscription-limits) you can initially create in a subscription. If you exhaust your maximum limit, file a new support request to add more search services.
-1. Sign in to the Azure portal, and find your search service.
+1. Sign in to the Azure portal and find your search service.
+ 1. On the left-navigation pane, scroll down and select **New Support Request.**
-1. For **issue type**, choose **Service and subscription limits (quotas).**
+
+1. In **Issue type**, choose **Service and subscription limits (quotas).**
+ 1. Select the subscription that needs more quota.
-1. Under **Quota type**, select **Search**. Then select **Next**.
+
+1. Under **Quota type**, select **Search** and then select **Next**.
+ 1. In the **Problem details** section, select **Enter details**.
-1. Follow the prompts to select location and tier.
-1. Add the new limit you would like on the subscription. The value must not be empty and must between 0 to 100.
- For example: The maximum number of S2 services is 8 and you would like to have 12 services, then request to add 4 of S2 services."
+
+1. Follow the prompts to select the location and tier for which you want to increase the limit.
+
+1. Add the number of new services you would like to add to your quota. The value must not be empty and must be between 0 and 100. For example, the maximum number of S2 services is 8. If you want 12 services, you would request 4 more S2 services.
+ 1. When you're finished, select **Save and continue** to continue creating your support request.
-1. Complete the rest of the additional information requested, and then select **Next**.
-1. On the **review + create** screen, review the details that you'll send to support, and then select **Create**.
+
+1. Provide the additional information required to file the request, and then select **Next**.
+
+1. On **Review + create**, review the details that you'll send to support, and then select **Create**.
## Next steps
search Search Howto Index Json Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-json-blobs.md
- Previously updated : 02/01/2021+ Last updated : 03/16/2022
-# How to index JSON blobs and files in Azure Cognitive Search
+# Index JSON blobs and files in Azure Cognitive Search
**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
This article shows you how to set JSON-specific properties for blobs or files th
+ A JSON document containing an array of well-formed JSON elements + A JSON document containing multiple entities, separated by a newline
-The blob indexer provides a **`parsingMode`** parameter to optimize the output of the search document based on the structure Parsing modes consist of the following options:
+The blob indexer provides a "parsingMode" parameter to optimize the output of the search document based on the structure. Parsing modes consist of the following options:
| parsingMode | JSON document | Description | |--|-|--|
The blob indexer provides a **`parsingMode`** parameter to optimize the output o
For both **`jsonArray`** and **`jsonLines`**, you should review [Indexing one blob to produce many search documents](search-howto-index-one-to-many-blobs.md) to understand how the blob indexer handles disambiguation of the document key for multiple search documents produced from the same blob.
-Within the indexer definition, you can optionally set [field mappings](search-indexer-field-mappings.md) to choose which properties of the source JSON document are used to populate your target search index. For example, when using the **`jsonArray`** parsing mode, if the array exists as a lower-level property, you can set a **`document root`** property indicating where the array is placed within the blob.
+Within the indexer definition, you can optionally set [field mappings](search-indexer-field-mappings.md) to choose which properties of the source JSON document are used to populate your target search index. For example, when using the **`jsonArray`** parsing mode, if the array exists as a lower-level property, you can set a "documentRoot" property indicating where the array is placed within the blob.
-The following sections describe each mode in more detail. If you are unfamiliar with indexer clients and concepts, see [Create a search indexer](search-howto-create-indexers.md). You should also be familiar with the details of [basic blob indexer configuration](search-howto-indexing-azure-blob-storage.md), which isn't repeated here.
+The following sections describe each mode in more detail. If you're unfamiliar with indexer clients and concepts, see [Create a search indexer](search-howto-create-indexers.md). You should also be familiar with the details of [basic blob indexer configuration](search-howto-indexing-azure-blob-storage.md), which isn't repeated here.
<a name="parsing-single-blobs"></a>
api-key: [admin key]
### json example (single hotel JSON files)
-The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
+The [hotel JSON document data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels/hotel-json-documents) on GitHub is helpful for testing JSON parsing, where each blob represents a structured JSON file. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
The data set consists of five blobs, each containing a hotel document with an address collection and a rooms collection. The blob indexer detects both collections and reflects the structure of the input documents in the index schema.
Alternatively, you can use the JSON array option. This option is useful when blo
] ```
-The **`parameters`** property on the indexer contains parsing mode values. For a JSON array, the indexer definition should look similar to the following example.
+The "parameters" property on the indexer contains parsing mode values. For a JSON array, the indexer definition should look similar to the following example.
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
api-key: [admin key]
### jsonArrays example (clinical trials sample data)
-The [clinical trials JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-json) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
+The [clinical trials JSON data set](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/clinical-trials/clinical-trials-json) on GitHub is helpful for testing JSON array parsing. You can upload the data files to Blob storage and use the **Import data** wizard to quickly evaluate how this content is parsed into individual search documents.
The data set consists of eight blobs, each containing a JSON array of entities, for a total of 100 entities. The entities vary as to which fields are populated, but the end result is one search document per entity, from all arrays, in all blobs.
The data set consists of eight blobs, each containing a JSON array of entities,
### Parsing nested JSON arrays
-For JSON arrays having nested elements, you can specify a **`documentRoot`** to indicate a multi-level structure. For example, if your blobs look like this:
+For JSON arrays having nested elements, you can specify a "documentRoot" to indicate a multi-level structure. For example, if your blobs look like this:
```http {
You can also refer to individual array elements by using a zero-based index. For
``` > [!NOTE]
-> If **`sourceFieldName`** refers to a property that doesn't exist in the JSON blob, that mapping is skipped without an error. This behavior allows indexing to continue for JSON blobs that have a different schema (which is a common use case). Because there is no validation check, check the mappings carefully for typos so that you aren't losing documents for the wrong reason.
+> If "sourceFieldName" refers to a property that doesn't exist in the JSON blob, that mapping is skipped without an error. This behavior allows indexing to continue for JSON blobs that have a different schema (which is a common use case). Because there is no validation check, check the mappings carefully for typos so that you aren't losing documents for the wrong reason.
> ## Next steps
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 02/28/2022 Last updated : 03/16/2022 # Service limits in Azure Cognitive Search
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
- Previously updated : 02/23/2021
+ Last updated : 03/23/2022

# Text normalization for case-insensitive filtering, faceting and sorting
> [!IMPORTANT]
> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
-Searching and retrieving documents from an Azure Cognitive Search index requires matching the query to the contents of the document. The content can be analyzed to produce tokens for matching as is the case when `search` parameter is used, or can be used as-is for strict keyword matching as seen with `$filter`, `facets`, and `$orderby`. This all-or-nothing approach covers most scenarios but falls short where simple pre-processing like casing, accent removal, asciifolding and so forth is required without undergoing through the entire analysis chain.
+In Azure Cognitive Search, a *normalizer* is a component of the search engine responsible for pre-processing text for keyword matching in filters, facets, and sorts. Normalizers behave similar to [analyzers](search-analyzers.md) in how they process text, except they don't tokenize the query. Some of the transformations that can be achieved using normalizers are:
-Consider the following examples:
++ Convert to lowercase or upper-case
++ Normalize accents and diacritics like ö or ê to ASCII equivalent characters "o" and "e"
++ Map characters like `-` and whitespace into a user-specified character
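The transformations in this list can be approximated with a short Python sketch. This is only an illustration of the behavior; the service applies these via its char and token filters, and the `char_map` parameter here is a made-up stand-in for the mapping char filter:

```python
import unicodedata

def normalize(text, char_map=None):
    """Rough emulation of normalizer transforms: character mapping,
    accent/diacritic folding to ASCII, then lowercasing."""
    if char_map:
        for src, dst in char_map.items():
            text = text.replace(src, dst)
    # fold accents and diacritics (e.g., ö -> o, é -> e) to ASCII equivalents
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    return text.lower()

print(normalize("Öl-Café", char_map={"-": "_", " ": "_"}))  # ol_cafe
print(normalize("LAS VEGAS"))                               # las vegas
```

Note the whole string comes back as a single value rather than a list of tokens, mirroring how normalizers differ from analyzers.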
-+ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text "Las Vegas" and exclude documents with "LAS VEGAS" and "las vegas" which is inadequate when the use-case requires all documents regardless of the casing.
+Normalizers are specified on string fields in the index and are applied during indexing and query execution.
-+ `search=*&facet=City,count:5` will return "Las Vegas", "LAS VEGAS" and "las vegas" as distinct values despite being the same city.
+## Benefits of normalizers
-+ `search=usa&$orderby=City` will return the cities in lexicographical order: "Las Vegas", "Seattle", "las vegas", even if the intent is to order the same cities together irrespective of the case.
+Searching and retrieving documents from a search index requires matching the query to the contents of the document. The content can be analyzed to produce tokens for matching, as is the case when the "search" parameter is used, or can be used as-is for strict keyword matching, as seen with "$filter", "facets", and "$orderby". This all-or-nothing approach covers most scenarios but falls short where simple pre-processing like casing, accent removal, or asciifolding is required without going through the entire analysis chain.
-## Normalizers
+Consider the following examples:
-A *normalizer* is a component of the search engine responsible for pre-processing text for keyword matching. Normalizers are similar to analyzers except they do not tokenize the query. Some of the transformations that can be achieved using normalizers are:
++ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text "Las Vegas" and exclude documents with "LAS VEGAS" and "las vegas" which is inadequate when the use-case requires all documents regardless of the casing.
-+ Convert to lowercase or upper-case.
-+ Normalize accents and diacritics like ö or ê to ASCII equivalent characters "o" and "e".
-+ Map characters like `-` and whitespace into a user-specified character.
++ `search=*&facet=City,count:5` will return "Las Vegas", "LAS VEGAS" and "las vegas" as distinct values despite being the same city.
-Normalizers can be specified on text fields in the index and is applied both at indexing and query execution.
++ `search=usa&$orderby=City` will return the cities in lexicographical order: "Las Vegas", "Seattle", "las vegas", even if the intent is to order the same cities together irrespective of the case.

## Predefined and custom normalizers
-Azure Cognitive Search supports predefined normalizers for common use-cases along with the capability to customize as required.
+Azure Cognitive Search provides built-in normalizers for common use-cases along with the capability to customize as required.
| Category | Description |
|-|-|
| [Predefined normalizers](#predefined-normalizers) | Provided out-of-the-box and can be used without any configuration. |
-|[Custom normalizers](#add-custom-normalizers) | For advanced scenarios. Requires user-defined configuration of a combination of existing elements, consisting of char and token filters.<sup>1</sup>|
+|[Custom normalizers](#add-custom-normalizers) <sup>1</sup> | For advanced scenarios. Requires user-defined configuration of a combination of existing elements, consisting of char and token filters.|
-<sup>(1)</sup> Custom normalizers do not specify tokenizers since normalizers always produce a single token.
+<sup>(1)</sup> Custom normalizers don't specify tokenizers since normalizers always produce a single token.
## How to specify normalizers
-Normalizers can be specified per-field on text fields (`Edm.String` and `Collection(Edm.String)`) that have at least one of `filterable`, `sortable`, or `facetable` properties set to true. Setting a normalizer is optional and it's `null` by default. We recommended evaluating predefined normalizers before configuring a custom one for ease of use. Try a different normalizer if results are not expected.
+Normalizers are specified in an index definition, on a per-field basis, on text fields (`Edm.String` and `Collection(Edm.String)`) that have at least one of "filterable", "sortable", or "facetable" properties set to true. Setting a normalizer is optional and it's null by default. We recommend evaluating predefined normalizers before configuring a custom one.
-Normalizers can only be specified when a new field is added to the index. It's encouraged to assess the normalization needs upfront and assign normalizers in the initial stages of development when dropping and recreating indexes is routine. Normalizers cannot be specified on a field that has already been created.
+Normalizers can only be specified when a new field is added to the index. Try to assess the normalization needs upfront and assign normalizers in the initial stages of development when dropping and recreating indexes is routine. Normalizers can't be specified on a field that has already been created.
-1. When creating a field definition in the [index](/rest/api/searchservice/create-index), set the **normalizer** property to one of the following: a [predefined normalizer](#predefined-normalizers) such as `lowercase`, or a custom normalizer (defined in the same index schema).
+1. When creating a field definition in the [index](/rest/api/searchservice/create-index), set the "normalizer" property to one of the following: a [predefined normalizer](#predefined-normalizers) such as "lowercase", or a custom normalizer (defined in the same index schema).
```json "fields": [
Normalizers can only be specified when a new field is added to the index. It's e
}, ```
-2. Custom normalizers have to be defined in the **[normalizers]** section of the index first, and then be assigned to the field definition as shown in the previous step. For more information, see [Create Index](/rest/api/searchservice/create-index) and also [Add custom normalizers](#add-custom-normalizers).
+1. Custom normalizers are defined in the "normalizers" section of the index first, and then assigned to the field definition as shown in the previous step. For more information, see [Create Index](/rest/api/searchservice/create-index) and also [Add custom normalizers](#add-custom-normalizers).
```json
Normalizers can only be specified when a new field is added to the index. It's e
      "normalizer": "my_custom_normalizer"
   },
   ```

> [!NOTE]
> To change the normalizer of an existing field, you'll have to rebuild the index entirely (you cannot rebuild individual fields).
A good workaround for production indexes, where rebuilding indexes is costly, is
## Add custom normalizers
-Custom normalizers are defined within the index schema and can be specified using the field property. The definition of custom normalizer includes a name, a type, one or more char filters and token filters. The char filters and token filters are the building blocks for a custom normalizer and responsible for the processing of the text.These filters are applied from left to right.
-
- The `token_filter_name_1` is the name of token filter, and `char_filter_name_1` and `char_filter_name_2` are the names of char filters (see [Supported token filters](#supported-token-filters) and Char filters tables below for valid values).
+Custom normalizers are [defined within the index schema](/rest/api/searchservice/create-index). The definition includes a name, a type, one or more character filters and token filters. The character filters and token filters are the building blocks for a custom normalizer and responsible for the processing of the text. These filters are applied from left to right.
-The normalizer definition is a part of the larger index. See [Create Index API](/rest/api/searchservice/create-index) for information about the rest of the index.
+ `token_filter_name_1` is the name of a token filter, and `char_filter_name_1` and `char_filter_name_2` are the names of char filters (see the [supported token filters](#supported-token-filters) and [supported char filters](#supported-char-filters) tables below for valid values).
-```
+```json
"normalizers":(optional)[
   {
      "name":"name of normalizer",
The normalizer definition is a part of the larger index. See [Create Index API](
] ```
-Custom normalizers can be added during index creation or later by updating an existing one. Adding a custom normalizer to an existing index requires the **allowIndexDowntime** flag to be specified in [Update Index](/rest/api/searchservice/update-index) and will cause the index to be unavailable for a few seconds.
+Custom normalizers can be added during index creation or later by updating an existing one. Adding a custom normalizer to an existing index requires the "allowIndexDowntime" flag to be specified in [Update Index](/rest/api/searchservice/update-index) and will cause the index to be unavailable for a few seconds.
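As a sketch of what that update request looks like, the snippet below assembles the Update Index URL with the "allowIndexDowntime" flag. The service name, index name, and api-version are placeholders, not values from this article:

```python
# Hypothetical Update Index call that adds a custom normalizer to an
# existing index; the index definition itself would go in the PUT body.
service, index, api_version = "myservice", "hotels", "2020-06-30"

url = (
    f"https://{service}.search.windows.net/indexes/{index}"
    f"?api-version={api_version}&allowIndexDowntime=true"
)
print(url)
# https://myservice.search.windows.net/indexes/hotels?api-version=2020-06-30&allowIndexDowntime=true
```

Without `allowIndexDowntime=true`, an update that adds analyzers or normalizers is rejected, because applying it takes the index briefly offline.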
## Normalizers reference

### Predefined normalizers

|**Name**|**Description and Options**|
|-|-|
|standard| Lowercases the text followed by asciifolding.|
|lowercase| Transforms characters to lowercase.|
|uppercase| Transforms characters to uppercase.|
-|asciifolding| Transforms characters that are not in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing à to a.|
+|asciifolding| Transforms characters that aren't in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing à to a.|
|elision| Removes elision from beginning of the tokens.|

### Supported char filters
-For more details on the char filters, refer to [Char Filters Reference](index-add-custom-analyzers.md#CharFilter).
-+ [mapping](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html)
+
+Normalizers support two character filters that are identical to their counterparts in [custom analyzer character filters](index-add-custom-analyzers.md#CharFilter):
++ [mapping](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html)
+ [pattern_replace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html)

### Supported token filters
-The list below shows the token filters supported for normalizers and is a subset of the overall token filters involved in the lexical analysis. For more details on the filters, refer to [Token Filters Reference](index-add-custom-analyzers.md#TokenFilters).
+
+The list below shows the token filters supported for normalizers and is a subset of the overall [token filters used in custom analyzers](index-add-custom-analyzers.md#TokenFilters).
+ [arabic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html)
+ [asciifolding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html)
The list below shows the token filters supported for normalizers and is a subset
+ [lowercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html)
+ [uppercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html)

## Custom normalizer example
-The example below illustrates a custom normalizer definition with corresponding char filters and token filters. Custom options for char filters and token filters are specified separately as named constructs, and then referenced in the normalizer definition as illustrated below.
+The example below illustrates a custom normalizer definition with corresponding character filters and token filters. Custom options for character filters and token filters are specified separately as named constructs, and then referenced in the normalizer definition as illustrated below.
+
+* A custom normalizer named "my_custom_normalizer" is defined in the "normalizers" section of the index definition.
-* A custom normalizer "my_custom_normalizer" is defined in the `normalizers` section of the index definition.
-* The normalizer is composed of two char filters and three token filters: elision, lowercase, and customized asciifolding filter "my_asciifolding".
-* The first char filter "map_dash" replaces all dashes with underscores while the second one "remove_whitespace" removes all spaces.
+* The normalizer is composed of two character filters and three token filters: elision, lowercase, and customized asciifolding filter "my_asciifolding".
+
+* The first character filter "map_dash" replaces all dashes with underscores while the second one "remove_whitespace" removes all spaces.
```json
{
The example below illustrates a custom normalizer definition with corresponding
```

## See also

+ [Analyzers for linguistic and text processing](search-analyzers.md)
+ [Search Documents REST API](/rest/api/searchservice/search-documents)
search Search Query Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-create.md
- Previously updated : 02/03/2021
+ Last updated : 03/16/2022

# Creating queries in Azure Cognitive Search
-If you are building a query for the first time, this article describes approaches and methods for setting up queries. It also introduces a query request, and explains how field attributes and linguistic analyzers can impact query outcomes.
+If you are building a query for the first time, this article describes approaches and methods for setting up the request. It also introduces a query structure, and explains how field attributes and linguistic analyzers can impact query outcomes.
## What's a query request?
For Cognitive Search, the Azure SDKs implement generally available features. As
## Choose a query type: simple | full
-If your query is full text search, a query parser will be used to process any text that's passed as search terms and phrases.Azure Cognitive Search offers two query parsers.
+If your query is full text search, a query parser will be used to process any text that's passed as search terms and phrases. Azure Cognitive Search offers two query parsers.
+ The simple parser understands the [simple query syntax](query-simple-syntax.md). This parser was selected as the default for its speed and effectiveness in free form text queries. The syntax supports common search operators (AND, OR, NOT) for term and phrase searches, and prefix (`*`) search (as in "sea*" for Seattle and Seaside). A general recommendation is to try the simple parser first, and then move on to full parser if application requirements call for more powerful queries.
Search is fundamentally a user-driven exercise, where terms or phrases are colle
| Input | Experience | |-||
-| [Search method](/rest/api/searchservice/search-documents) | A user types terms or phrases into a search box, with or without operators, and clicks Search to send the request. Search can be used with filters on the same request, but not with autocomplete or suggestions. |
+| [Search method](/rest/api/searchservice/search-documents) | A user types the terms or phrases into a search box, with or without operators, and clicks Search to send the request. Search can be used with filters on the same request, but not with autocomplete or suggestions. |
| [Autocomplete method](/rest/api/searchservice/autocomplete) | A user types a few characters, and queries are initiated after each new character is typed. The response is a completed string from the index. If the string provided is valid, the user clicks Search to send that query to the service. |
| [Suggestions method](/rest/api/searchservice/suggestions) | As with autocomplete, a user types a few characters and incremental queries are generated. The response is a dropdown list of matching documents, typically represented by a few unique or descriptive fields. If any of the selections are valid, the user clicks one and the matching document is returned. |
| [Faceted navigation](/rest/api/searchservice/search-documents#query-parameters) | A page shows clickable navigation links or breadcrumbs that narrow the scope of the search. A faceted navigation structure is composed dynamically based on an initial query. For example, `search=*` to populate a faceted navigation tree composed of every possible category. A faceted navigation structure is created from a query response, but it's also a mechanism for expressing the next query. In REST API reference, `facets` is documented as a query parameter of a Search Documents operation, but it can be used without the `search` parameter.|
| [Filter method](/rest/api/searchservice/search-documents#query-parameters) | Filters are used with facets to narrow results. You can also implement a filter behind the page, for example to initialize the page with language-specific fields. In REST API reference, `$filter` is documented as a query parameter of a Search Documents operation, but it can be used without the `search` parameter.|
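A single Search Documents request can combine several of the inputs above. The sketch below assembles a hypothetical POST request body with a search string, a facet, a filter, and a sort order; the field names ("Category", "Rating") are illustrative, not prescribed by this article:

```python
# Hypothetical Search Documents (POST) request body combining full text
# search with a facet, a filter, and a sort order.
query = {
    "search": "beach access",
    "facets": ["Category,count:5"],   # facet on Category, top 5 values
    "filter": "Rating ge 4",          # OData filter expression
    "orderby": "Rating desc",
}
print(query["filter"])  # Rating ge 4
```

In a POST body, parameter names drop the `$` prefix used on the GET query string (`filter` rather than `$filter`).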
-## Know your field attributes
+## Effect of field attributes on queries
-If you previously reviewed [query types and composition](search-query-overview.md), you might remember that the parameters on the query request depend on how fields are attributed in an index. For example, to be used in a query, filter, or sort order, a field must be *searchable*, *filterable*, and *sortable*. Similarly, only fields marked as *retrievable* can appear in results. As you begin to specify the `search`, `filter`, and `orderby` parameters in your request, be sure to check attributes as you go to avoid unexpected results.
+If you're familiar with [query types and composition](search-query-overview.md), you might remember that the parameters on a query request depend on field attributes in an index. For example, only fields marked as *searchable* and *retrievable* can be used in queries and search results. When setting the `search`, `filter`, and `orderby` parameters in your request, you should check attributes to avoid unexpected results.
In the portal screenshot below of the [hotels sample index](search-get-started-portal.md), only the last two fields "LastRenovationDate" and "Rating" can be used in an `$orderby` clause.

![Index definition for the hotel sample](./media/search-query-overview/hotel-sample-index-definition.png "Index definition for the hotel sample")
-For a description of field attributes, see [Create Index (REST API)](/rest/api/searchservice/create-index).
+For field attribute definitions, see [Create Index (REST API)](/rest/api/searchservice/create-index).
-## Know your tokens
+## Effect of tokens on queries
-During indexing, the search engine uses an analyzer to perform text analysis on strings, maximizing the potential for matching at query time. At a minimum, strings are lower-cased, but might also undergo lemmatization and stop word removal. Larger strings or compound words are typically broken up by whitespace, hyphens, or dashes, and indexed as separate tokens.
+During indexing, the search engine uses a text analyzer on strings to maximize the potential for finding a match at query time. At a minimum, strings are lower-cased, but depending on the analyzer, might also undergo lemmatization and stop word removal. Larger strings or compound words are typically broken up by whitespace, hyphens, or dashes, and indexed as separate tokens.
The point to take away here is that what you think your index contains, and what's actually in it, can be different. If queries do not return expected results, you can inspect the tokens created by the analyzer through the [Analyze Text (REST API)](/rest/api/searchservice/test-analyzer). For more information about tokenization and the impact on queries, see [Partial term search and patterns with special characters](search-query-partial-matching.md).
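To make the gap between source text and indexed tokens concrete, here's a crude approximation of the lowercasing and whitespace/hyphen splitting described above. It's not the service's analyzer; use the Analyze Text API to see the actual tokens:

```python
import re

def approximate_tokens(text):
    """Very rough stand-in for a default analyzer: lowercase, then
    split on whitespace and hyphens, dropping empty pieces."""
    return [t for t in re.split(r"[\s\-]+", text.lower()) if t]

print(approximate_tokens("Wi-Fi enabled Rooms"))
# ['wi', 'fi', 'enabled', 'rooms'] -- a query for the literal "Wi-Fi" token won't exist
```

The compound "Wi-Fi" ends up as two tokens, which is exactly the kind of surprise the Analyze Text API helps you diagnose.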
search Search Semi Structured Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-semi-structured-data.md
- Previously updated : 01/25/2021
+ Last updated : 03/16/2022

#Customer intent: As a developer, I want an introduction to indexing Azure blob data for Azure Cognitive Search.
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
- Previously updated : 01/02/2021
+ Last updated : 03/16/2022

# Return a semantic answer in Azure Cognitive Search

> [!IMPORTANT]
-> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. These features are billable. For more information about, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. This feature is billable (see [Availability and pricing](semantic-search-overview.md#availability-and-pricing)).
When invoking [semantic ranking and captions](semantic-how-to-query-request.md), you can optionally extract content from the top-matching documents that "answers" the query directly. One or more answers can be included in the response, which you can then render on a search page to improve the user experience of your app.
All prerequisites that apply to [semantic queries](semantic-how-to-query-request
+ Query strings entered by the user must be recognizable as a question (what, where, when, how).
-+ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ..." , then it's unlikely an answer will be returned.
++ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned.

## What is a semantic answer?
-A semantic answer is a substructure of a [semantic query response](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. For an answer to be returned, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
+A semantic answer is a substructure of a [semantic query response](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. To return an answer, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
Cognitive Search uses a machine reading comprehension model to pick the best answer. The model produces a set of potential answers from the available content, and when it reaches a high enough confidence level, it will propose one as an answer.
Answers are returned as an independent, top-level object in the query response p
## Formulate a REST query for "answers"
-The approach for listing fields in priority order has changed recently, with "semanticConfiguration" replacing "searchFields". If you are currently using searchFields, update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
+The approach for listing fields in priority order has changed recently, with "semanticConfiguration" replacing "searchFields". If you're currently using searchFields, update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "semanticConfiguration", and the "answers" parameter. Specifying the "answers" parameter does not guarantee that you will get an answer, but the request must include this parameter if answer processing is to be invoked at all.
+To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "semanticConfiguration", and the "answers" parameters. Specifying these parameters doesn't guarantee an answer, but the request must include them for answer processing to occur.
The "semanticConfiguration" parameter is crucial to returning a high-quality answer.
The "semanticConfiguration" parameter is crucial to returning a high-quality ans
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. For precise guidance on how to create an effective semantic configuration, see [Create a semantic configuration](semantic-how-to-query-request.md#searchfields).
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#searchfields) for details.
-+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of ten. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
++ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.

### [**searchFields**](#tab/searchFields)
-To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "searchFields", and the "answers" parameter. Specifying the "answers" parameter does not guarantee that you will get an answer, but the request must include this parameter if answer processing is to be invoked at all.
+To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "searchFields", and the "answers" parameter. Specifying the "answers" parameter doesn't guarantee that you'll get an answer, but the request must include this parameter if answer processing is to be invoked at all.
The "searchFields" parameter is crucial to returning a high-quality answer, both in terms of content and order (see below).
The "searchFields" parameter is crucial to returning a high-quality answer, both
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. For precise guidance on how to set this field so that it works for both captions and answers, see [Set searchFields](semantic-how-to-query-request.md#searchfields).
++ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#searchfields) for details.
-+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of ten. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
++ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
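The parameters listed above can be assembled into a request body like the sketch below. The field names in "searchFields" are placeholders for whatever string fields your index actually has:

```python
# Hypothetical request body for a semantic query asking for up to
# three extractive answers via the searchFields approach.
query = {
    "search": "what is a hash table",      # phrased as a question
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "searchFields": "title,content",       # illustrative field names
    "answers": "extractive|count-3",       # default count is 1; max is 10
}
print(query["answers"])  # extractive|count-3
```

Remember that including "answers" makes answer processing possible but doesn't guarantee an answer in the response.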
Within @search.answers:
+ **"text"** and **"highlights"** provide identical content, in both plain text and with highlights.
- By default, highlights are styled as `<em>`, which you can override using the existing highlightPreTag and highlightPostTag parameters. As noted elsewhere, the substance of an answer is verbatim content from a search document. The extraction model looks for characteristics of an answer to find the appropriate content, but does not compose new language in the response.
  By default, highlights are styled as `<em>`, which you can override using the existing highlightPreTag and highlightPostTag parameters. As noted elsewhere, the substance of an answer is verbatim content from a search document. The extraction model looks for characteristics of an answer to find the appropriate content, but doesn't compose new language in the response.
+ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general you will see the same documents in the top positions within each array.
-Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. For more information about items in the response, see [Create a semantic query](semantic-how-to-query-request.md).
+Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Create a semantic query](semantic-how-to-query-request.md) for details.
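Since "score" determines ordering when multiple answers come back, client code often just takes the highest-scoring entry. The sketch below uses a mocked-up response; the document keys, text, and scores are invented for illustration:

```python
# Picking the highest-confidence answer from a mocked semantic response;
# the structure mirrors the @search.answers array described above.
response = {
    "@search.answers": [
        {"key": "doc2", "text": "A hash table maps keys to values.", "score": 0.91},
        {"key": "doc7", "text": "Hash tables use a hash function.", "score": 0.55},
    ],
    "value": [],  # regular search results follow the answers
}

answers = response.get("@search.answers", [])
best = max(answers, key=lambda a: a["score"]) if answers else None
print(best["text"])  # A hash table maps keys to values.
```

Guarding for an empty or missing array matters because answer processing can be skipped entirely, for example when the query doesn't look like a question.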
## Tips for producing high-quality answers
For best results, return semantic answers on a document corpus having the follow
+ The "semanticConfiguration" must include fields that offer sufficient text in which an answer is likely to be found. Fields more likely to contain answers should be listed first in "prioritizedContentFields". Only verbatim text from a document can appear as an answer.
-+ Query strings must not be null (search=`*`) and the string should have the characteristics of a question, as opposed to a keyword search (a sequential list of arbitrary terms or phrases). If the query string does not appear to be answer, answer processing is skipped, even if the request specifies "answers" as a query parameter.
++ Query strings must not be null (search=`*`) and the string should have the characteristics of a question, as opposed to a keyword search (a sequential list of arbitrary terms or phrases). If the query string doesn't appear to be a question, answer processing is skipped, even if the request specifies "answers" as a query parameter.
-+ Semantic extraction and summarization have limits over how many tokens per document can be analyzed in a timely fashion. In practical terms, if you have large documents that run into hundreds of pages, you should try to break the content up into smaller documents first.
++ Semantic extraction and summarization have limits over how many tokens per document can be analyzed in a timely fashion. In practical terms, if you have large documents that run into hundreds of pages, try to break up the content into smaller documents first.

## Next steps
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Azure Purview](../../sentinel/data-connectors-reference.md#azure-purview) | Public Preview | Not Available |
| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
-| - [Microsoft Insider Risk Management](/azure/sentinel/sentinel-solutions-catalog#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available |
| - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
For more information, see Azure Attestation [public documentation](../../attesta
- Understand the [shared responsibility](shared-responsibility.md) model and which security tasks are handled by the cloud provider and which tasks are handled by you.
- Understand the [Azure Government Cloud](../../azure-government/documentation-government-welcome.md) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners.
- Understand the [Office 365 Government plan](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).
-- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
+- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
This article describes the migration process to the Azure Monitor Agent (AMA) wh
> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA.

## Prerequisites
-Start with the [Azure Monitor documentation](/azure/azure-monitor/agents/azure-monitor-agent-migration) which provides an agent comparison and general information for this migration process.
+Start with the [Azure Monitor documentation](../azure-monitor/agents/azure-monitor-agent-migration.md), which provides an agent comparison and general information for this migration process.
This article provides specific details and differences for Microsoft Sentinel.
Each organization will have different metrics of success and internal migration
5. Check your Microsoft Sentinel workspace to make sure that all your data streams have been replaced using the new AMA-based connectors.
-6. Uninstall the legacy agent. For more information, see [Manage the Azure Log Analytics agent ](/azure/azure-monitor/agents/agent-manage#uninstall-agent).
+6. Uninstall the legacy agent. For more information, see [Manage the Azure Log Analytics agent ](../azure-monitor/agents/agent-manage.md#uninstall-agent).
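To support step 5, the check for remaining legacy-agent data streams can be done with a Log Analytics query. The sketch below just assembles the KQL as a string; the `Heartbeat` table and its `Category` column (which reports `"Direct Agent"` for the legacy MMA versus `"Azure Monitor Agent"` for AMA) are assumptions to verify against your workspace schema before relying on the result.

```python
# Hedged sketch: build a KQL query listing computers whose most recent
# heartbeat still comes from the legacy Log Analytics agent.
def legacy_agent_query(lookback: str = "24h") -> str:
    lines = [
        "Heartbeat",
        f"| where TimeGenerated > ago({lookback})",
        "| summarize arg_max(TimeGenerated, Category) by Computer",
        # legacy MMA reports "Direct Agent"; AMA reports "Azure Monitor Agent"
        '| where Category == "Direct Agent"',
    ]
    return "\n".join(lines)

print(legacy_agent_query())
```

Run the generated query in the Microsoft Sentinel **Logs** page; an empty result suggests no machine is still reporting through the legacy agent.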
## FAQs

The following FAQs address issues specific to AMA migration with Microsoft Sentinel. For more information, see also the [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent) in the Azure Monitor documentation.
While you can run the MMA and AMA simultaneously, you may want to migrate each c
For more information, see:
- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
-- [Overview of the Azure Monitor agents](/azure/azure-monitor/agents/agents-overview)
-- [Migrate from Log Analytics agents](/azure/azure-monitor/agents/azure-monitor-agent-migration)
+- [Overview of the Azure Monitor agents](../azure-monitor/agents/agents-overview.md)
+- [Migrate from Log Analytics agents](../azure-monitor/agents/azure-monitor-agent-migration.md)
- [Windows Security Events via AMA](data-connectors-reference.md#windows-security-events-via-ama)
- [Security events via Legacy Agent (Windows)](data-connectors-reference.md#security-events-via-legacy-agent-windows)
- [Windows agent-based connections](connect-azure-windows-microsoft-services.md#windows-agent-based-connections)
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-windows-microsoft-services.md
To ingest data into Microsoft Sentinel:
1. Select **Save** at the top of the screen.
-For more information, see also [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](/azure/azure-monitor/essentials/diagnostic-settings) in the Azure Monitor documentation.
+For more information, see also [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) in the Azure Monitor documentation.
# [Azure Policy](#tab/AP)
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| | | |
| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) |
| **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) |
-| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-overview) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
-| **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-overview) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
+| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
+| **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
| **Palo Alto PanOS traffic logs** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
For more information, see:
- [Advanced Security Information Model (ASIM) overview](normalization.md)
- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
- [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)
-- [Advanced Security Information Model (ASIM) content](normalization-content.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Purview Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/purview-solution.md
> The *Azure Purview* solution is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-[Azure Purview](/azure/purview/) provides organizations with visibility into where sensitive information is stored, helping prioritize at-risk data for protection.
+[Azure Purview](../purview/index.yml) provides organizations with visibility into where sensitive information is stored, helping prioritize at-risk data for protection.
Integrate Azure Purview with Microsoft Sentinel to help narrow down the high volume of incidents and threats surfaced in Microsoft Sentinel, and understand the most critical areas to start.
In this tutorial, you:
## Prerequisites
-Before you start, make sure you have both a [Microsoft Sentinel workspace](quickstart-onboard.md) and [Azure Purview](/azure/purview/create-catalog-portal) onboarded, and that your user has the following roles:
+Before you start, make sure you have both a [Microsoft Sentinel workspace](quickstart-onboard.md) and [Azure Purview](../purview/create-catalog-portal.md) onboarded, and that your user has the following roles:
-- **An Azure Purview account [Owner](/azure/role-based-access-control/built-in-roles) or [Contributor](/azure/role-based-access-control/built-in-roles) role**, to set up diagnostic settings and configure the data connector.
+- **An Azure Purview account [Owner](../role-based-access-control/built-in-roles.md) or [Contributor](../role-based-access-control/built-in-roles.md) role**, to set up diagnostic settings and configure the data connector.
- **A [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) role**, with write permissions to enable data connector, view the workbook, and create analytic rules.
For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microso
**To run an Azure Purview scan and view data in Microsoft Sentinel**:
-1. In Azure Purview, run a full scan of your resources. For more information, see [Manage data sources in Azure Purview](/azure/purview/manage-data-sources).
+1. In Azure Purview, run a full scan of your resources. For more information, see [Manage data sources in Azure Purview](../purview/manage-data-sources.md).
1. After your Azure Purview scans have completed, go back to the Azure Purview data connector in Microsoft Sentinel and confirm that data has been received.
Use this procedure to customize the Azure Purview analytics rules' queries to de
1. On the **Set rule logic** tab, adjust the **Rule query** to query for the data fields and classifications you want to generate alerts for. For more information on what can be included in your query, see:
    - Supported data fields are the columns of the [PurviewDataSensitivityLogs](/azure/azure-monitor/reference/tables/purviewdatasensitivitylogs) table
- - [Supported classifications](/azure/purview/supported-classifications)
+ - [Supported classifications](../purview/supported-classifications.md)
Formatted queries have the following syntax: `| where {data-field} contains {specified-string}`.
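The filter fragment above is simple enough to generate programmatically when building several rules. A minimal sketch, where the field and string values are hypothetical examples:

```python
# Builds the `| where {data-field} contains {specified-string}` fragment
# described above; the field name and value here are placeholders.
def contains_filter(data_field: str, specified_string: str) -> str:
    return f'| where {data_field} contains "{specified_string}"'

clause = contains_filter("Classification", "Credit Card")
print(clause)
```

Append the returned clause to the base rule query for each classification you want to alert on.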
For more information, see:
- [Create custom analytics rules to detect threats](detect-threats-custom.md)
- [Investigate incidents with Microsoft Sentinel](investigate-cases.md)
- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
-- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md)
+- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md)
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Use Azure RBAC to create and assign roles within your security operations team t
- [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the above, create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.

-- [Microsoft Sentinel Automation Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) allows Microsoft Sentinel to add playbooks to automation rules. It is not meant for user accounts.
+- [Microsoft Sentinel Automation Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-automation-contributor) allows Microsoft Sentinel to add playbooks to automation rules. It is not meant for user accounts.
> [!NOTE]
>
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-alternate.md
az keyvault secret set \
--value "<abapuserpass>" \
--description SECRET_ABAP_PASSWORD --vault-name $kvname
-#Add java Username
+#Add Java Username
az keyvault secret set \
--name <SID>-JAVAOSUSER \
--value "<javauser>" \
--description SECRET_JAVAOS_USER --vault-name $kvname
-#Add java Username password
+#Add Java Username password
az keyvault secret set \
--name <SID>-JAVAOSPASS \
--value "<javauserpass>" \
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
The tables listed below are required to enable functions that identify privilege
## Functions available from the SAP solution
-This section describes the [functions](/azure/azure-monitor/logs/functions) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
+This section describes the [functions](../azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
### SAPUsersAssignments
For more information, see:
- [Deploy the Microsoft Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Microsoft Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
This article provides troubleshooting tips and recommendations for a few issues
## Connectivity, certificate, or timeout issues
The following steps may help you with troubleshooting connectivity/certificate/timeout issues for all services under *.servicebus.windows.net.
-- Browse to or [wget](https://www.gnu.org/software/wget/) `https://<yournamespace>.servicebus.windows.net/`. It helps with checking whether you have IP filtering or virtual network or certificate chain issues, which are common when using java SDK.
+- Browse to or [wget](https://www.gnu.org/software/wget/) `https://<yournamespace>.servicebus.windows.net/`. It helps with checking whether you have IP filtering or virtual network or certificate chain issues, which are common when using Java SDK.
An example of a successful message:
service-fabric Run To Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/run-to-completion.md
The following points should be noted for the current RunToCompletion support.
* These semantics are only supported for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications.
* Upgrade scenarios for applications with RunToCompletion semantics are not allowed. Users should delete and recreate such applications, if necessary.
* Failover events can cause CodePackages to re-execute after successful completion, on the same node, or other nodes of the cluster. Examples of failover events are node restarts and Service Fabric runtime upgrades on a node.
+* RunToCompletion is incompatible with ServicePackageActivationMode="SharedProcess". Users must specify ServicePackageActivationMode="ExclusiveProcess", given that SharedProcess is the default value. Service Fabric runtime version 9.0 and higher will fail validation for such services.
## Next steps
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-networking.md
The integration of Azure API Management (Service Tag: ApiManagement) needs Client
* Use a reverse proxy such as [Traefik](https://docs.traefik.io/v1.6/configuration/backends/servicefabric/) or the [Service Fabric reverse proxy](service-fabric-reverseproxy.md) to expose common application ports such as 80 or 443.
-* For Windows Containers hosted on air-gapped machines that can't pull base layers from Azure cloud storage, override the foreign layer behavior, by using the [--allow-nondistributable-artifacts](https://docs.microsoft.com/virtualization/windowscontainers/about/faq#how-do-i-make-my-container-images-available-on-air-gapped-machines) flag in the Docker daemon.
+* For Windows Containers hosted on air-gapped machines that can't pull base layers from Azure cloud storage, override the foreign layer behavior, by using the [--allow-nondistributable-artifacts](/virtualization/windowscontainers/about/faq#how-do-i-make-my-container-images-available-on-air-gapped-machines) flag in the Docker daemon.
## Next steps
service-fabric Service Fabric Debugging Your Application Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-debugging-your-application-java.md
1. Start a local development cluster by following the steps in [Setting up your Service Fabric development environment](service-fabric-get-started-linux.md).
-2. Update entryPoint.sh of the service you wish to debug, so that it starts the java process with remote debug parameters. This file can be found at the following location: `ApplicationName\ServiceNamePkg\Code\entrypoint.sh`. Port 8001 is set for debugging in this example.
+2. Update entryPoint.sh of the service you wish to debug, so that it starts the Java process with remote debug parameters. This file can be found at the following location: `ApplicationName\ServiceNamePkg\Code\entrypoint.sh`. Port 8001 is set for debugging in this example.
```sh
java -Xdebug -Xrunjdwp:transport=dt_socket,address=8001,server=y,suspend=n -Djava.library.path=$LD_LIBRARY_PATH -jar myapp.jar
```
service-fabric Service Fabric Deploy Existing App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-existing-app.md
You can use Visual Studio to produce an application package that contains multip
## Use Yeoman to package and deploy an existing executable on Linux
-The procedure for creating and deploying a guest executable on Linux is the same as deploying a csharp or java application.
+The procedure for creating and deploying a guest executable on Linux is the same as deploying a C# or Java application.
1. In a terminal, type `yo azuresfguest`.
2. Name your application.
service-fabric Service Fabric Java Rest Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-java-rest-api-usage.md
Follow the steps mentioned below to generate Service Fabric Java client code usi
> If your cluster version is not 6.0.* then go to the appropriate directory in the stable folder.
>
-5. Run the following autorest command to generate the java client code.
+5. Run the following autorest command to generate the Java client code.
```bash
autorest --input-file=servicefabric.json --java --output-folder=[output-folder-name] --namespace=[namespace-of-generated-client]
Follow the steps mentioned below to generate Service Fabric Java client code usi
autorest --input-file=servicefabric.json --java --output-folder=java-rest-api-code --namespace=servicefabricrest
```
- The following command takes ``servicefabric.json`` specification file as input and generates java client code in ``java-rest-api- code`` folder and encloses the code in ``servicefabricrest`` namespace. After this step you would find two folders ``models``, ``implementation`` and two files ``ServiceFabricClientAPIs.java`` and ``package-info.java`` generated in the ``java-rest-api-code`` folder.
    The following command takes the ``servicefabric.json`` specification file as input and generates Java client code in the ``java-rest-api-code`` folder, enclosing the code in the ``servicefabricrest`` namespace. After this step you would find two folders, ``models`` and ``implementation``, and two files, ``ServiceFabricClientAPIs.java`` and ``package-info.java``, generated in the ``java-rest-api-code`` folder.
## Include and use the generated client in your project
service-fabric Service Fabric Startupservices Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-startupservices-model.md
Sample StartupServices.xml file
</StartupServicesManifest>
```
-The startupServices.xml feature is enabled for all new project in SDK version 5.0.516.9590 and above. For actor services, this is enabled in Microsoft.ServiceFabric.Actors NuGet version 5.0.516 and above. Projects created with older version of SDK are are fully backward compatible with latest SDK. Migration of old projects into new design is not supported. If user wants to create an Service Fabric Application without StartupServices.xml in newer version of SDK, user should click on "Help me choose a project template" link as shown in picture below.
+The StartupServices.xml feature is enabled for all new projects in SF SDK version 5.0.516.9590 and above. Projects created with an older version of the SDK are fully backward compatible with the latest SDK. Migration of old projects to the new design is not supported. If a user wants to create a Service Fabric Application without StartupServices.xml in a newer version of the SDK, they should click the "Help me choose a project template" link as shown in the picture below.
![Create New Application option in New Design][create-new-project]
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
The Site Recovery team and Azure capacity management team plan for sufficient in
### Does Site Recovery work with Capacity Reservation?
-Yes, you can create a Capacity Reservation for your VM SKU in the disaster recovery region and/or zone, and configure it in the Compute properties of the Target VM. Once done, site recovery will use the earmarked capacity for the failover. [Learn more](https://aka.ms/on-demand-capacity-reservations-docs).
+Yes, you can create a Capacity Reservation for your VM SKU in the disaster recovery region and/or zone, and configure it in the Compute properties of the Target VM. Once done, site recovery will use the earmarked capacity for the failover. [Learn more](../virtual-machines/capacity-reservation-overview.md).
### Why should I reserve capacity using Capacity Reservation at the destination location?
Yes, both encryption in transit and [encryption at rest in Azure](../storage/com
- [Review Azure-to-Azure support requirements](azure-to-azure-support-matrix.md).
- [Set up Azure-to-Azure replication](azure-to-azure-tutorial-enable-replication.md).
-- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).
+- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
You can modify the default target settings used by Site Recovery.
5. Click **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group.
- Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group, or use an existing one. For more information on how capacity reservation works, [read here](https://aka.ms/on-demand-capacity-reservations-docs).
+ Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group, or use an existing one. For more information on how capacity reservation works, [read here](../virtual-machines/capacity-reservation-overview.md).
![Screenshot that shows the Capacity Reservation settings.](./media/azure-to-azure-how-to-enable-replication/capacity-reservation-edit-button.png)

1. Click **Create target resource** > **Enable Replication**.
You can modify the default target settings used by Site Recovery.
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-network-connectivity.md
Try to access the DNS server from the virtual machine. If the DNS server isn't a
### Issue 2: Site Recovery configuration failed (151196)

> [!NOTE]
-> If the VMs are behind a **Standard** internal load balancer, by default, it wouldn't have access to the Microsoft 365 IPs such as `login.microsoftonline.com`. Either change it to **Basic** internal load balancer type or create outbound access as mentioned in the article [Configure load balancing and outbound rules in Standard Load Balancer using Azure CLI](../load-balancer/quickstart-load-balancer-standard-public-cli.md?tabs=option-1-create-load-balancer-standard#create-outbound-rule-configuration).
+> If the VMs are behind a **Standard** internal load balancer, by default, they wouldn't have access to the Microsoft 365 IPs such as `login.microsoftonline.com`. For outbound access, create an Azure NAT gateway. For more information, see [Tutorial: Create a NAT gateway - Azure CLI](../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
#### Possible cause
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi
**Operating system** | **Details** |
-Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. It is required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.
+Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. It is required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.<br/><br/> Chained IO is not supported by Site Recovery.
Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions)
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-config-server.md
Title: Set up your Config Server instance in Azure Spring Cloud
-description: Learn how to set up a Spring Cloud Config Server instance for Azure Spring Cloud on the Azure portal
+ Title: Configure your managed Spring Cloud Config Server in Azure Spring Cloud
+description: Learn how to configure a managed Spring Cloud Config Server in Azure Spring Cloud on the Azure portal
Last updated 12/10/2021
-# Set up a Spring Cloud Config Server instance for your service
+# Configure a managed Spring Cloud Config Server in Azure Spring Cloud
**This article applies to:** ✔️ Java ✔️ C#

**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-This article shows you how to connect a Spring Cloud Config Server instance to your Azure Spring Cloud service.
+This article shows you how to configure a managed Spring Cloud Config Server in Azure Spring Cloud.
-Spring Cloud Config provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config Server reference](https://spring.io/projects/spring-cloud-config).
+Spring Cloud Config Server provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config Server reference](https://spring.io/projects/spring-cloud-config).
## Prerequisites
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md
This table shows the maximum number of app instances Azure Spring Cloud supports
| App subnet CIDR | Total IPs | Available IPs | Maximum app instances |
| --- | --- | --- | --- |
-| /28 | 16 | 8 | <p>App with one core: 96 <br/> App with two cores: 48<br/> App with three cores: 32<br/> App with four cores: 24</p> |
-| /27 | 32 | 24 | <p>App with one core: 228<br/> App with two cores: 144<br/> App with three cores: 96<br/> App with four cores: 72</p> |
-| /26 | 64 | 56 | <p>App with one core: 500<br/> App with two cores: 336<br/> App with three cores: 224<br/> App with four cores: 168</p> |
-| /25 | 128 | 120 | <p>App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 480<br> App with four cores: 360</p> |
-| /24 | 256 | 248 | <p>App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 500<br/> App with four cores: 500</p> |
+| /28 | 16 | 8 | <p>App with 0.5 core: 192 <br/> App with one core: 96 <br/> App with two cores: 48<br/> App with three cores: 32<br/> App with four cores: 24</p> |
+| /27 | 32 | 24 | <p>App with 0.5 core: 456 <br/> App with one core: 228<br/> App with two cores: 144<br/> App with three cores: 96<br/> App with four cores: 72</p> |
+| /26 | 64 | 56 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 336<br/> App with three cores: 224<br/> App with four cores: 168</p> |
+| /25 | 128 | 120 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 480<br> App with four cores: 360</p> |
+| /24 | 256 | 248 | <p>App with 0.5 core: 500 <br/> App with one core: 500<br/> App with two cores: 500<br/> App with three cores: 500<br/> App with four cores: 500</p> |
For subnets, five IP addresses are reserved by Azure, and at least three IP addresses are required by Azure Spring Cloud. At least eight IP addresses are required, so /29 and /30 are nonoperational.
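The Total and Available IP columns in the table above follow directly from the subnet prefix: a /n subnet has 2^(32−n) addresses, minus the eight that are unavailable (five reserved by Azure plus the minimum three required by Azure Spring Cloud). A minimal sketch of the arithmetic (the function name is illustrative, not part of any Azure SDK):

```python
# Available IPs for an Azure Spring Cloud app subnet of a given prefix length.
# Azure reserves 5 addresses per subnet; Azure Spring Cloud needs at least 3 more.
RESERVED = 5 + 3

def subnet_ips(prefix: int) -> tuple:
    """Return (total, available) IP counts for a /prefix subnet."""
    total = 2 ** (32 - prefix)
    if total <= RESERVED:
        raise ValueError(f"/{prefix} is too small to be operational")
    return total, total - RESERVED

for prefix in (28, 27, 26, 25, 24):
    total, available = subnet_ips(prefix)
    print(f"/{prefix}: {total} total, {available} available")
```

This also shows why /29 and /30 are nonoperational: a /29 has only 8 addresses in total, all of which are consumed by the reservations.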
spring-cloud How To Service Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-service-registration.md
Title: Automate service registry and discovery
-description: Learn how to automate service discovery and registration using Spring Cloud Service Registry
+ Title: Discover and register your Spring Boot applications in Azure Spring Cloud
+description: Discover and register your Spring Boot applications with managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud
zone_pivot_groups: programming-languages-spring-cloud
-# Discover and register your Spring Cloud services
+# Discover and register your Spring Boot applications
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Azure Spring Cloud Service Registry solves this problem. Once configured, a Service Registry server will control service registration and discovery for your applications. The Service Registry server maintains a registry of live app instances, enables client-side load-balancing, and decouples service providers from clients without relying on DNS.
+Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud solves this problem. Once configured, a Service Registry server will control service registration and discovery for your applications. The Service Registry server maintains a registry of live app instances, enables client-side load-balancing, and decouples service providers from clients without relying on DNS.
::: zone pivot="programming-language-csharp"
For information about how to set up service registration for a Steeltoe app, see [Prepare a Java Spring application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md).
For information about how to set up service registration for a Steeltoe app, see
## Register your application using Spring Cloud Service Registry
-Before your application can manage service registration and discovery using Spring Cloud Service Registry, several dependencies must be included in the application's *pom.xml* file.
-Include dependencies for *spring-cloud-starter-netflix-eureka-client* and *spring-cloud-starter-azure-spring-cloud-client* to your *pom.xml*
+Before your application can manage service registration and discovery using Spring Cloud Service Registry, you must include the *spring-cloud-starter-netflix-eureka-client* dependency in your *pom.xml*
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
```
static-web-apps Password Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/password-protection.md
Last updated 03/13/2022
-# Configure password protection
+# Configure password protection (preview)
You can use a password to protect your app's pre-production environments or all environments. Scenarios when password protection is useful include:
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
You can also specify the conditions under which the blob will be copied. These c
You can use the `az storage blob copy start-batch` command to recursively copy multiple blobs between storage containers within the same storage account. This command requires values for the `--source-container` and `--destination-container` parameters, and can copy all files between the source and destination. Like other CLI batch commands, this command supports Unix filename pattern matching with the `--pattern` parameter. The supported patterns are `*`, `?`, `[seq]`, and `[!seq]`. To learn more, refer to the Python documentation on [Unix filename pattern matching](https://docs.python.org/3.7/library/fnmatch.html).

> [!NOTE]
-> Consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10).
+> Consider the use of AzCopy for ease and performance, especially when copying blobs between storage accounts. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Find out more about how to [Get started with AzCopy](../common/storage-use-azcopy-v10.md).
For more information, see the [az storage blob copy](/cli/azure/storage/blob/copy) reference.
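Because `--pattern` follows Python's `fnmatch` semantics, you can preview which blob names a pattern would select before running a batch copy. A small illustration (the blob names are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical blob names to test a --pattern value against.
blobs = ["report-2021.csv", "report-2022.csv", "summary.txt",
         "log1.txt", "log2.txt", "logX.txt"]

# `*` matches any run of characters, `?` a single character,
# `[seq]` any character in the set, `[!seq]` any character not in the set.
print([b for b in blobs if fnmatch(b, "report-*.csv")])
print([b for b in blobs if fnmatch(b, "log?.txt")])
print([b for b in blobs if fnmatch(b, "log[12].txt")])
print([b for b in blobs if fnmatch(b, "log[!12].txt")])
```

For example, `report-*.csv` selects both yearly reports, while `log[!12].txt` selects only `logX.txt`.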
done
## Next steps
-- [Choose how to authorize access to blob data with Azure CLI](/azure/storage/blobs/authorize-data-operations-cli)
-- [Run PowerShell commands with Azure AD credentials to access blob data](/azure/storage/blobs/authorize-data-operations-cli)
-- [Manage blob containers using CLI](blob-containers-cli.md)
+- [Choose how to authorize access to blob data with Azure CLI](./authorize-data-operations-cli.md)
+- [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-cli.md)
+- [Manage blob containers using CLI](blob-containers-cli.md)
storage Quickstart Blobs Javascript Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-javascript-browser.md
This code calls the [ContainerClient.deleteBlob](/javascript/api/@azure/storage-
## Use the storage emulator
-This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](/azure/storage/common/storage-use-emulator) for development and testing.
+This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](../common/storage-use-emulator.md) for development and testing.
## Clean up resources
For tutorials, samples, quickstarts, and other documentation, visit:
> [Azure for JavaScript documentation](/azure/developer/javascript/)
- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
-- To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
+- To see Blob storage sample apps, continue to [Azure Blob storage client library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
Step through the code in your debugger and check your [Azure portal](https://por
## Use the storage emulator
-This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](/azure/storage/common/storage-use-emulator) for development and testing.
+This quickstart created a container and blob on the Azure cloud. You can also use the Azure Blob storage npm package to create these resources locally on the [Azure Storage emulator](../common/storage-use-emulator.md) for development and testing.
## Clean up
For tutorials, samples, quickstarts, and other documentation, visit:
- To learn how to deploy a web app that uses Azure Blob storage, see [Tutorial: Upload image data in the cloud with Azure Storage](./storage-upload-process-images.md?preserve-view=true&tabs=javascript)
- To see Blob storage sample apps, continue to [Azure Blob storage package library v12 JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).
-- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
+- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP
- [Azure Storage REST API](/rest/api/storageservices/)
- [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage)
- [Azure Storage client library for Java/Android](/java/api/overview/azure/storage)
-- [Azure Storage client library for Node.js](/azure/storage/blobs/reference#javascript-client-libraries)
+- [Azure Storage client library for Node.js](../blobs/reference.md#javascript-client-libraries)
- [Azure Storage client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob)
- [Azure Storage client library for PHP](https://github.com/Azure/azure-storage-php)
- [Azure Storage client library for Ruby](https://github.com/Azure/azure-storage-ruby)
You can access resources in a storage account by any language that can make HTTP
## Next steps
-To get up and running with Azure Storage, see [Create a storage account](storage-account-create.md).
+To get up and running with Azure Storage, see [Create a storage account](storage-account-create.md).
stream-analytics Move Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/move-cluster.md
New-AzResourceGroupDeployment `
  -ResourceGroupName <name of your resource group> `
  -TemplateFile <path-to-template>
```
-For more information on how to deploy a template using Azure PowerShell, see [Deploy a template](https://docs.microsoft.com/azure/azure-resource-manager/management/manage-resources-powershell#deploy-a-template).
+For more information on how to deploy a template using Azure PowerShell, see [Deploy a template](../azure-resource-manager/management/manage-resources-powershell.md#deploy-a-template).
## Next steps
stream-analytics Stream Analytics Sql Output Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-sql-output-perf.md
Here are some configurations within each service that can help improve overall t
- **Inherit Partitioning** – This SQL output configuration option enables inheriting the partitioning scheme of your previous query step or input. With this option enabled, writing to a disk-based table with a [fully parallel](stream-analytics-parallelization.md#embarrassingly-parallel-jobs) topology for your job, you can expect to see better throughput. This partitioning already happens automatically for many other [outputs](stream-analytics-parallelization.md#partitions-in-inputs-and-outputs). Table locking (TABLOCK) is also disabled for bulk inserts made with this option.
-> [!NOTE]
-> When there are more than 8 input partitions, inheriting the input partitioning scheme might not be an appropriate choice. This upper limit was observed on a table with a single identity column and a clustered index. In this case, consider using [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count) 8 in your query, to explicitly specify the number of output writers. Based on your schema and choice of indexes, your observations may vary.
+ > [!NOTE]
+ > When there are more than 8 input partitions, inheriting the input partitioning scheme might not be an appropriate choice. This upper limit was observed on a table with a single identity column and a clustered index. In this case, consider using [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count) 8 in your query, to explicitly specify the number of output writers. Based on your schema and choice of indexes, your observations may vary.
- **Batch Size** - SQL output configuration allows you to specify the maximum batch size in an Azure Stream Analytics SQL output based on the nature of your destination table/workload. Batch size is the maximum number of records that are sent with every bulk insert transaction. In clustered columnstore indexes, batch sizes around [100K](/sql/relational-databases/indexes/columnstore-indexes-data-loading-guidance) allow for more parallelization, minimal logging, and locking optimizations. In disk-based tables, 10K (default) or lower may be optimal for your solution, as higher batch sizes may trigger lock escalation during bulk inserts.
synapse-analytics Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/source-control.md
By default, Synapse Studio authors directly against the Synapse service. If you
This article outlines how to configure and work in a Synapse workspace with a git repository enabled. It also highlights some best practices and a troubleshooting guide.

> [!NOTE]
-> Synapse Studio git integration is not available in the Azure Government Cloud.
+>To use GitHub in Azure Gov and Azure China, you can bring your own GitHub OAuth application in Synapse Studio for git integration. The configuration experience is the same as in Azure Data Factory (ADF). For details, see the [announcement blog](https://techcommunity.microsoft.com/t5/azure-data-factory/cicd-improvements-with-github-support-in-azure-government-and/ba-p/2686918).
## Prerequisites

Users must have the Azure Contributor (Azure RBAC) or higher role on the Synapse workspace to configure, edit settings and disconnect a Git repository with Synapse.
You can associate a Synapse workspace with an Azure DevOps Repository for source
When connecting to your git repository, first select your repository type as Azure DevOps git, and then select one Azure AD tenant from the dropdown list, and click **Continue**.
-![Configure the code repository settings](media/connect-with-azuredevops-repo-selected.png)
+![Configure the code repository settings](media/connect-with-azure-devops-repo-selected.png)
The configuration pane shows the following Azure DevOps git settings:

| Setting | Description | Value |
|: |: |: |
| **Repository Type** | The type of the Azure Repos code repository.<br/> | Azure DevOps Git or GitHub |
+| **Cross tenant sign in** | Checkbox to sign in with a cross tenant account. | unselected (default) |
| **Azure Active Directory** | Your Azure AD tenant name. | `<your tenant name>` |
| **Azure DevOps account** | Your Azure Repos organization name. You can locate your Azure Repos organization name at `https://{organization name}.visualstudio.com`. You can [sign in to your Azure Repos organization](https://www.visualstudio.com/team-services/git/) to access your Visual Studio profile and see your repositories and projects. | `<your organization name>` |
| **ProjectName** | Your Azure Repos project name. You can locate your Azure Repos project name at `https://{organization name}.visualstudio.com/{project name}`. | `<your Azure Repos project name>` |
After these configuration steps, your personal repo is available when you set up
For more info about connecting Azure Repos to your organization's Active Directory, see [Connect your organization to Azure Active Directory](/azure/devops/organizations/accounts/connect-organization-to-azure-ad).
+### Use a cross tenant Azure DevOps account
+
+When your Azure DevOps organization is not in the same tenant as the Synapse workspace, you can configure the workspace with a cross tenant Azure DevOps account by following the steps below.
+
+1. Select the **Cross tenant sign in** option and click **Continue**
+
+ ![Select the cross tenant sign in ](media/cross-tenant-sign-in.png)
+
+1. Select **OK** in the dialog box.
+
+ ![Confirm the cross tenant sign in ](media/cross-tenant-sign-in-confirm.png)
+
+1. Click **Use another account** and sign in with your Azure DevOps account.
+
+ ![Use another account ](media/use-another-account.png)
+
+1. After signing in, choose the directory and repository and configure it accordingly.
+
+ ![Choose the directory ](media/cross-tenant-aad.png)
+
+ > [!NOTE]
+ > To sign in to the workspace, use your Synapse workspace user account. Your cross tenant Azure DevOps account is used only for signing in to and getting access to the Azure DevOps repo associated with this Synapse workspace.
+
## Connect with GitHub

You can associate a workspace with a GitHub repository for source control, collaboration, and versioning. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources.
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Improvements to the Synapse Machine Learning library v0.9.5 (previously called M
### Data Integration
-* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by leveraging Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](/azure/data-factory/data-flow-assert) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
+* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by leveraging Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](../data-factory/data-flow-assert.md) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. Learn more on how to [Create data sets in data flows to read, transform, aggregate, join, etc. using this article](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute.
The following updates are new to Azure Synapse Analytics this month.
## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
time-series-insights Breaking Changes Long Data Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/breaking-changes-long-data-type.md
Last updated 12/07/2020
# Adding support for long data type in Azure Time Series Insights Gen2
+
The addition of support for long data type affects how we store and index numeric data in Azure Time Series Insights Gen2 environments only. If you have a Gen1 environment, you can disregard these changes. Beginning June 29 or June 30, 2020, depending on your region, your data will be indexed as **Long** and **Double**. If you have any questions or concerns about this change, submit a support ticket through the Azure portal and mention this communication.
time-series-insights Concepts Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-access-policies.md
# Grant data access to an environment
+
This article discusses the two types of Azure Time Series Insights access policies.

> [!Warning]
time-series-insights Concepts Ingestion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-ingestion-overview.md
# Azure Time Series Insights Gen2 data ingestion overview
+
Your Azure Time Series Insights Gen2 environment contains an *ingestion engine* to collect, process, and store streaming time series data. As data arrives into your event source(s), Azure Time Series Insights Gen2 will consume and store your data in near real time.

[![Ingestion overview](media/concepts-ingress-overview/ingress-overview.png)](media/concepts-ingress-overview/ingress-overview.png#lightbox)
time-series-insights Concepts Json Flattening Escaping Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-json-flattening-escaping-rules.md
Last updated 01/21/2021
# JSON Flattening, Escaping, and Array Handling
+
Your Azure Time Series Insights Gen2 environment will dynamically create the columns of your warm and cold stores, following a particular set of naming conventions. When an event is ingested, a set of rules is applied to the JSON payload and property names. These include escaping certain special characters and flattening nested JSON objects. It's important to know these rules so that you understand how the shape of your JSON will influence how your events are stored and queried. See the table below for the full list of rules. Examples A & B also demonstrate how you're able to efficiently batch multiple time series in an array.

> [!IMPORTANT]
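As a rough illustration of the flattening behavior described above: nested JSON objects collapse into single column names built from the property path (Time Series Insights Gen2 joins nested property names with an underscore; the full escaping rules are in the table the article refers to). A minimal sketch of the idea, not the service's actual implementation:

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Collapse nested JSON objects into flat column names joined by '_'."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))  # recurse into nested objects
        else:
            out[name] = value
    return out

# Hypothetical telemetry event with two levels of nesting.
event = {"deviceId": "sensor-1", "data": {"flow": {"rate": 1.5}, "ok": True}}
print(flatten(event))
# {'deviceId': 'sensor-1', 'data_flow_rate': 1.5, 'data_ok': True}
```

The nested `data.flow.rate` property becomes a single `data_flow_rate` column, which is why the shape of your JSON payload directly determines the column names you query against.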
time-series-insights Concepts Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-model-overview.md
# Time Series Model in Azure Time Series Insights Gen2
+
This article describes Time Series Model, the capabilities, and how to start building and updating your own models in the Azure Time Series Insights Gen2 environment.

> [!TIP]
time-series-insights Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-power-bi.md
Last updated 09/28/2020
# Connect Azure Time Series Insights Gen 2 to Power BI
+
Are you looking for a solution to help correlate your time-series data, create vivid visualizations, and share insights across your organization? Azure Time Series Insights now seamlessly integrates with [Power BI](https://powerbi.microsoft.com/), providing you with more powerful visualization and dashboarding capabilities over your streaming data and allowing you to share insights and results across your organization.
time-series-insights Concepts Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-private-links.md
Last updated 09/01/2021
# Private network access with Azure Private Link (preview)
+
[Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). Similarly, you can use private endpoints for your Time Series Insights instance to allow clients located in your virtual network to securely access the instance over Private Link.
time-series-insights Concepts Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-query-overview.md
# Querying Data from Azure Time Series Insights Gen2
+
Azure Time Series Insights Gen2 enables data querying on events and metadata stored in the environment via public surface APIs. These APIs are also used by the [Azure Time Series Insights TSI Explorer](./concepts-ux-panels.md).

Three primary API categories are available in Azure Time Series Insights Gen2:
time-series-insights Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-storage.md
# Data Storage
+
This article describes data storage in Azure Time Series Insights Gen2. It covers warm and cold stores, data availability, and best practices.

## Provisioning
time-series-insights Concepts Streaming Ingestion Event Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-streaming-ingestion-event-sources.md
Last updated 03/18/2021
# Azure Time Series Insights Gen2 event sources
+
Your Azure Time Series Insights Gen2 environment can have up to two streaming event sources. Two types of Azure resources are supported as inputs:

- [Azure IoT Hub](../iot-hub/about-iot-hub.md)
time-series-insights Concepts Streaming Ingress Throughput Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-streaming-ingress-throughput-limits.md
# Streaming Ingestion Throughput Limits
+
Azure Time Series Insights Gen2 streaming data ingress limitations are described below.

> [!TIP]
time-series-insights Concepts Supported Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-supported-data-types.md
Last updated 01/19/2021
# Supported data types
+
The following table lists the data types supported by Azure Time Series Insights Gen2

| Data type | Description | Example | [Time Series Expression syntax](/rest/api/time-series-insights/reference-time-series-expression-syntax) | Property column name in Parquet
time-series-insights Concepts Ux Panels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-ux-panels.md
# Azure Time Series Insights Explorer + This article describes the various features and options available within the Azure Time Series Insights Gen2 [Demo environment](https://insights.timeseries.azure.com/preview/demo). ## Prerequisites
time-series-insights Concepts Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-variables.md
Last updated 01/22/2021
# Time Series Model variables + This article describes the Time Series Model variables that specify formula and computation rules on events. Each variable can be one of three kinds: *numeric*, *categorical*, or *aggregate*.
time-series-insights How To Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-api-migration.md
# Migrating to new Azure Time Series Insights Gen2 API versions + ## Overview If you created an Azure Time Series Insights Gen2 environment while it was in public preview (before July 16th, 2020), please update your TSI environment to use the new generally available versions of the APIs by following the steps described in this article. This change doesn't affect any users who are using the Gen1 version of Azure Time Series Insights.
time-series-insights How To Connect Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-connect-power-bi.md
Last updated 12/14/2020
# Visualize data from Azure Time Series Insights in Power BI + Azure Time Series Insights is a platform for storing, managing, querying, and visualizing time-series data in the cloud. [Power BI](https://powerbi.microsoft.com) is a business analytics tool with rich visualization capabilities that allows you to share insights and results across your organization. Both services can now be integrated allowing you to augment the powerful analytics of Azure Time Series Insights with the strong data visualization and easy sharing capabilities of Power BI. You'll learn how to:
time-series-insights How To Create Environment Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-cli.md
# Create an Azure Time Series Insights Gen2 environment using the Azure CLI + This document will guide you through creating a new Time Series Insights Gen2 Environment. [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
time-series-insights How To Create Environment Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-portal.md
# Create an Azure Time Series Insights Gen2 environment using the Azure portal + This article describes how to create an Azure Time Series Insights Gen2 environment by using the [Azure portal](https://portal.azure.com/). The environment provisioning tutorial will walk you through the process. You'll learn about selecting the correct Time Series ID and view examples from two JSON payloads.</br>
time-series-insights How To Diagnose Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-diagnose-troubleshoot.md
# Diagnose and troubleshoot an Azure Time Series Insights Gen2 environment + This article summarizes several common problems you might encounter when you work with your Azure Time Series Insights Gen2 environment. The article also describes potential causes and solutions for each problem. ## Problem: I can't find my environment in the Gen2 Explorer
time-series-insights How To Edit Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-edit-your-model.md
# Data modeling in Azure Time Series Insights Gen2 + This article describes how to work with Time Series Model in Azure Time Series Insights Gen2. It details several common data scenarios. > [!TIP]
time-series-insights How To Ingest Data Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-event-hub.md
# Add an event hub event source to your Azure Time Series Insights environment + This article describes how to use the Azure portal to add an event source that reads data from Azure Event Hubs to your Azure Time Series Insights environment. > [!NOTE]
time-series-insights How To Ingest Data Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-iot-hub.md
# Add an IoT hub event source to your Azure Time Series Insights environment + This article describes how to use the Azure portal to add an event source that reads data from Azure IoT Hub to your Azure Time Series Insights environment. > [!NOTE]
time-series-insights How To Monitor Tsi Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi-reference.md
Last updated 12/10/2020
# Monitoring Azure Time Series Insights data reference + Learn about the data and resources collected by Azure Monitor from your Azure Time Series Insights environment. See [Monitoring Time Series Insights]( ./how-to-monitor-tsi.md) for details on collecting and analyzing monitoring data. ## Metrics
time-series-insights How To Monitor Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi.md
Last updated 12/10/2020
# Monitoring Time Series Insights + When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Time Series Insights and how you can use the features of Azure Monitor to analyze and alert on this data. ## Monitor overview
time-series-insights How To Plan Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-plan-your-environment.md
# Plan your Azure Time Series Insights Gen2 environment + This article describes best practices to plan and get started quickly by using Azure Time Series Insights Gen2. ## Best practices for planning and preparation
time-series-insights How To Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-private-links.md
Last updated 09/01/2021
# Enable private access for TSI with Private Link (preview) + This article describes how to [enable Private Link with a private endpoint for an Azure Time Series Insights Gen2 environment](concepts-private-links.md) (currently in preview). Configuring a private endpoint for your Azure Time Series Insights Gen2 environment enables you to secure your Azure Time Series Insights environment and eliminate public exposure, as well as avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). This article walks through the process using the [**Azure portal**](https://portal.azure.com).
time-series-insights How To Provision Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-provision-manage.md
# Manage Azure Time Series Insights Gen2 + After you've created your Azure Time Series Insights Gen2 environment by using [the Azure CLI](./how-to-create-environment-using-cli.md) or [the Azure portal](./how-to-create-environment-using-portal.md), you can modify your access policies and other environment attributes to suit your business needs. ## Manage the environment
time-series-insights How To Select Tsid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-select-tsid.md
# Best practices for choosing a Time Series ID + This article summarizes the importance of the Time Series ID for your Azure Time Series Insights Gen2 environment, and best practices for choosing one. ## Choose a Time Series ID
time-series-insights How To Tsi Gen1 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen1-migration.md
# Migrating Time Series Insights Gen1 to Azure Data Explorer + ## Overview The recommendation is to set up an Azure Data Explorer cluster with a new consumer group on the event hub or IoT hub, then wait for the retention period to pass so that Azure Data Explorer fills with the same data as the Time Series Insights environment.
time-series-insights How To Tsi Gen2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen2-migration.md
# Migrating Time Series Insights (TSI) Gen2 to Azure Data Explorer + ## Overview High-level migration recommendations.
time-series-insights Ingestion Rules Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/ingestion-rules-update.md
# Upcoming changes to JSON flattening and escaping rules for new environments + > [!IMPORTANT] > These changes will be applied to *newly created* Microsoft Azure Time Series Insights Gen2 environments only. The changes don't apply to Gen1 environments.
time-series-insights Migration To Adx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/migration-to-adx.md
# Migrating to Azure Data Explorer + ## Overview The Time Series Insights (TSI) service provides access to historical data ingested through hubs for operational analytics and reporting. The service's features are:
time-series-insights Overview Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-use-cases.md
# Azure Time Series Insights Gen2 use cases + This article summarizes several common use cases for Azure Time Series Insights Gen2. The recommendations in this article serve as a starting point to develop your applications and solutions with Azure Time Series Insights Gen2. Specifically, this article answers the following questions:
time-series-insights Overview What Is Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-what-is-tsi.md
# What is Azure Time Series Insights Gen2 + Azure Time Series Insights Gen2 is an open and scalable end-to-end IoT analytics service featuring best-in-class user experiences and rich APIs to integrate its powerful capabilities into your existing workflow or application. You can use it to collect, process, store, query and visualize data at Internet of Things (IoT) scale--data that's highly contextualized and optimized for time series.
time-series-insights Quickstart Explore Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/quickstart-explore-tsi.md
Last updated 03/01/2021
# Quickstart: Explore the Azure Time Series Insights Gen2 demo environment + This quickstart gets you started with an Azure Time Series Insights Gen2 environment. In the free demo, you tour key features that have been added to Azure Time Series Insights Gen2. The Azure Time Series Insights Gen2 demo environment contains a scenario company, Contoso, that operates two wind turbine farms. Each farm has 10 turbines. Each turbine has 20 sensors that report data every minute to Azure IoT Hub. The sensors gather information about weather conditions, blade pitch, and yaw position. Information about generator performance, gearbox behavior, and safety monitors also is recorded.
time-series-insights Time Series Insights Add Reference Data Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-add-reference-data-set.md
# Create a reference data set for your Azure Time Series Insights Gen1 environment using the Azure portal + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-authentication-and-authorization.md
# Authentication and authorization for Azure Time Series Insights API + Depending on your business needs, your solution might include one or more client applications that you use to interact with your Azure Time Series Insights environment's [APIs](/rest/api/time-series-insights/reference-data-access-overview). Azure Time Series Insights performs authentication using [Azure AD Security Tokens based on OAUTH 2.0](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). To authenticate your client(s), you'll need to get a bearer token with the right permissions, and pass it along with your API calls. This document describes several methods for getting credentials that you can use to get a bearer token and authenticate, including using managed identity and Azure Active Directory app registration. ## Managed identities
time-series-insights Time Series Insights Concepts Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-concepts-retention.md
# Understand data retention in Azure Time Series Insights Gen1 + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-customer-data-requests.md
# Summary of customer data request features + Azure Time Series Insights is a managed cloud service with storage, analytics, and visualization components that make it easy to ingest, store, explore, and analyze billions of events simultaneously. [!INCLUDE [gdpr-intro-sentence](../../includes/gdpr-intro-sentence.md)]
time-series-insights Time Series Insights Diagnose And Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-diagnose-and-solve-problems.md
# Diagnose and solve issues in your Azure Time Series Insights Gen1 environment + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Environment Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-environment-mitigate-latency.md
# Monitor and mitigate throttling to reduce latency in Azure Time Series Insights Gen1 + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Environment Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-environment-planning.md
# Plan your Azure Time Series Insights Gen1 environment + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-explorer.md
# Azure Time Series Insights Gen1 Explorer + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-get-started.md
# Create a new Azure Time Series Insights Gen1 environment in the Azure portal + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights How To Configure Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-configure-retention.md
# Configuring retention in Azure Time Series Insights Gen1 + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights How To Scale Your Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-scale-your-environment.md
# How to scale your Azure Time Series Insights Gen1 environment + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
# Manage reference data for an Azure Time Series Insights Gen 1 environment using C Sharp + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
# Create Azure Time Series Insights Gen 1 resources using Azure Resource Manager templates + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-overview.md
# What is Azure Time Series Insights Gen1? + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Parameterized Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-parameterized-urls.md
# Share a custom view using a parameterized URL + To share a custom view in Azure Time Series Insights Explorer, you can programmatically create a parameterized URL of the custom view. Azure Time Series Insights Explorer supports URL query parameters to specify views in the experience directly from the URL. For example, using only the URL, you can specify a target environment, a search predicate, and desired time span. When a user selects the customized URL, the interface provides a link directly to that asset in the Azure Time Series Insights portal. Data access policies apply.
time-series-insights Time Series Insights Query Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-query-data-csharp.md
# Query data from the Azure Time Series Insights Gen1 environment using C Sharp + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Send Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-send-events.md
# Send events to an Azure Time Series Insights Gen1 environment by using an event hub + > [!CAUTION] > This is a Gen1 article.
time-series-insights Time Series Insights Update Query Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-update-query-data-csharp.md
# Query data from the Azure Time Series Insights Gen2 environment using C Sharp + This C# example demonstrates how to query data from the [Gen2 Data Access APIs](/rest/api/time-series-insights/reference-data-access-overview) in Azure Time Series Insights Gen2 environments. > [!TIP]
time-series-insights Time Series Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-quickstart.md
Last updated 09/30/2020
# Quickstart: Explore Azure Time Series Insights Gen1 + > [!CAUTION] > This is a Gen1 article.
time-series-insights Tutorial Create Populate Tsi Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorial-create-populate-tsi-environment.md
# Tutorial: Create an Azure Time Series Insights Gen1 environment + > [!CAUTION] > This is a Gen1 article.
time-series-insights Tutorial Set Up Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorial-set-up-environment.md
# Tutorial: Set up an Azure Time Series Insights Gen2 environment + This tutorial guides you through the process of creating an Azure Time Series Insights Gen2 *pay-as-you-go* (PAYG) environment. In this tutorial, you learn how to:
time-series-insights Tutorials Model Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/tutorials-model-sync.md
# Model synchronization between Azure Digital Twins and Time Series Insights Gen2 + This article explains best practices and tools used to translate the asset model in Azure Digital Twins (ADT) to the asset model in Azure Time Series Insights (TSI). It's the second part of a two-part tutorial series explaining the integration of Azure Digital Twins with Azure Time Series Insights, which enables archiving and tracking the history of telemetry and calculated properties of digital twins. The series is aimed at developers working to integrate Time Series Insights with Azure Digital Twins. Part 1 explains [establishing a data pipeline that brings the actual time series data from Azure Digital Twins to Time Series Insights](../digital-twins/how-to-integrate-time-series-insights.md), and this second part explains asset model synchronization between the two services. This tutorial covers best practices for choosing and establishing a naming convention for the Time Series ID (TS ID) and for manually establishing hierarchies in the Time Series Model (TSM). ## Choosing a Time Series ID
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
To create a profile container using a file share:
### Download supported OS images from Azure Marketplace
-You can run any OS images that both Azure Virtual Desktop and Azure Stack HCI support on your deployment. To learn which OSes Azure Virtual Desktop supports, see [Supported VM OS images](overview.md#supported-virtual-machine-os-images).
+You can run any OS images that both Azure Virtual Desktop and Azure Stack HCI support on your deployment. To learn which OSes Azure Virtual Desktop supports, see [Supported VM OS images](prerequisites.md#operating-systems-and-licenses).
You have two options to download an image:
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
Title: Azure Virtual Desktop user connection latency - Azure
description: Connection latency for Azure Virtual Desktop users. Previously updated : 10/30/2019 Last updated : 03/16/2022
-# Determine user connection latency in Azure Virtual Desktop
+# Connection quality in Azure Virtual Desktop
-Azure Virtual Desktop is globally available. Administrators can create virtual machines (VMs) in any Azure region they want. Connection latency will vary depending on the location of the users and the virtual machines. Azure Virtual Desktop services will continuously roll out to new geographies to improve latency.
+Azure Virtual Desktop helps users host client sessions on their session hosts running on Azure. When a user starts a session, they connect from their end-user device, also known as a "client," over a network to access the session host. It's important that the user experience feels as much like a local session on a physical device as possible. In this article, we'll talk about how you can measure and improve the connection quality of your end-users.
-The [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/) can help you determine the best location to optimize the latency of your VMs. We recommend you use the tool every two to three months to make sure the optimal location hasn't changed as Azure Virtual Desktop rolls out to new areas.
+There are currently two ways you can analyze connection quality in your Azure Virtual Desktop deployment: Azure Log Analytics and Azure Front Door. This article will describe how to use each method to optimize graphics quality and improve end-user experience.
-## Interpreting results from the Azure Virtual Desktop Experience Estimator tool
+## Monitor connection quality with Azure Log Analytics
-In Azure Virtual Desktop, latency up to 150 ms shouldn't impact user experience that doesn't involve rendering or video. Latencies between 150 ms and 200 ms should be fine for text processing. Latency above 200 ms may impact user experience.
+>[!NOTE]
+> Azure Log Analytics currently only supports Azure Virtual Desktop connection network data in commercial clouds.
-In addition, the Azure Virtual Desktop connection depends on the internet connection of the machine the user is using the service from. Users may lose connection or experience input delay in one of the following situations:
+If you're already using [Azure Log Analytics](diagnostics-log-analytics.md), you can monitor network data with the Azure Virtual Desktop connection network data diagnostics. The connection network data Log Analytics collects can help you discover areas that impact your end-user's graphical experience. The service collects data for reports regularly throughout the session. Azure Virtual Desktop connection network data reports have the following advantages over RemoteFX network performance counters:
+- Each record is connection-specific and includes the correlation ID of the connection that can be tied back to the user.
-We recommend you choose VM locations that are as close to your users as possible. For example, if the user is located in India but the VM is in the United States, there will be latency that will affect the overall user experience.
+- The round trip time measured in this table is protocol-agnostic and will record the measured latency for Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) connections.
+
+To start collecting this data, you'll need to make sure you have diagnostics and the **NetworkData** table enabled in your Azure Virtual Desktop host pools.
+
+To check and modify your diagnostics settings in the Azure portal:
+
+1. Sign in to the Azure portal, then go to **Azure Virtual Desktop** and select **Host pools**.
+
+2. Select the host pool you want to collect network data for.
+
+3. Select **Diagnostic settings**, then create a new setting if you haven't configured your diagnostic settings yet. If you've already configured your diagnostic settings, select **Edit setting**.
+
+4. Select **allLogs** or select the names of the diagnostics tables you want to collect data for, including **NetworkData**. The *allLogs* parameter will automatically include any new diagnostics tables added in the future.
+
+5. Select where you want to send the collected data. Azure Virtual Desktop Insights users should select a Log Analytics workspace.
+
+6. Select **Save** to apply your changes.
+
+7. Repeat this process for all other host pools you want to measure.
+
+8. Make sure the network data is going to your selected destination by returning to the host pool's resource page, selecting **Logs**, then running one of the queries in [Sample queries for Azure Log Analytics](#sample-queries-for-azure-log-analytics). In order for your query to get results, your host pool must have active users who have been connecting to sessions. Keep in mind that it can take up to 15 minutes for network data to appear in the Azure portal.
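+
+As a quick sanity check for the last step, a minimal query like the following can confirm records are arriving (a sketch; it assumes the **NetworkData** diagnostics are routed to your Log Analytics workspace):
+
+```kusto
+// Count connection network data records received in the last hour
+WVDConnectionNetworkData
+| where TimeGenerated > ago(1h)
+| count
+```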
+
+### Connection network data
+
+The network data you collect for your data tables includes the following information:
+
+- The **estimated available bandwidth (kilobytes per second)** is the average estimated available network bandwidth during each connection time interval.
+
+- The **estimated round trip time (milliseconds)**, which is the average estimated round trip time during each connection time interval. Round trip time is how long it takes a network request to go from the end-user's device over the network to the session host, then return to the device.
+
+- The **Correlation ID**, which is the activity ID of a specific Azure Virtual Desktop connection that's assigned to every diagnostic within that connection.
+
+- The **time generated**, which is a timestamp in UTC that marks when an event tracked by the data counter happened on the virtual machine (VM). All averages are measured over the time window that ends at the marked timestamp.
+
+- The **Resource ID**, which is a unique ID assigned to the Azure Virtual Desktop host pool associated with the data the diagnostics service collects for this table.
+
+- The **source system**, **Subscription ID**, **Tenant ID**, and **type** (table name).
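+
+A quick way to see these fields is to project them directly from the table (a sketch; the column names match those used in the sample queries in this article):
+
+```kusto
+// Inspect recent connection network data records
+WVDConnectionNetworkData
+| project TimeGenerated, CorrelationId, EstRoundTripTimeInMs, EstAvailableBandwidthKBps
+| take 10
+```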
+
+## Sample queries for Azure Log Analytics
+
+In this section, we have a list of queries that will help you review connection quality information. You can run queries in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query).
+
+>[!NOTE]
+>For each example, replace the *userupn* variable with the UPN of the user you want to look up.
+
+### Query average RTT and bandwidth
+
+To look up the average round trip time and bandwidth:
+
+```kusto
+// 90th, 50th, 10th Percentile for RTT in 10 min increments
+WVDConnectionNetworkData
+| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),RTTP50=percentile(EstRoundTripTimeInMs,50),RTTP10=percentile(EstRoundTripTimeInMs,10) by bin(TimeGenerated,10m)
+| render timechart
+// 90th, 50th, 10th Percentile for BW in 10 min increments
+WVDConnectionNetworkData
+| summarize BWP90=percentile(EstAvailableBandwidthKBps,90),BWP50=percentile(EstAvailableBandwidthKBps,50),BWP10=percentile(EstAvailableBandwidthKBps,10) by bin(TimeGenerated,10m)
+| render timechart
+```
+To look up the round-trip time and bandwidth per connection:
+
+```kusto
+// RTT and BW Per Connection Summary
+// Returns P90 Round Trip Time (ms) and Bandwidth (KBps) per connection with connection details.
+WVDConnectionNetworkData
+| summarize RTTP90=percentile(EstRoundTripTimeInMs,90),BWP90=percentile(EstAvailableBandwidthKBps,90),StartTime=min(TimeGenerated), EndTime=max(TimeGenerated) by CorrelationId
+| join kind=leftouter (
+WVDConnections
+| extend Protocol = iff(UdpUse in ("0","<>"),"TCP","UDP")
+| distinct CorrelationId, SessionHostName, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion, UserName
+) on CorrelationId
+| project CorrelationId, StartTime, EndTime, UserName, SessionHostName, RTTP90, BWP90, Protocol, ClientOS, ClientType, ClientVersion, ConnectionType, ResourceAlias, SessionHostSxSStackVersion
+```
+
+### Query data for a specific user
+
+To look up the bandwidth for a specific user:
+
+```kusto
+let user = "alias@domain";
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| where UserName == user
+| project EstAvailableBandwidthKBps, TimeGenerated
+| render columnchart
+```
+
+To look up the round trip time for a specific user:
+
+```kusto
+let user = "alias@domain";
+WVDConnectionNetworkData
+| join kind=leftouter (
+WVDConnections
+| distinct CorrelationId, UserName
+) on CorrelationId
+| where UserName == user
+| project EstRoundTripTimeInMs, TimeGenerated
+| render columnchart
+```
+
+To look up the top 10 users with the highest round trip time:
+
+```kusto
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| summarize AvgRTT=avg(EstRoundTripTimeInMs),RTT_P95=percentile(EstRoundTripTimeInMs,95) by UserName
+| top 10 by AvgRTT desc
+```
+
+To look up the 10 users with the lowest bandwidth:
+
+```kusto
+WVDConnectionNetworkData
+| join kind=leftouter (
+ WVDConnections
+ | distinct CorrelationId, UserName
+) on CorrelationId
+| summarize AvgBW=avg(EstAvailableBandwidthKBps),BW_P95=percentile(EstAvailableBandwidthKBps,95) by UserName
+| top 10 by AvgBW asc
+```
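+
+You can also compare connection quality by transport protocol, reusing the protocol derivation from the per-connection summary earlier in this article (a sketch under the same assumptions):
+
+```kusto
+// Average and 95th-percentile RTT split by transport protocol (TCP vs UDP)
+WVDConnectionNetworkData
+| join kind=leftouter (
+    WVDConnections
+    | extend Protocol = iff(UdpUse in ("0","<>"),"TCP","UDP")
+    | distinct CorrelationId, Protocol
+) on CorrelationId
+| summarize AvgRTT=avg(EstRoundTripTimeInMs), RTT_P95=percentile(EstRoundTripTimeInMs,95) by Protocol
+```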
## Azure Front Door
Azure Virtual Desktop uses [Azure Front Door](https://azure.microsoft.com/servic
## Next steps
+- Troubleshoot connection and latency issues at [Troubleshoot connection quality for Azure Virtual Desktop](troubleshoot-connection-quality.md).
- To check the best location for optimal latency, see the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/).
- For pricing plans, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
-- To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md).
+- To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md).
+- To learn about bandwidth requirements for Azure Virtual Desktop, see [Understanding Remote Desktop Protocol (RDP) Bandwidth Requirements for Azure Virtual Desktop](rdp-bandwidth.md).
+- To learn about Azure Virtual Desktop network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
+- Learn how to use Azure Monitor at [Get started with Azure Monitor for Azure Virtual Desktop](azure-monitor.md).
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
This article will walk you through the setup process for creating a host pool fo
## Prerequisites
-There are two different sets of requirements depending on if you're an IT professional setting up a deployment for your organization or an app developer serving applications to customers.
-
-### Requirements for IT professionals
-
-You'll need to enter the following parameters to create a host pool:
-
-- The VM image name
-- VM configuration
-- Domain and network properties
-- Azure Virtual Desktop host pool properties
-
-You'll also need to know the following things:
-
-- Where the source of the image you want to use is. Is it from Azure Gallery or is it a custom image?
-- Your domain join credentials.
-
-### Requirements for app developers
-
-If you're an app developer who's using remote app streaming for Azure Virtual Desktop to deliver apps to your customers, here's what you'll need to get started:
-
-- If you plan on serving your organization's app to end-users, make sure you actually have that app ready. For more information, see [How to host custom apps with Azure Virtual Desktop](./remote-app-streaming/custom-apps.md).
-- If existing Azure Gallery image options don't meet your needs, you'll also need to create your own custom image for your session host VMs. To learn more about how to create VM images, see [Prepare a Windows VHD or VHDX to upload to Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md) and [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md).
-- Your domain join credentials. If you don't already have an identity management system compatible with Azure Virtual Desktop, you'll need to set up identity management for your host pool. To learn more, see [Set up managed identities](./remote-app-streaming/identities.md).
-
-### Final requirements
-
-Finally, make sure you've registered the Microsoft.DesktopVirtualization resource provider. If you haven't already, go to **Subscriptions**, select the name of your subscription, and then select **Resource providers**. Search for **DesktopVirtualization**, select **Microsoft.DesktopVirtualization**, and then select **Register**.
-
-If you're an IT professional creating a network, when you create a Azure Virtual Desktop host pool with the Azure Resource Manager template, you can create a virtual machine from the Azure gallery, a managed image, or an unmanaged image. To learn more about how to create VM images, see [Prepare a Windows VHD or VHDX to upload to Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md) and [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md). (If you're an app developer, you don't need to worry about this part.)
-
-Last but not least, if you don't have an Azure subscription already, make sure to [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you start following these instructions.
+Before you can create a host pool, make sure you've completed the prerequisites. For more information, see [Prerequisites for Azure Virtual Desktop](prerequisites.md).
## Begin the host pool setup process
To start creating your new host pool:
1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com/).

   >[!NOTE]
- > If you're signing in to the US Gov portal, go to [https://portal.azure.us/](https://portal.azure.us/) instead.
+ > If you need to access the Azure US Gov portal, go to [https://portal.azure.us/](https://portal.azure.us/) instead.
>
- >If you're accessing the Azure China portal, go to [https://portal.azure.cn/](https://portal.azure.cn/).
+ > If you need to access the Azure China portal, go to [https://portal.azure.cn/](https://portal.azure.cn/) instead.
2. Enter **Azure Virtual Desktop** into the search bar, then find and select **Azure Virtual Desktop** under Services.
To set up your virtual machine within the Azure portal host pool setup process:
   - Windows 10 Enterprise multi-session, Version 2004
   - Windows 10 Enterprise multi-session, Version 2004 + Microsoft 365 Apps
- If you don't see the image you want, select **See all images**, which lets you select either another image in your gallery or an image provided by Microsoft and other publishers. Make sure that the image you choose is one of the [supported OS images](overview.md#supported-virtual-machine-os-images).
+ If you don't see the image you want, select **See all images**, which lets you select either another image in your gallery or an image provided by Microsoft and other publishers. Make sure that the image you choose is one of the [supported OS images](prerequisites.md#operating-systems-and-licenses).
> [!div class="mx-imgBorder"]
> ![A screenshot of the Azure portal with a list of images from Microsoft displayed.](media/marketplace-images.png)
virtual-desktop Fslogix Office App Rule Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-office-app-rule-editor.md
You'll need the following things to set up the rule editor:
## Install Office
-To install Office on your VHD or VHDX, enable the Remote Desktop Protocol in your VM, then follow the instructions in [Install Office on a VHD master image](install-office-on-wvd-master-image.md). When installing, make sure you're using [the correct licenses](overview.md#requirements).
+To install Office on your VHD or VHDX, enable the Remote Desktop Protocol in your VM, then follow the instructions in [Install Office on a VHD master image](install-office-on-wvd-master-image.md). When installing, make sure you're using [the correct licenses](prerequisites.md#operating-systems-and-licenses).
>[!NOTE]
>Azure Virtual Desktop requires Shared Computer Activation (SCA).
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
Title: What is Azure Virtual Desktop? - Azure
description: An overview of Azure Virtual Desktop. Previously updated : 07/14/2021 Last updated : 03/16/2022
Azure Virtual Desktop is a desktop and app virtualization service that runs on t
Here's what you can do when you run Azure Virtual Desktop on Azure:
-* Set up a multi-session Windows 10 deployment that delivers a full Windows 10 with scalability
-* Virtualize Microsoft 365 Apps for enterprise and optimize it to run in multi-user virtual scenarios
-* Provide Windows 7 virtual desktops with free Extended Security Updates
-* Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
-* Virtualize both desktops and apps
-* Manage Windows 10, Windows Server, and Windows 7 desktops and apps with a unified management experience
+- Set up a multi-session Windows 11 or Windows 10 deployment that delivers a full Windows experience with scalability
+- Present Microsoft 365 Apps for enterprise and optimize it to run in multi-user virtual scenarios
+- Provide Windows 7 virtual desktops with free Extended Security Updates
+- Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
+- Virtualize both desktops and apps
+- Manage desktops and apps from different Windows and Windows Server operating systems with a unified management experience
## Introductory video
-Learn about Azure Virtual Desktop, why it's unique, and what's new in this video:
+Learn about Azure Virtual Desktop (formerly Windows Virtual Desktop), why it's unique, and what's new in this video:
-<br></br><iframe src="https://www.youtube.com/embed/NQFtI3JLtaU" width="640" height="320" allowFullScreen="true" frameBorder="0"></iframe>
+> [!VIDEO https://www.youtube.com/embed/NQFtI3JLtaU]
For more videos about Azure Virtual Desktop, see [our playlist](https://www.youtube.com/watch?v=NQFtI3JLtaU&list=PLXtHYVsvn_b8KAKw44YUpghpD6lg-EHev).
For more videos about Azure Virtual Desktop, see [our playlist](https://www.yout
With Azure Virtual Desktop, you can set up a scalable and flexible environment:
-* Create a full desktop virtualization environment in your Azure subscription without running any gateway servers.
-* Publish as many host pools as you need to accommodate your diverse workloads.
-* Bring your own image for production workloads or test from the Azure Gallery.
-* Reduce costs with pooled, multi-session resources. With the new Windows 10 Enterprise multi-session capability, exclusive to Azure Virtual Desktop and Remote Desktop Session Host (RDSH) role on Windows Server, you can greatly reduce the number of virtual machines and operating system (OS) overhead while still providing the same resources to your users.
-* Provide individual ownership through personal (persistent) desktops.
+- Create a full desktop virtualization environment in your Azure subscription without running any gateway servers.
+- Publish host pools as needed to accommodate your diverse workloads.
+- Bring your own image for production workloads or test from the Azure Gallery.
+- Reduce costs with pooled, multi-session resources. With the new Windows 11 and Windows 10 Enterprise multi-session capability, exclusive to Azure Virtual Desktop and Remote Desktop Session Host (RDSH) role on Windows Server, you can greatly reduce the number of virtual machines and operating system overhead while still providing the same resources to your users.
+- Provide individual ownership through personal (persistent) desktops.
+- Use autoscale to automatically increase or decrease capacity based on time of day, specific days of the week, or as demand changes, helping to manage cost.
You can deploy and manage virtual desktops:
-* Use the Azure portal, Azure Virtual Desktop PowerShell and REST interfaces to configure the host pools, create app groups, assign users, and publish resources.
-* Publish full desktop or individual remote apps from a single host pool, create individual app groups for different sets of users, or even assign users to multiple app groups to reduce the number of images.
-* As you manage your environment, use built-in delegated access to assign roles and collect diagnostics to understand various configuration or user errors.
-* Use the new Diagnostics service to troubleshoot errors.
-* Only manage the image and virtual machines, not the infrastructure. You don't need to personally manage the Remote Desktop roles like you do with Remote Desktop Services, just the virtual machines in your Azure subscription.
+- Use the Azure portal, Azure CLI, PowerShell, and REST API to configure the host pools, create app groups, assign users, and publish resources.
+- Publish full desktop or individual remote apps from a single host pool, create individual app groups for different sets of users, or even assign users to multiple app groups to reduce the number of images.
+- As you manage your environment, use built-in delegated access to assign roles and collect diagnostics to understand various configuration or user errors.
+- Use the new Diagnostics service to troubleshoot errors.
+- Only manage the image and virtual machines, not the infrastructure. You don't need to personally manage the Remote Desktop roles like you do with Remote Desktop Services, just the virtual machines in your Azure subscription.
You can also assign and connect users to your virtual desktops:
-* Once assigned, users can launch any Azure Virtual Desktop client to connect to their published Windows desktops and applications. Connect from any device through either a native application on your device or the Azure Virtual Desktop HTML5 web client.
-* Securely establish users through reverse connections to the service, so you never have to leave any inbound ports open.
+- Once assigned, users can launch any Azure Virtual Desktop client to connect to their published Windows desktops and applications. Connect from any device through either a native application on your device or the Azure Virtual Desktop HTML5 web client.
+- Securely establish users through reverse connections to the service, so you don't need to open any inbound ports.
-## Requirements
-
-There are a few things you need to set up Azure Virtual Desktop and successfully connect your users to their Windows desktops and applications.
-
-We support the following operating systems, so make sure you have the [appropriate licenses](https://azure.microsoft.com/pricing/details/virtual-desktop/) for your users based on the desktop and apps you plan to deploy:
-
-|OS|Required license|
-|||
-|Windows 10 Enterprise multi-session or Windows 10 Enterprise|Microsoft 365 E3, E5, A3, A5, F3, Business Premium<br>Windows E3, E5, A3, A5|
-|Windows 7 Enterprise |Microsoft 365 E3, E5, A3, A5, F3, Business Premium<br>Windows E3, E5, A3, A5|
-|Windows Server 2012 R2, 2016, 2019, 2022|RDS Client Access License (CAL) with Software Assurance|
-
-Your infrastructure needs the following things to support Azure Virtual Desktop:
-
-* An [Azure Active Directory](../active-directory/index.yml).
-* A Windows Server Active Directory in sync with Azure Active Directory. You can configure this using Azure AD Connect (for hybrid organizations) or Azure AD Domain Services (for hybrid or cloud organizations).
- * A Windows Server AD in sync with Azure Active Directory. User is sourced from Windows Server AD and the Azure Virtual Desktop VM is joined to Windows Server AD domain.
- * A Windows Server AD in sync with Azure Active Directory. User is sourced from Windows Server AD and the Azure Virtual Desktop VM is joined to Azure AD Domain Services domain.
- * An Azure AD Domain Services domain. User is sourced from Azure Active Directory, and the Azure Virtual Desktop VM is joined to Azure AD Domain Services domain.
-* An Azure subscription, parented to the same Azure AD tenant, that contains a virtual network that either contains or is connected to the Windows Server Active Directory or Azure AD DS instance.
-
-User requirements to connect to Azure Virtual Desktop:
-
-* The user must be sourced from the same Active Directory that's connected to Azure AD. Azure Virtual Desktop does not support B2B or MSA accounts.
-* The UPN you use to subscribe to Azure Virtual Desktop must exist in the Active Directory domain the VM is joined to.
-
-The Azure virtual machines you create for Azure Virtual Desktop must be:
-
-* [Standard domain-joined](../active-directory-domain-services/compare-identity-solutions.md) or [Hybrid AD-joined](../active-directory/devices/hybrid-azuread-join-plan.md). [Azure AD-joined](deploy-azure-ad-joined-vm.md) virtual machines are available in preview.
-* Running one of the following [supported OS images](#supported-virtual-machine-os-images).
-
->[!NOTE]
->If you need an Azure subscription, you can [sign up for a one-month free trial](https://azure.microsoft.com/free/). If you're using the free trial version of Azure, you should use Azure AD Domain Services to keep your Windows Server Active Directory in sync with Azure Active Directory.
-
-For a list of URLs you should unblock for your Azure Virtual Desktop deployment to work as intended, see our [Required URL list](safe-url-list.md).
-
-Azure Virtual Desktop includes the Windows desktops and apps you deliver to users and the management solution, which is hosted as a service on Azure by Microsoft. Desktops and apps can be deployed on virtual machines (VMs) in any Azure region, and the management solution and data for these VMs will reside in the United States. This may result in data transfer to the United States.
-
-For optimal performance, make sure your network meets the following requirements:
-
-* Round-trip (RTT) latency from the client's network to the Azure region where host pools have been deployed should be less than 150 ms. Use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment) to view your connection health and recommended Azure region.
-* Network traffic may flow outside country/region borders when VMs that host desktops and apps connect to the management service.
-* To optimize for network performance, we recommend that the session host's VMs are located in the Azure region that is closest to the user.
-
-You can see a typical architectural setup of Azure Virtual Desktop for the enterprise in our [architecture documentation](/azure/architecture/example-scenario/wvd/windows-virtual-desktop).
-
-## Supported Remote Desktop clients
-
-The following Remote Desktop clients support Azure Virtual Desktop:
-
-* [Windows Desktop](./user-documentation/connect-windows-7-10.md)
-* [Web](./user-documentation/connect-web.md)
-* [macOS](./user-documentation/connect-macos.md)
-* [iOS](./user-documentation/connect-ios.md)
-* [Android](./user-documentation/connect-android.md)
-* Microsoft Store Client
-
-> [!IMPORTANT]
-> Azure Virtual Desktop doesn't support the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
-
-To learn more about URLs you must unblock to use the clients, see the [Safe URL list](safe-url-list.md).
-
-## Supported virtual machine OS images
-
-Azure Virtual Desktop follows the [Microsoft Lifecycle Policy](/lifecycle/) and supports the following x64 operating system images:
-
-* Windows 11 Enterprise multi-session
-* Windows 11 Enterprise
-* Windows 10 Enterprise multi-session
-* Windows 10 Enterprise
-* Windows 7 Enterprise
-* Windows Server 2022
-* Windows Server 2019
-* Windows Server 2016
-* Windows Server 2012 R2
-
-Azure Virtual Desktop doesn't support x86 (32-bit), Windows 10 Enterprise N, Windows 10 LTSB, Windows 10 LTSC, Windows 10 Pro, or Windows 10 Enterprise KN operating system images. Windows 7 also doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
-
-Available automation and deployment options depend on which OS and version you choose, as shown in the following table:
-
-|Operating system|Azure Image Gallery|Manual VM deployment|Azure Resource Manager template integration|Provision host pools on Azure Marketplace|
-|--|::|::|::|::|
-|Windows 11 Enterprise multi-session|Yes|Yes|Yes|Yes|
-|Windows 11 Enterprise|Yes|Yes|Yes|Yes|
-|Windows 10 Enterprise multi-session, version 1909 and later|Yes|Yes|Yes|Yes|
-|Windows 10 Enterprise, version 1909 and later|Yes|Yes|Yes|Yes|
-|Windows 7 Enterprise|Yes|Yes|No|No|
-|Windows Server 2022|Yes|Yes|No|No|
-|Windows Server 2019|Yes|Yes|No|No|
-|Windows Server 2016|Yes|Yes|Yes|Yes|
-|Windows Server 2012 R2|Yes|Yes|No|No|
+You can see a typical architectural setup of Azure Virtual Desktop for the enterprise in our [architecture documentation](/azure/architecture/example-scenario/wvd/windows-virtual-desktop?context=/azure/virtual-desktop/context/context).
## Next steps
-If you're using Azure Virtual Desktop (classic), you can get started with our tutorial at [Create a tenant in Azure Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md).
-
-If you're using the Azure Virtual Desktop with Azure Resource Manager integration, you'll need to create a host pool instead. Head to the following tutorial to get started.
+Read through the prerequisites for Azure Virtual Desktop before getting started creating a host pool.
> [!div class="nextstepaction"]
-> [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md)
+> [Prerequisites](prerequisites.md)
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
+
+ Title: Prerequisites for Azure Virtual Desktop
+description: Find what prerequisites you need to complete to successfully connect your users to their Windows desktops and applications.
++ Last updated : 03/09/2022+++
+# Prerequisites for Azure Virtual Desktop
+
+There are a few things you need to start using Azure Virtual Desktop. Here you can find what prerequisites you need to complete to successfully provide your users with virtual desktops and remote apps.
+
+At a high level, you'll need:
+
+> [!div class="checklist"]
+> - An Azure account with an active subscription
+> - An identity provider
+> - A supported operating system
+> - Appropriate licenses
+> - Network connectivity
+> - A Remote Desktop client
+
+## Azure account with an active subscription
+
+You'll need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription.
+
+> [!IMPORTANT]
+> You must have permission to register a resource provider, which requires the `*/register/action` operation. This is included if you are assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
+
+To check the status of the resource provider and register if needed:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Subscriptions**.
+1. Select the name of your subscription.
+1. Select **Resource providers**.
+1. Search for **Microsoft.DesktopVirtualization**.
+1. If the status is *NotRegistered*, select **Microsoft.DesktopVirtualization**, and then select **Register**.
+1. Verify that the status of Microsoft.DesktopVirtualization is **Registered**.
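+
+If you prefer the command line, the same check and registration can be done with Azure CLI. This is a sketch, not part of the original article; it assumes Azure CLI is installed and that you've already signed in with `az login` against the right subscription:
+
+```azurecli
+# Check the current registration state of the resource provider.
+az provider show --namespace Microsoft.DesktopVirtualization --query registrationState --output tsv
+
+# If the state is NotRegistered, register it. Registration can take a few minutes.
+az provider register --namespace Microsoft.DesktopVirtualization
+```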
+
+## Identity
+
+To access virtual desktops and remote apps from your session hosts, your users need to be able to authenticate. [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is Microsoft's centralized cloud identity service that enables this capability. Azure AD is always used to authenticate users for Azure Virtual Desktop. Session hosts can be joined to the same Azure AD tenant, or to an Active Directory domain using [Active Directory Domain Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (AD DS) or [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) (Azure AD DS), providing you with a choice of flexible configuration options.
+
+### Session hosts
+
+You need to join session hosts that provide virtual desktops and remote apps to an AD DS domain, Azure AD DS domain, or the same Azure AD tenant as your users.
+
+- If you're joining session hosts to an AD DS domain and you want to manage them using [Intune](/mem/intune/fundamentals/what-is-intune), you'll need to configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to enable [hybrid Azure AD join](../active-directory/devices/hybrid-azuread-join-plan.md).
+- If you're joining session hosts to an Azure AD DS domain, you can't manage them using [Intune](/mem/intune/fundamentals/what-is-intune).
+
+### Users
+
+Your users need accounts that are in Azure AD. If you're also using AD DS or Azure AD DS in your deployment of Azure Virtual Desktop, these accounts will need to be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user account is synchronized. You'll need to keep the following things in mind based on which account you use:
+
+- If you're using Azure AD with AD DS, you'll need to configure [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to synchronize user identity data between AD DS and Azure AD.
+- If you're using Azure AD with Azure AD DS, user accounts are synchronized one way from Azure AD to Azure AD DS. This synchronization process is automatic.
+
+### Supported identity scenarios
+
+The following table summarizes identity scenarios that Azure Virtual Desktop currently supports:
+
+| Identity scenario | Session hosts | User accounts |
+|--|--|--|
+| Azure AD + AD DS | Joined to AD DS | In AD DS and Azure AD, synchronized |
+| Azure AD + Azure AD DS | Joined to Azure AD DS | In Azure AD and Azure AD DS, synchronized |
+| Azure AD + Azure AD DS + AD DS | Joined to Azure AD DS | In Azure AD and AD DS, synchronized |
+| Azure AD only | Joined to Azure AD | In Azure AD |
+
+> [!NOTE]
+> If you're planning on using Azure AD only with [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial), you will need to [store profiles on Azure Files](create-profile-container-azure-ad.md), which is currently in public preview. In this scenario, user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md). You must create these accounts in AD DS and synchronize them to Azure AD. The service doesn't currently support environments where users are managed with Azure AD and synchronized to Azure AD DS.
+
+> [!IMPORTANT]
+> The user account must exist in the Azure AD tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts.
+>
+> The [UserPrincipalName (UPN)](../active-directory/hybrid/plan-connect-userprincipalname.md) you use to subscribe to Azure Virtual Desktop must exist in the Active Directory domain you're joining the session host to.
+
+### Deployment parameters
+
+You'll need to enter the following identity parameters when deploying session hosts:
+
+- Domain name, if using AD DS or Azure AD DS.
+- Credentials to join session hosts to the domain.
+- Organizational Unit (OU), which is an optional parameter that lets you place session hosts in the desired OU at deployment time.
+
+> [!IMPORTANT]
+> The account you use for joining a domain can't have multi-factor authentication (MFA) enabled.
+>
+> When joining an Azure AD DS domain, the account you use must be part of the Azure AD DC administrators group.
+
+## Operating systems and licenses
+
+You have a choice of operating systems that you can use for session hosts to provide virtual desktops and remote apps. You can use different operating systems with different host pools to provide flexibility to your users. Supported dates are in line with the [Microsoft Lifecycle Policy](/lifecycle/). We support the following 64-bit versions of these operating systems:
+
+|Operating system |Applicable license|
+|||
+|<ul><li>Windows 11 Enterprise multi-session</li><li>Windows 11 Enterprise</li><li>Windows 10 Enterprise multi-session, version 1909 and later</li><li>Windows 10 Enterprise, version 1909 and later</li><li>Windows 7 Enterprise</li></ul>|<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>|
+|<ul><li>Windows Server 2022</li><li>Windows Server 2019</li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li></ul>|<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses</li></ul>|
+
+> [!NOTE]
+> Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table. In addition, Windows 7 doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
+
+You can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or your own custom images stored in an Azure Compute Gallery, as a managed image, or as a storage blob. To learn more about how to create custom images, see:
+
+- [Store and share images in an Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
+- [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md).
+- [Prepare a Windows VHD or VHDX to upload to Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
+
+You can deploy virtual machines (VMs) to be used as session hosts from these images with any of the following methods:
+
+- Automatically, as part of the [host pool setup process](create-host-pools-azure-marketplace.md).
+- Manually, in the Azure portal and [adding to a host pool after you've created it](expand-existing-host-pool.md).
+- Programmatically, with [Azure CLI, PowerShell](create-host-pools-powershell.md), or [REST API](/rest/api/desktopvirtualization/).
+
+There are different automation and deployment options available depending on which operating system and version you choose, as shown in the following table:
+
+|Operating system|Azure Image Gallery|Manual VM deployment|Azure Resource Manager template integration|Deploy host pools from Azure Marketplace|
+|--|::|::|::|::|
+|Windows 11 Enterprise multi-session|Yes|Yes|Yes|Yes|
+|Windows 11 Enterprise|Yes|Yes|No|No|
+|Windows 10 Enterprise multi-session, version 1909 and later|Yes|Yes|Yes|Yes|
+|Windows 10 Enterprise, version 1909 and later|Yes|Yes|No|No|
+|Windows 7 Enterprise|Yes|Yes|No|No|
+|Windows Server 2022|Yes|Yes|No|No|
+|Windows Server 2019|Yes|Yes|Yes|Yes|
+|Windows Server 2016|Yes|Yes|No|No|
+|Windows Server 2012 R2|Yes|Yes|No|No|
+
+## Network
+
+There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their virtual desktops and remote apps while also giving them the best possible user experience.
+
+Users connecting to Azure Virtual Desktop use Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) on port 443, which securely establishes a reverse connection to the service. This means you don't need to open any inbound ports.
+
+To successfully deploy Azure Virtual Desktop, you'll need to meet the following network requirements:
+
+- You'll need a virtual network for your session hosts. If you create your session hosts at the same time as a host pool, you must create this virtual network in advance for it to appear in the drop-down list. Your virtual network must be in the same Azure region as the session host.
+
+- Make sure this virtual network can connect to your domain controllers and relevant DNS servers if you're using AD DS or Azure AD DS, since you'll need to join session hosts to the domain.
+
+- Your session hosts and users need to be able to connect to the Azure Virtual Desktop service. This connection also uses TCP on port 443 to a specific list of URLs. For more information, see [Required URL list](safe-url-list.md). You must make sure these URLs aren't blocked by network filtering or a firewall in order for your deployment to work properly and be supported. If your users need to access Microsoft 365, make sure your session hosts can connect to [Microsoft 365 endpoints](/microsoft-365/enterprise/microsoft-365-endpoints).
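+
+As a quick way to verify outbound access from a session host, you can test the TCP 443 path with PowerShell's `Test-NetConnection`. This example isn't part of the original article; `rdweb.wvd.microsoft.com` is one endpoint from the required URL list, shown here only as an illustration:
+
+```powershell
+# Reports TcpTestSucceeded : True when outbound TCP 443 to the endpoint works.
+Test-NetConnection -ComputerName rdweb.wvd.microsoft.com -Port 443
+```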
+
+Also consider the following:
+
+- Your users may need access to applications and data that are hosted on different networks, so make sure your session hosts can connect to them.
+
+- Round-trip time (RTT) latency from the client's network to the Azure region that contains the host pools should be less than 150 ms. Use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to view your connection health and recommended Azure region. To optimize for network performance, we recommend you create session hosts in the Azure region closest to your users.
+
+- Use [Azure Firewall for Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md) to help you lock down your environment and filter outbound traffic.
+
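To get a rough feel for the round-trip time discussed above, you can time a TCP handshake, which costs roughly one network round trip. This is only a sketch under that assumption; the Experience Estimator remains the supported way to assess connection health:

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Time a TCP handshake to host:port in milliseconds.

    A handshake takes about one round trip, so this is a crude stand-in
    for the RTT figure that should stay under 150 ms.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0
```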
+> [!NOTE]
+> To keep Azure Virtual Desktop reliable and scalable, we aggregate traffic patterns and usage to check the health and performance of the infrastructure control plane. We aggregate this information from all locations where the service infrastructure is located, then send it to the US region. The data sent to the US region includes scrubbed data, but not customer data. For more information, see [Data locations for Azure Virtual Desktop](data-locations.md).
+
+To learn more, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
+
+## Remote Desktop clients
+
+Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to virtual desktops and remote apps. The following clients support Azure Virtual Desktop:
+
+- [Windows Desktop client](./user-documentation/connect-windows-7-10.md)
+- [Web client](./user-documentation/connect-web.md)
+- [macOS client](./user-documentation/connect-macos.md)
+- [iOS client](./user-documentation/connect-ios.md)
+- [Android client](./user-documentation/connect-android.md)
+- [Microsoft Store client](./user-documentation/connect-microsoft-store.md)
+
+> [!IMPORTANT]
+> Azure Virtual Desktop doesn't support connections from the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
+
+To learn which URLs clients use to connect and that you must allow through firewalls and internet filters, see the [Required URL list](safe-url-list.md).
+
+## Next steps
+
+Get started with Azure Virtual Desktop by creating a host pool. Head to the following tutorial to find out more.
+
+> [!div class="nextstepaction"]
+> [Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md)
virtual-desktop Set Up Golden Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-golden-image.md
This article will walk you through how to use the Azure portal to create a custom image to use for your Azure Virtual Desktop session hosts. This custom image, which we'll call a "golden image," contains all apps and configuration settings you want to apply to your deployment. There are other approaches to customizing your session hosts, such as using device management tools like [Microsoft Endpoint Manager](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) or automating your image build using tools like [Azure Image Builder](../virtual-machines/windows/image-builder-virtual-desktop.md) with [Azure DevOps](/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops&preserve-view=true). Which strategy works best depends on the complexity and size of your planned Azure Virtual Desktop environment and your current application deployment processes. ## Create an image from an Azure VM
-When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](overview.md#supported-virtual-machine-os-images). We recommend using a Windows 10 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
+When creating a new VM for your golden image, make sure to choose an OS that's in the list of [supported virtual machine OS images](prerequisites.md#operating-systems-and-licenses). We recommend using a Windows 10 multi-session (with or without Microsoft 365) or Windows Server image for pooled host pools. We recommend using Windows 10 Enterprise images for personal host pools. You can use either Generation 1 or Generation 2 VMs; Gen 2 VMs support features that aren't supported for Gen 1 machines. Learn more about Generation 1 and Generation 2 VMs at [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
### Take your first snapshot First, [create the base VM](../virtual-machines/windows/quick-create-portal.md) for your chosen image. After you've deployed the image, take a snapshot of the disk of your image VM. Snapshots are save states that will let you roll back any changes if you run into problems while building the image. Since you'll be taking many snapshots throughout the build process, make sure to give the snapshot a name you can easily identify. ### Customize your VM
virtual-desktop Troubleshoot Connection Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-connection-quality.md
+
+ Title: Troubleshoot Azure Virtual Desktop connection quality
+description: How to troubleshoot connection quality issues in Azure Virtual Desktop.
+Last updated: 03/16/2022
+# Troubleshooting connection quality in Azure Virtual Desktop
+
+If you experience issues with graphical quality in your Azure Virtual Desktop connection, you can use the Connection Network Data diagnostic table to investigate. Graphical quality during a connection is affected by many factors, such as network configuration, network load, or virtual machine (VM) load. The Connection Network Data table can help you determine which factor is causing the issue.
+
+## Addressing round trip time
+
+In Azure Virtual Desktop, latency up to 150 ms shouldn't impact user experience that doesn't involve rendering or video. Latencies between 150 ms and 200 ms should be fine for text processing. Latency above 200 ms may impact user experience.
+
+In addition, the Azure Virtual Desktop connection depends on the internet connection of the machine the user connects from. Users may lose connection or experience input delay in one of the following situations:
+
+ - The user doesn't have a stable local internet connection and the latency is over 200 ms.
+ - The network is saturated or rate-limited.
+
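The latency guidance above can be summarized as a small helper; the band labels are ours, but the thresholds come from the figures stated in this section:

```python
def latency_band(rtt_ms: float) -> str:
    """Map a measured round-trip time (ms) to the experience bands above."""
    if rtt_ms <= 150:
        return "no expected impact"              # up to 150 ms
    if rtt_ms <= 200:
        return "acceptable for text processing"  # 150 ms to 200 ms
    return "may impact user experience"          # above 200 ms
```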
+To reduce round trip time:
+
+- Reduce the physical distance between end-users and the server. When possible, your end-users should connect to VMs in the Azure region closest to them.
+
+- Check your network configuration. Firewalls, ExpressRoutes, and other network configuration features can affect round trip time.
+
+- Check if something is interfering with your network bandwidth. If your network's available bandwidth is too low, you may need to change your network settings to improve connection quality. Make sure your configured settings follow our [network guidelines](/windows-server/remote/remote-desktop-services/network-guidance).
+
+- Check your compute resources by looking at CPU utilization and available memory on your VM. You can view your compute resources by following the instructions in [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md#configuring-performance-counters) to set up a performance counter to track certain information. For example, you can use the Processor Information(_Total)\\% Processor Time counter to track CPU utilization, or the Memory(\*)\\Available Mbytes counter for available memory. Both of these counters are enabled by default in Azure Virtual Desktop Insights. If both counters show that CPU usage is too high or available memory is too low, your VM size or storage may be too small to support your users' workloads, and you'll need to upgrade to a larger size.
+
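The sizing decision above (CPU usage too high or available memory too low means the VM may be undersized) can be expressed as a simple rule. The threshold values below are illustrative assumptions, not figures from this article:

```python
def host_undersized(cpu_percent: float, available_mb: float,
                    cpu_threshold: float = 90.0,
                    mem_threshold_mb: float = 512.0) -> bool:
    """Flag a session host whose sampled counters suggest it is undersized.

    cpu_percent corresponds to the Processor Information(_Total)\\% Processor Time
    counter and available_mb to Memory(*)\\Available Mbytes; the default
    thresholds are assumptions for illustration only.
    """
    return cpu_percent >= cpu_threshold or available_mb <= mem_threshold_mb
```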
+## Optimize VM latency with the Azure Virtual Desktop Experience Estimator tool
+
+The [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/) can help you determine the best location to optimize the latency of your VMs. We recommend you use the tool every two to three months to make sure the optimal location hasn't changed as Azure Virtual Desktop rolls out to new areas.
+
+## Next steps
+
+For more information about how to diagnose connection quality, see [Connection quality in Azure Virtual Desktop](connection-latency.md).
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-migration-guide.md
Once all VMs have been migrated from your old Dedicated Host to the target Dedic
## Help and support
-If you have questions, ask community experts in [Microsoft Q&A](https://aka.ms/azure-dedicated-host-qa).
+If you have questions, ask community experts in [Microsoft Q&A](/answers/topics/azure-dedicated-host.html).
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
You can find the classic deployment model and Resource Manager representations o
| Multiple network interfaces on a VM |Network interfaces |If a VM has multiple network interfaces associated with it, each network interface becomes a top-level resource as part of the migration, along with all the properties. | | Load-balanced endpoint set |Load balancer |In the classic deployment model, the platform assigned an implicit load balancer for every cloud service. During migration, a new load-balancer resource is created, and the load-balancing endpoint set becomes load-balancer rules. | | Inbound NAT rules |Inbound NAT rules |Input endpoints defined on the VM are converted to inbound network address translation rules under the load balancer during the migration. |
-| VIP address |Public IP address with DNS name |The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. To retain the IP, you can [convert it to Reserved IP](https://docs.microsoft.com/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service) before migration. There will be downtime of about 60 seconds during this change.|
+| VIP address |Public IP address with DNS name |The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. To retain the IP, you can [convert it to Reserved IP](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service) before migration. There will be downtime of about 60 seconds during this change.|
| Virtual network |Virtual network |The virtual network is migrated, with all its properties, to the Resource Manager deployment model. A new resource group is created with the name `-migrated`. | | Reserved IPs |Public IP address with static allocation method |Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IPs can be migrated using [Move-AzureReservedIP](/powershell/module/servicemanagement/azure.service/move-azurereservedip). | | Public IP address per VM |Public IP address with dynamic allocation method |The public IP address associated with the VM is converted as a public IP address resource, with the allocation method set to dynamic. |
As part of migrating your resources from the classic deployment model to the Res
* [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md) * [Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-community-tools.md) * [Review most common migration errors](migration-classic-resource-manager-errors.md)
-* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
+* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
virtual-machines Monitor Vm Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm-reference.md
For more information, see a list of [platform metrics that are supported in Azur
## Metric dimensions
-For more information about metric dimensions, see [Multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics).
+For more information about metric dimensions, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
Azure virtual machines and virtual machine scale sets have the following dimensions that are associated with their metrics.
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
These instances provide excellent performance for many AI, ML, and analytics too
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br> InfiniBand: Supported, GPUDirect RDMA, 8 x 200 Gigabit HDR<br> Nvidia NVLink Interconnect: Supported<br>
virtual-machines Sizes General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-general.md
General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing
- The [Av2-series](av2-series.md) VMs can be deployed on a variety of hardware types and processors. A-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled, based upon the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
- > [!NOTE]
- > The A8, A9, A10 A11 VMs are planned for retirement on 3/2021. For more information, see [HPC Migration Guide](https://azure.microsoft.com/resources/hpc-migration-guide/). These VM sizes are in the original "A_v1" series, NOT "v2".
- [B-series burstable](sizes-b-series-burstable.md) VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, small databases and development and test environments. These workloads typically have burstable performance requirements. The B-Series provides these customers the ability to purchase a VM size with a price conscious baseline performance that allows the VM instance to build up credits when the VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the VM's baseline using up to 100% of the CPU when your application requires the higher CPU performance. - The [DCv2-series](dcv2-series.md) can help protect the confidentiality and integrity of your data and code while it's processed in the public cloud. These machines are backed by the latest generation of Intel XEON E-2288G Processor with SGX technology. With the Intel Turbo Boost Technology these machines can go up to 5.0GHz. DCv2 series instances enable customers to build secure enclave-based applications to protect their code and data while it's in use.
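The B-series credit banking described above can be sketched as a one-step simulation: running below baseline banks credits, bursting above baseline spends them. The function, its units, and the capping behavior are illustrative assumptions, not Azure's published accounting:

```python
def step_credits(credits: float, usage: float, baseline: float,
                 max_credits: float, dt_minutes: float = 1.0) -> float:
    """Advance a simplified burst-credit balance by one time step.

    Usage below the baseline accrues credits; usage above it spends them.
    The balance is clamped between zero and the size's credit cap.
    """
    credits += (baseline - usage) * dt_minutes
    return max(0.0, min(credits, max_credits))
```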
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
Make sure the following settings are configured correctly for remote access:
```powershell Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name UserAuthentication -Value 1 -Type DWord -Force
- Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name SecurityLayer -Value 1 -Type DWord -Force
- Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name fAllowSecProtocolNegotiation -Value 1 -Type DWord -Force
``` 1. Set the keep-alive value:
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-bicep.md
+
+ Title: 'Quickstart: Use a Bicep file to create a Windows VM'
+description: In this quickstart, you learn how to use a Bicep file to create a Windows virtual machine
+Last updated: 03/11/2022
+# Quickstart: Create a Windows virtual machine using a Bicep file
+
+**Applies to:** :heavy_check_mark: Windows VMs
+
+This quickstart shows you how to use a Bicep file to deploy a Windows virtual machine (VM) in Azure.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/vm-simple-windows/).
++
+Several resources are defined in the Bicep file:
+
+- [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/Microsoft.Network/virtualNetworks/subnets): create a subnet.
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/Microsoft.Storage/storageAccounts): create a storage account.
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/Microsoft.Network/publicIPAddresses): create a public IP address.
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/Microsoft.Network/networkSecurityGroups): create a network security group.
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/Microsoft.Network/virtualNetworks): create a virtual network.
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/Microsoft.Network/networkInterfaces): create a NIC.
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/Microsoft.Compute/virtualMachines): create a virtual machine.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with a unique username. You'll also be prompted to enter adminPassword. The minimum password length is 12 characters.
+
+ When the deployment finishes, you should see a message indicating that the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
+
+> [!div class="nextstepaction"]
+> [Azure Windows virtual machine tutorials](./tutorial-manage-vm.md)
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
Because Azure Files is designed to be a multi-user file share service, there are
## Azure NetApp Files
-The [Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-introduction) service is a complete storage solution for Oracle Databases in Azure VMs. Built on an enterprise-class, high-performance, metered file storage, it supports any workload type and is highly available by default. Together with the Oracle Direct NFS (dNFS) driver, Azure NetApp Files provides a highly optimized storage layer for the Oracle Database.
+The [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) service is a complete storage solution for Oracle Databases in Azure VMs. Built on an enterprise-class, high-performance, metered file storage, it supports any workload type and is highly available by default. Together with the Oracle Direct NFS (dNFS) driver, Azure NetApp Files provides a highly optimized storage layer for the Oracle Database.
-Azure NetApp Files provides efficient storage-based snapshots on the underlying storage system that uses a Redirect on Write (RoW) mechanism. While snapshots are extremely fast to take and restore, they only serve as a first-line-of-defence, which can account for the vast majority of the required restore operations of any given organization, which is often recovery from human error. However, snapshots are not a complete backup. To cover all backup and restore requirements, [external snapshot replicas](/azure/azure-netapp-files/cross-region-replication-introduction) and/or other [backup vaults](/azure/azure-netapp-files/backup-introduction) must be created in a (remote) geography to protect from regional outage. Read more about [how Azure NetApp Files snapshots work](/azure/azure-netapp-files/snapshots-introduction).
+Azure NetApp Files provides efficient storage-based snapshots on the underlying storage system that use a Redirect on Write (RoW) mechanism. While snapshots are extremely fast to take and restore, they serve only as a first line of defense, which can account for the vast majority of an organization's restore operations, most often recovery from human error. However, snapshots are not a complete backup. To cover all backup and restore requirements, [external snapshot replicas](../../../azure-netapp-files/cross-region-replication-introduction.md) and/or other [backup vaults](../../../azure-netapp-files/backup-introduction.md) must be created in a (remote) geography to protect from regional outage. Read more about [how Azure NetApp Files snapshots work](../../../azure-netapp-files/snapshots-introduction.md).
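The Redirect on Write (RoW) mechanism mentioned above can be illustrated with a toy model: taking a snapshot freezes the current block map without copying data, and later writes are redirected to new locations. This is purely conceptual (real storage systems track physical blocks, not Python dicts):

```python
class RowVolume:
    """Minimal Redirect-on-Write sketch, for illustration only."""

    def __init__(self):
        self.blocks = {}    # logical block address -> data
        self.snapshots = []

    def write(self, lba, data):
        # Redirect: build a new map entry; frozen snapshot maps stay untouched.
        # (The dict copy stands in for allocating a fresh physical block.)
        self.blocks = {**self.blocks, lba: data}

    def snapshot(self):
        # Freeze the current map; no block data is copied, so this is fast.
        self.snapshots.append(self.blocks)
        return len(self.snapshots) - 1

    def read(self, lba, snap=None):
        src = self.blocks if snap is None else self.snapshots[snap]
        return src.get(lba)
```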
-In order to ensure the creation of database consistent snapshots the backup process must be orchestrated between the database and the storage. Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases by handling all the orchestration required to put them into an application consistent state before taking a storage snapshot, after which it returns them to an operational state. Oracle databases are supported with AzAcSnap since [version 5.1](/azure/azure-netapp-files/azacsnap-release-notes#azacsnap-v51-preview-build-2022012585030).
+To ensure the creation of database-consistent snapshots, the backup process must be orchestrated between the database and the storage. The Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases by handling all the orchestration required to put them into an application-consistent state before taking a storage snapshot, after which it returns them to an operational state. Oracle databases are supported with AzAcSnap since [version 5.1](../../../azure-netapp-files/azacsnap-release-notes.md#azacsnap-v51-preview-build-2022012585030).
-To learn more about using Azure NetApp Files for Oracle Databases on Azure, read more [here](/azure/azure-netapp-files/azure-netapp-files-solution-architectures#oracle).
+To learn more about using Azure NetApp Files for Oracle Databases on Azure, read more [here](../../../azure-netapp-files/azure-netapp-files-solution-architectures.md#oracle).
## Azure Backup service
Azure Backup is now providing an [enhanced pre-script and post-script framework]
- [Create Oracle Database quickstart](oracle-database-quick-create.md) - [Back up Oracle Database to Azure Files](oracle-database-backup-azure-storage.md)-- [Back up Oracle Database using Azure Backup service](oracle-database-backup-azure-backup.md)
+- [Back up Oracle Database using Azure Backup service](oracle-database-backup-azure-backup.md)
virtual-machines High Availability Guide Standard Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-standard-load-balancer-outbound-connections.md
The configuration would look like:
2. Create Backend pool **MyBackendPoolOfPublicILB** and add the VMs. 1. Select the Virtual network 1. Select the VMs and their IP addresses and add them to the backend pool
-3. [Create outbound rules](../../../load-balancer/quickstart-load-balancer-standard-public-cli.md?tabs=option-1-create-load-balancer-standard%3ftabs%3doption-1-create-load-balancer-standard#create-outbound-rule-configuration). Currently it is not possible to create outbound rules from the Azure portal. You can create outbound rules with [Azure CLI](../../../cloud-shell/overview.md).
-
- ```azurecli
- az network lb outbound-rule create --address-pool MyBackendPoolOfPublicILB --frontend-ip-configs MyPublicILBFrondEndIP --idle-timeout 30 --lb-name MyPublicILB --name MyOutBoundRules --outbound-ports 10000 --enable-tcp-reset true --protocol All --resource-group MyResourceGroup
- ```
-
+3. Create a NAT gateway for outbound internet access. For more information, see [Tutorial: Create a NAT gateway - Azure CLI](../../../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
4. Create Network Security Group rules to restrict access to specific Public End Points. If there is an existing Network Security Group, you can adjust it. The example below shows how to enable access to the Azure management API: 1. Navigate to the Network Security Group 1. Click Outbound Security Rules
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
A vnet peering is the most performant way to connect securely and privately two
For SAP RISE/ECS deployments, virtual peering is the preferred way to establish connectivity with customer's existing Azure environment. Both the SAP vnet and customer vnet(s) are protected with network security groups (NSG), enabling communication on SAP and database ports through the vnet peering. Communication between the peered vnets is secured through these NSGs, limiting communication to customer's SAP environment. For details and a list of open ports, contact your SAP representative.
-SAP managed workload is preferably deployed in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as customer's central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](/azure/virtual-network/virtual-network-peering-overview) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region ideally should be matched with workload running in customer vnets due to latency and vnet peering cost considerations. However, some of the scenarios (for example, central S/4HANA deployment for a multi-national, globally presented company) also require to peer networks globally.
+SAP managed workload is preferably deployed in the same [Azure region](https://azure.microsoft.com/global-infrastructure/geographies/) as customer's central infrastructure and applications accessing it. Virtual network peering can be set up within the same region as your SAP managed environment, but also through [global virtual network peering](../../../virtual-network/virtual-network-peering-overview.md) between any two Azure regions. With SAP RISE/ECS available in many Azure regions, the region ideally should be matched with workload running in customer vnets due to latency and vnet peering cost considerations. However, some of the scenarios (for example, central S/4HANA deployment for a multi-national, globally presented company) also require peering networks globally.
:::image type="complex" source="./media/sap-rise-integration/sap-rise-peering.png" alt-text="Customer peering with SAP RISE/ECS"::: This diagram shows a typical SAP customer's hub and spoke virtual networks. Cross-tenant virtual network peering connects SAP RISE vnet to customer's hub vnet. :::image-end:::
-Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](/azure/virtual-network/create-peering-different-subscriptions). This can be accomplished by setting up the peering with the SAP provided network's Azure resource ID and have SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation and follow process documented at [Create a VNet peering - different subscriptions](/azure/virtual-network/create-peering-different-subscriptions#cli). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration and architecture to enable this process to be completed swiftly.
+Since SAP RISE/ECS runs in SAP's Azure tenant and subscriptions, the virtual network peering needs to be set up between [different tenants](../../../virtual-network/create-peering-different-subscriptions.md). This can be accomplished by setting up the peering with the SAP provided network's Azure resource ID and having SAP approve the peering. Add a user from the opposite AAD tenant as a guest user, accept the guest user invitation and follow the process documented at [Create a VNet peering - different subscriptions](../../../virtual-network/create-peering-different-subscriptions.md#cli). Contact your SAP representative for the exact steps required. Engage the respective team(s) within your organization that deal with network, user administration and architecture to enable this process to be completed swiftly.
## VPN Vnet-to-Vnet
With this architecture, central policies and security rules governing network co
If there's no existing Azure to on-premises connectivity, contact your SAP representative for details on which connection models can be established. Any on-premises to SAP RISE/ECS connection is then for reaching the SAP managed vnet only. The on-premises to SAP RISE/ECS connection isn't used to access customer's own Azure vnets.
-**Important to note**: A virtual network can have [only have one gateway](/azure/virtual-network/virtual-network-peering-overview#gateways-and-on-premises-connectivity), local or remote. With vnet peering established between SAP RISE/ECS using remote gateway transit like in above architecture, no gateways can be added in the SAP RISE/ECS vnet. A combination of vnet peering with remote gateway transit together with another VPN gateway in the SAP RISE/ECS vnet isn't possible.
+**Important to note**: A virtual network can have [only one gateway](../../../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity), local or remote. With vnet peering established between SAP RISE/ECS using remote gateway transit as in the above architecture, no gateways can be added in the SAP RISE/ECS vnet. A combination of vnet peering with remote gateway transit together with another VPN gateway in the SAP RISE/ECS vnet isn't possible.
## Virtual WAN with SAP RISE/ECS managed workloads
See a series of blog posts on the architecture of the SAP BTP Private Link Servi
Check out the documentation: - [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md)-- [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview)
+- [Virtual network peering](../../../virtual-network/virtual-network-peering-overview.md)
- [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Accelerated Networking is now enabled for your VM.
## Handle dynamic binding and revocation of virtual function Applications must run over the synthetic NIC that is exposed in the VM. If the application runs directly over the VF NIC, it doesn't receive **all** packets that are destined to the VM, since some packets show up over the synthetic interface. Running an application over the synthetic NIC guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running even if the VF is revoked during host servicing. Binding applications to the synthetic NIC is a **mandatory** requirement for all applications taking advantage of **Accelerated Networking**.
-For more details on application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](/azure/virtual-network/accelerated-networking-how-it-works#application-usage).
+For more details on application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md#application-usage).
## Enable Accelerated Networking on existing VMs If you've created a VM without Accelerated Networking, it's possible to enable this feature on an existing VM. The VM must support Accelerated Networking by meeting the following prerequisites:
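Enabling the feature on an existing VM can be sketched with the Azure CLI as follows. The resource names are hypothetical; the VM must be stopped and deallocated before the NIC is updated.

```shell
# Stop and deallocate the VM before changing its NIC (names are illustrative)
az vm deallocate --resource-group my-rg --name my-vm

# Enable Accelerated Networking on the NIC
az network nic update \
  --resource-group my-rg \
  --name my-vm-nic \
  --accelerated-networking true

# Restart the VM
az vm start --resource-group my-rg --name my-vm
```

Deallocating (rather than just stopping) is required so the VM is re-placed on a host that exposes the virtual function.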
A VM with Accelerated Networking enabled can't be resized to a VM instance that
## Next steps * Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md) * Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
-* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
-
+* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Previously updated : 02/25/2022 Last updated : 03/17/2022
Virtual Network NAT is a fully managed and highly resilient Network Address Tran
### Security
-With NAT, individual VMs (or other compute resources) don't need public IP addresses and can remain fully private. Resources without a public IP address can still reach external sources outside the virtual network. You can associate a public IP prefix to ensure that a contiguous set of IPs will be used for outbound. Destination firewall rules can be configured based on this predictable IP list.
+With a NAT gateway, individual VMs (or other compute resources) don't need public IP addresses and can remain private. Resources without a public IP address can still reach external sources outside the virtual network. You can associate a public IP prefix to ensure that a contiguous set of IPs will be used for outbound connections. Destination firewall rules can be configured based on this predictable IP list.
### Resiliency
-NAT is a fully managed and distributed service. It doesn't depend on any individual compute instances such as VMs or a single physical gateway device. NAT uses software defined networking making it highly resilient.
+Virtual Network NAT is a fully managed and distributed service. It doesn't depend on individual compute instances such as VMs or a single physical gateway device. Software defined networking makes a NAT gateway highly resilient.
### Scalability
-NAT can be associated to a subnet and can be used by all compute resources in that subnet. Further, all subnets in a virtual network can use the same resource. When associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound.
+A NAT gateway resource can be associated to a subnet and can be used by all compute resources in that subnet. All subnets in a virtual network can use the same resource. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound.
### Performance
-NAT won't affect the network bandwidth of your compute resources since it's a software defined networking service. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
+Virtual Network NAT is a software defined networking service. A NAT gateway won't affect the network bandwidth of your compute resources. Learn more about [NAT gateway's performance](nat-gateway-resource.md#performance).
## Virtual Network NAT basics
-NAT can be created in a specific availability zone and has redundancy built in within the specified zone. NAT is non-zonal by default. A non-zonal Virtual Network NAT is one that hasn't been associated to a specific zone and instead is assigned to a specific zone by Azure. NAT can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment.
+A NAT gateway can be created in a specific availability zone, with redundancy built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone; Azure assigns the zone. A NAT gateway can be isolated in a specific zone in [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment.
-NAT is fully scaled out from the start. There's no ramp up or scale-out operation required. Azure manages the operation of NAT for you. NAT always has multiple fault domains and can sustain multiple failures without service outage.
+Virtual Network NAT is scaled out from creation. There isn't a ramp-up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage.
-* Outbound connectivity can be defined for each subnet with NAT. Multiple subnets within the same virtual network can have different NATs. Or multiple subnets within the same virtual network can use the same NAT. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by NAT automatically without any customer configuration. NAT takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
+* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
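The per-subnet association described above can be sketched in the Azure CLI; resource names here are illustrative.

```shell
# Standard SKU public IP for the NAT gateway
az network public-ip create --resource-group my-rg --name my-nat-ip --sku Standard

# Create the NAT gateway resource
az network nat gateway create \
  --resource-group my-rg \
  --name my-nat-gateway \
  --public-ip-addresses my-nat-ip \
  --idle-timeout 4

# Associate the NAT gateway with a subnet; all outbound traffic
# from the subnet is then processed by the NAT gateway
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-subnet \
  --nat-gateway my-nat-gateway
```

Repeating the `subnet update` step for other subnets in the same virtual network lets them share the same NAT gateway resource.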
-* Presence of custom UDRs for virtual appliances and VPN ExpressRoutes override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
+* The presence of custom UDRs for virtual appliances and ExpressRoute overrides the NAT gateway for directing internet-bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
-* NAT supports TCP and UDP protocols only. ICMP isn't supported.
+* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported.
* A NAT gateway resource can use a:
NAT is fully scaled out from the start. There's no ramp up or scale-out operatio
* Public IP prefix
-* NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway. Basic load balancer and basic public IP can be upgraded to standard to work with NAT gateway.
+* Virtual Network NAT is compatible with standard SKU public IP addresses, public IP prefix resources, or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs, aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
-* To upgrade a basic load balancer to standard, see [Upgrade a public Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+* To upgrade a basic load balancer to standard, see [Upgrade a public Basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
-* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
-* NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
+* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
- * To migrate outbound access to a NAT gateway from default outbound access or from load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
+ * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
-* NAT can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet.
+* A NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet.
-* NAT allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
+* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
-* NAT can't span multiple virtual networks.
+* A NAT gateway can't span multiple virtual networks.
-* Multiple NATs can't be attached to a single subnet.
+* Multiple NAT gateways can't be attached to a single subnet.
-* NAT can't be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub)
+* A NAT gateway can't be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub)
-* The private side of NAT (virtual machine instances or other compute resources) sends TCP reset packets for attempts to communicate on a TCP connection that doesn't exist. One example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of NAT doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
+* The private side of a NAT gateway (virtual machine instances or other compute resources) sends TCP reset packets for attempts to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives.
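The idle timeout described above is configurable on the NAT gateway resource. A minimal Azure CLI sketch, assuming a hypothetical resource group and gateway name:

```shell
# Raise the TCP idle timeout from the default 4 minutes (valid range: 4-120)
az network nat gateway update \
  --resource-group my-rg \
  --name my-nat-gateway \
  --idle-timeout 30
```

Note that longer timeouts hold SNAT ports longer, which can contribute to port exhaustion under high connection volume.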
+## Pricing and SLA
+
+For Azure Virtual Network NAT pricing, see [NAT gateway pricing](https://azure.microsoft.com/pricing/details/virtual-network/#pricing).
+
+For information on the SLA, see [SLA for Virtual Network NAT](https://azure.microsoft.com/support/legal/sla/virtual-network-nat/v1_0/).
+ ## Next steps
-* Learn [how to get better outbound connectivity using an Azure NAT Gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
-* Learn about [NAT gateway resource](./nat-gateway-resource.md).
-* Learn more about [NAT gateway metrics](./nat-metrics.md).
+* Learn [how to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4)
+* Learn about [NAT gateway resource](./nat-gateway-resource.md)
+* Learn more about [NAT gateway metrics](./nat-metrics.md)
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
NAT gateway is not compatible with basic resources, such as Basic Load Balancer
### NAT gateway cannot be attached to a gateway subnet
-NAT gateway cannot be deployed in a gateway subnet. VPN gateway uses gateway subnets for VPN connections between site-to-site Azure virtual networks and local networks or between two Azure virtual networks. See [VPN gateway overview](/azure/vpn-gateway/vpn-gateway-about-vpngateways) to learn more about how gateway subnets are used.
+NAT gateway cannot be deployed in a gateway subnet. VPN gateway uses gateway subnets for site-to-site VPN connections between Azure virtual networks and local networks, or between two Azure virtual networks. See [VPN gateway overview](../../vpn-gateway/vpn-gateway-about-vpngateways.md) to learn more about how gateway subnets are used.
### IPv6 coexistence
NAT gateway has a configurable TCP idle timeout timer that defaults to 4 minutes
| Scenario | Evidence | Mitigation |
|---|---|---|
-| You would like to ensure that TCP connections stay active for long periods of time without idle timing out so you increase the TCP idle timeout timer setting. After a while you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: - **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer cannot be set lower than 4 minutes. - Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. - **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). - For connections going to Azure PaaS services, use **[Private Link](/azure/private-link/private-link-overview)**. Private Link eliminates the need to use public IPs of your NAT gateway which frees up more SNAT ports for outbound connections to the internet.|
+| You would like to ensure that TCP connections stay active for long periods of time without idle timing out so you increase the TCP idle timeout timer setting. After a while you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: - **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer cannot be set lower than 4 minutes. - Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. - **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). - For connections going to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway which frees up more SNAT ports for outbound connections to the internet.|
## Connection failures due to idle timeouts
Once the custom UDR is removed from the routing table, the NAT gateway public IP
### Private IPs are used to connect to Azure services by Private Link
-[Private Link](/azure/private-link/private-link-overview) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you will notice that the private IPs of your instances are used. See [Azure services listed here](/azure/private-link/availability) for all services supported by Private Link.
+[Private Link](../../private-link/private-link-overview.md) connects your Azure virtual networks privately to Azure PaaS services such as Storage, SQL, or Cosmos DB over the Azure backbone network instead of over the internet. Private Link uses the private IP addresses of virtual machine instances in your virtual network to connect to these Azure platform services instead of the public IP of NAT gateway. As a result, when looking at the source IP address used to connect to these Azure services, you will notice that the private IPs of your instances are used. See [Azure services listed here](../../private-link/availability.md) for all services supported by Private Link.
When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand on SNAT ports](#tcp-idle-timeout-timers-set-higher-than-the-default-value). Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion. To create a Private Link, see the following Quickstart guides to get started: -- [Create a Private Endpoint](/azure/private-link/create-private-endpoint-portal)-- [Create a Private Link](/azure/private-link/create-private-link-service-portal)
+- [Create a Private Endpoint](../../private-link/create-private-endpoint-portal.md)
+- [Create a Private Link](../../private-link/create-private-link-service-portal.md)
To check which Private Endpoints you have set up with Private Link: 1. From the Azure portal, search for Private Link in the search box.
-2. In the Private Link center, select Private Endpoints or Private Link services to see what configurations have been set up. See [Manage private endpoint connections](/azure/private-link/manage-private-endpoint#manage-private-endpoint-connections-on-azure-paas-resources) for more details.
+2. In the Private Link center, select Private Endpoints or Private Link services to see what configurations have been set up. See [Manage private endpoint connections](../../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources) for more details.
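The same check can be done from the command line. A sketch with the Azure CLI (resource group name is illustrative):

```shell
# List private endpoints across the current subscription
az network private-endpoint list --output table

# Or scope the listing to a single resource group
az network private-endpoint list --resource-group my-rg --output table
```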
Service endpoints can also be used to connect your virtual network to Azure PaaS services. To check if you have service endpoints configured for your virtual network: 1. From the Azure portal, navigate to your virtual network and select "Service endpoints" from Settings.
-2. All Service endpoints created will be listed along with which subnets they are configured. See [logging and troubleshooting Service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview#logging-and-troubleshooting) for more details.
+2. All service endpoints created will be listed, along with the subnets on which they're configured. See [logging and troubleshooting Service endpoints](../virtual-network-service-endpoints-overview.md#logging-and-troubleshooting) for more details.
>[!NOTE] >Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
To learn more about NAT gateway, see:
* [Virtual Network NAT](nat-overview.md) * [NAT gateway resource](nat-gateway-resource.md)
-* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
+* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
All Azure regions support DPDK.
Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling accelerated networking on the management interface isn't recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).
-On virtual machines that are using InfiniBand, ensure the appropriate `mlx4_ib` or `mlx5_ib` drivers are loaded, see [Enable InfiniBand](/azure/virtual-machines/workloads/hpc/enable-infiniband).
+On virtual machines that are using InfiniBand, ensure the appropriate `mlx4_ib` or `mlx5_ib` drivers are loaded. See [Enable InfiniBand](../virtual-machines/workloads/hpc/enable-infiniband.md).
## Install DPDK via system package (recommended)
When you're running the previous commands on a virtual machine, change *IP_SRC_A
* [EAL options](https://dpdk.org/doc/guides/testpmd_app_ug/run_app.html#eal-command-line-options) * [Testpmd commands](https://dpdk.org/doc/guides/testpmd_app_ug/run_app.html#testpmd-command-line-options)
-* [Packet dump commands](https://doc.dpdk.org/guides/tools/pdump.html#pdump-tool)
+* [Packet dump commands](https://doc.dpdk.org/guides/tools/pdump.html#pdump-tool)
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
For more information, see [FAQ about classic to Azure Resource Manager migration
### How can I report an issue?
-You can post your questions about your migration issues to the [Microsoft Q&A](https://aka.ms/AAflal1) page. It's recommended that you post all your questions on this forum. If you have a support contract, you can also file a support request.
+You can post your questions about your migration issues to the [Microsoft Q&A](/answers/topics/azure-virtual-network.html) page. It's recommended that you post all your questions on this forum. If you have a support contract, you can also file a support request.
vpn-gateway Vpn Gateway Howto Vnet Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md
When creating additional connections, it's important to verify that the IP addre
* TestVNet5: 10.51.0.0/16 & 10.52.0.0/16 * FrontEnd: 10.51.0.0/24 * BackEnd: 10.52.0.0/24
-* GatewaySubnet: 10.52.255.0.0/27
+* GatewaySubnet: 10.52.255.0/27
* GatewayName: VNet5GW * Public IP: VNet5GWIP * VPNType: RouteBased
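The GatewaySubnet from the values above could be created with the Azure CLI as follows; the resource group name is illustrative.

```shell
# Create the gateway subnet with the /27 prefix listed above
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name TestVNet5 \
  --name GatewaySubnet \
  --address-prefixes 10.52.255.0/27
```

The subnet must be named exactly `GatewaySubnet` for the VPN gateway to use it.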
This step is split into two CLI sessions marked as **[Subscription 1]**, and **[
## Next steps * Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see the [Virtual Machines documentation](../index.yml).
-* For information about BGP, see the [BGP Overview](vpn-gateway-bgp-overview.md) and [How to configure BGP](vpn-gateway-bgp-resource-manager-ps.md).
+* For information about BGP, see the [BGP Overview](vpn-gateway-bgp-overview.md) and [How to configure BGP](vpn-gateway-bgp-resource-manager-ps.md).