Updates from: 01/29/2022 02:08:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/billing.md
To change your pricing tier, follow these steps:
1. Select the pricing tier that includes the features you want to enable. ![Screenshot that shows how to select the pricing tier.](media/billing/select-tier.png)+
+Learn about the [Azure AD features that are supported in Azure AD B2C](supported-azure-ad-features.md).
## Switch to MAU billing (pre-November 2019 Azure AD B2C tenants)
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-adfs-saml.md
You can configure how to sign the SAML request in Azure AD B2C. The [XmlSignatur
#### Option 2: Set the signature algorithm in AD FS
-Alternatively, you can configure the expected the SAML request signature algorithm in AD FS.
+Alternatively, you can configure the expected SAML request signature algorithm in AD FS.
1. In Server Manager, select **Tools**, and then select **AD FS Management**.
1. Select the **Relying Party Trust** you created earlier.
1. Select **Properties**, then select **Advanced**.
1. Configure the **Secure hash algorithm**, and select **OK** to save the changes.
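If you prefer to script this instead of using the AD FS Management console, the same setting can be changed with the AD FS PowerShell module. The following is a minimal sketch, not part of the original steps; `<RP Name>` stands for the relying party trust you created earlier.

```powershell
# Sketch: set the expected request signature algorithm on the relying party trust.
Set-AdfsRelyingPartyTrust -TargetName '<RP Name>' `
    -SignatureAlgorithm 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256'

# Confirm the change.
(Get-AdfsRelyingPartyTrust -Name '<RP Name>').SignatureAlgorithm
```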
+### The HTTP-Redirect request does not contain the required parameter 'Signature' for a signed request (AADB2C90168)
+
+#### Option 1: Set the ResponsesSigned to false in Azure AD B2C
+
+You can disable the requirement for a signed message in Azure AD B2C. The following example configures Azure AD B2C not to require the 'Signature' parameter for the signed request.
+
+```xml
+<Metadata>
+ <Item Key="WantsEncryptedAssertions">false</Item>
+ <Item Key="PartnerEntity">https://your-AD-FS-domain/federationmetadata/2007-06/federationmetadata.xml</Item>
+ <Item Key="ResponsesSigned">false</Item>
+</Metadata>
+```
+
+#### Option 2: Set the relying party in AD FS to sign both Message and Assertion
+
+Alternatively, you can configure the relying party in AD FS as follows:
+
+1. Open PowerShell as Administrator and run the ```Set-AdfsRelyingPartyTrust -TargetName <RP Name> -SamlResponseSignature MessageAndAssertion``` cmdlet to sign both the message and the assertion.
+2. Run ```Get-AdfsRelyingPartyTrust -Name <RP Name>``` and confirm that the **SamlResponseSignature** property is set to **MessageAndAssertion**.
+ ::: zone-end
active-directory-b2c Oauth2 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/oauth2-technical-profile.md
The following table lists the OAuth2 identity provider generic metadata. The met
| `ProviderName` | No | The name of the identity provider. |
| `ResponseErrorCodeParamName` | No | The name of the parameter that contains the error message returned over HTTP 200 (Ok). |
| `IncludeClaimResolvingInClaimsHandling`  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-| `ResolveJsonPathsInJsonTokens` | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data.0.to.0.email`.|
+| `ResolveJsonPathsInJsonTokens` | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data[0].to[0].email`.|
## Cryptographic keys
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/restful-technical-profile.md
The technical profile also returns claims that aren't returned by the identity
| ClaimUsedForRequestPayload| No | Name of a string claim that contains the payload to be sent to the REST API. |
| DebugMode | No | Runs the technical profile in debug mode. Possible values: `true`, or `false` (default). In debug mode, the REST API can return more information. See the [Returning error message](#returning-validation-error-message) section. |
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-| ResolveJsonPathsInJsonTokens | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data.0.to.0.email`.|
+| ResolveJsonPathsInJsonTokens | No | Indicates whether the technical profile resolves JSON paths. Possible values: `true`, or `false` (default). Use this metadata to read data from a nested JSON element. In an [OutputClaim](technicalprofiles.md#output-claims), set the `PartnerClaimType` to the JSON path element you want to output. For example: `firstName.localized`, or `data[0].to[0].email`.|
| UseClaimAsBearerToken| No| The name of the claim that contains the bearer token.|

## Error handling
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Previously updated : 01/25/2022 Last updated : 01/28/2022
Exchange Online, SharePoint Online, Teams, and MS Graph can synchronize key Cond
This process enables the scenario where users lose access to organizational files, email, calendar, or tasks from Microsoft 365 client apps or SharePoint Online immediately after network location changes.

> [!NOTE]
-> Not all client app and resource provider combinations are supported. See table below. The first column of this table refers to web applications launched via web browser (i.e. PowerPoint launched in web browser) while the remaining four columns refer to native applications running on each platform described. Additionally, references to "Office" encompass Word, Excel, and PowerPoint.
-
-Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Access policy is set.
+> Not all client app and resource provider combinations are supported. See the following tables. The first column of each table refers to web applications launched in a web browser (for example, PowerPoint opened in a web browser), while the remaining four columns refer to native applications running on each platform described. Additionally, references to "Office" encompass Word, Excel, and PowerPoint.
| | Outlook Web | Outlook Win32 | Outlook iOS | Outlook Android | Outlook Mac |
| : | :: | :: | :: | :: | :: |
Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Acc
| | Office web apps | Office Win32 apps | Office for iOS | Office for Android | Office for Mac |
| : | :: | :: | :: | :: | :: |
-| **SharePoint Online** | Not Supported | Supported | Supported | Supported | Supported |
+| **SharePoint Online** | Not Supported \* | Supported | Supported | Supported | Supported |
| **Exchange Online** | Not Supported | Supported | Supported | Supported | Supported |

| | OneDrive web | OneDrive Win32 | OneDrive iOS | OneDrive Android | OneDrive Mac |
Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Acc
| | Teams web | Teams Win32 | Teams iOS | Teams Android | Teams Mac |
| : | :: | :: | :: | :: | :: |
-| **Teams Service** | Supported | Supported | Supported | Supported | Supported |
-| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
-| **Exchange Online** | Supported | Supported | Supported | Supported | Supported |
+| **Teams Service** | Partially supported | Partially supported | Partially supported | Partially supported | Partially supported |
+| **SharePoint Online** | Partially supported | Partially supported | Partially supported | Partially supported | Partially supported |
+| **Exchange Online** | Partially supported | Partially supported | Partially supported | Partially supported | Partially supported |
+
+> \* Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Access policy is set.
## Client Capabilities
For an explanation of the office update channels, see [Overview of update channe
### Coauthoring in Office apps
-When multiple users are collaborating on a document at the same time, their access to the document may not be immediately revoked by CAE based on user revocation or policy change events. In this case, the user loses access completely after:
+When multiple users are collaborating on a document at the same time, their access to the document may not be immediately revoked by CAE based on policy change events. In this case, the user loses access completely after:
- Closing the document
- Closing the Office app
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Organizations who have deployed Microsoft Intune can use the information returne
* Requiring a PIN to unlock
* Requiring device encryption
* Requiring a minimum or maximum operating system version
-* Requiring a device is not jailbroken or rooted
+* Requiring a device isn't jailbroken or rooted
-This policy compliance information is forwarded to Azure AD where Conditional Access can make decisions to grant or block access to resources. More information about device compliance policies can be found in the article, [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started)
+Policy compliance information is sent to Azure AD, where Conditional Access decides to grant or block access to resources. More information about device compliance policies can be found in the article [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
+
+Requiring a hybrid Azure AD joined device is dependent on your devices already being hybrid Azure AD joined. For more information, see the article [Configure hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md).
## Template deployment
After confirming your settings using [report-only mode](howto-conditional-access
### Known behavior
-On Windows 7, iOS, Android, macOS, and some third-party web browsers Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
+On Windows 7, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
## Next steps
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identities-overview.md
Title: Workload identities
-description:
-
+description: Understand the concepts and supported scenarios for using workload identity in Azure Active Directory.
-
Here are some ways you can use workload identities:
## Next steps
-Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
+Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azureadjoin-plan.md
Title: How to plan your Azure Active Directory join implementation
+ Title: Plan your Azure Active Directory join deployment
description: Explains the steps that are required to implement Azure AD joined devices in your environment. Previously updated : 11/21/2019 Last updated : 01/20/2022
# How to: Plan your Azure AD join implementation
-Azure AD join allows you to join devices directly to Azure AD without the need to join to on-premises Active Directory while keeping your users productive and secure. Azure AD join is enterprise-ready for both at-scale and scoped deployments.
+Azure AD join allows you to join devices directly to Azure AD without the need to join to on-premises Active Directory while keeping your users productive and secure. Azure AD join is enterprise-ready for both at-scale and scoped deployments. SSO access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
This article provides you with the information you need to plan your Azure AD join implementation.
-
+ ## Prerequisites
-This article assumes that you are familiar with the [Introduction to device management in Azure Active Directory](./overview.md).
+This article assumes that you're familiar with the [Introduction to device management in Azure Active Directory](./overview.md).
## Plan your implementation
To plan your Azure AD join implementation, you should familiarize yourself with:
## Review your scenarios
-While Hybrid Azure AD join may be preferred for certain scenarios, Azure AD join enables you to transition towards a cloud-first model with Windows. If you are planning to modernize your devices management and reduce device-related IT costs, Azure AD join provides a great foundation towards achieving those objectives.
+While hybrid Azure AD join may be preferred for certain scenarios, Azure AD join enables you to transition towards a cloud-first model with Windows. If you're planning to modernize your devices management and reduce device-related IT costs, Azure AD join provides a great foundation towards achieving those goals.
-You should consider Azure AD join if your goals align with the following criteria:
+Consider Azure AD join if your goals align with the following criteria:
-- You are adopting Microsoft 365 as the productivity suite for your users.
+- You're adopting Microsoft 365 as the productivity suite for your users.
- You want to manage devices with a cloud device management solution.
- You want to simplify device provisioning for geographically distributed users.
- You plan to modernize your application infrastructure.

## Review your identity infrastructure
-Azure AD join works with both, managed and federated environments.
+Azure AD join works in managed and federated environments. We think most organizations will deploy hybrid Azure AD join with managed domains. Managed domain scenarios don't require configuring a federation server.
### Managed environment

A managed environment can be deployed either through [Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) or [Pass Through Authentication](../hybrid/how-to-connect-pta-quick-start.md) with Seamless Single Sign On.
-These scenarios don't require you to configure a federation server for authentication.
### Federated environment

A federated environment should have an identity provider that supports both WS-Trust and WS-Fed protocols:
When you're using AD FS, you need to enable the following WS-Trust endpoints:
`/adfs/services/trust/2005/certificatemixed`
`/adfs/services/trust/13/certificatemixed`
-If your identity provider does not support these protocols, Azure AD join does not work natively.
+If your identity provider doesn't support these protocols, Azure AD join doesn't work natively.
->[!NOTE]
+> [!NOTE]
> Currently, Azure AD join does not work with [AD FS 2019 configured with external authentication providers as the primary authentication method](/windows-server/identity/ad-fs/operations/additional-authentication-methods-ad-fs#enable-external-authentication-methods-as-primary). Azure AD join defaults to password authentication as the primary method, which results in authentication failures in this scenario.

### Smartcards and certificate-based authentication

You can't use smartcards or certificate-based authentication to join devices to Azure AD. However, smartcards can be used to sign in to Azure AD joined devices if you have AD FS configured.
You can't use smartcards or certificate-based authentication to join devices to
If you create users in your:

- **On-premises Active Directory**, you need to synchronize them to Azure AD using [Azure AD Connect](../hybrid/how-to-connect-sync-whatis.md).
-- **Azure AD**, no additional setup is required.
+- **Azure AD**, no extra setup is required.
-On-premises UPNs that are different from Azure AD UPNs are not supported on Azure AD joined devices. If your users use an on-premises UPN, you should plan to switch to using their primary UPN in Azure AD.
+On-premises UPNs that are different from Azure AD UPNs aren't supported on Azure AD joined devices. If your users use an on-premises UPN, you should plan to switch to using their primary UPN in Azure AD.
-UPN changes are only supported starting Windows 10 2004 update. Users on devices with this update will not have any issues after changing their UPNs. For devices prior to Windows 10 2004 update, users would have SSO and Conditional Access issues on their devices. They need to sign in to Windows through the "Other user" tile using their new UPN to resolve this issue.
+UPN changes are only supported starting Windows 10 2004 update. Users on devices with this update won't have any issues after changing their UPNs. For devices before the Windows 10 2004 update, users would have SSO and Conditional Access issues on their devices. They need to sign in to Windows through the "Other user" tile using their new UPN to resolve this issue.
## Assess your device management
UPN changes are only supported starting Windows 10 2004 update. Users on devices
Azure AD join: -- Is applicable to Windows 10 and Windows 11 devices. -- Is not applicable to previous versions of Windows or other operating systems. If you have Windows 7/8.1 devices, you must upgrade at least to Windows 10 to deploy Azure AD join.-- Is supported for FIPS-compliant TPM 2.0 but not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with Azure AD join. Microsoft does not provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Please contact your hardware OEM for support.
+- Supports Windows 10 and Windows 11 devices.
+- Isn't supported on previous versions of Windows or other operating systems. If you have Windows 7/8.1 devices, you must upgrade at least to Windows 10 to deploy Azure AD join.
+- Is supported for FIPS-compliant TPM 2.0 but not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
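If you're unsure which TPM specification your devices report, one quick way to check (a sketch, not from the article; run in an elevated PowerShell session) is to query the Win32_Tpm WMI class:

```powershell
# Sketch: report the TPM specification version (for example, a value starting with "2.0" or "1.2") on the local device.
Get-CimInstance -Namespace 'root/cimv2/Security/MicrosoftTpm' -ClassName Win32_Tpm |
    Select-Object IsEnabled_InitialValue, IsActivated_InitialValue, SpecVersion
```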
**Recommendation:** Always use the latest Windows 10 release to take advantage of updated features.
There are two approaches for managing Azure AD joined devices:
- **MDM-only** - A device is exclusively managed by an MDM provider like Intune. All policies are delivered as part of the MDM enrollment process. For Azure AD Premium or EMS customers, MDM enrollment is an automated step that is part of an Azure AD join.
- **Co-management** - A device is managed by an MDM provider and SCCM. In this approach, the SCCM agent is installed on an MDM-managed device to administer certain aspects.
-If you are using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Endpoint Manager.
+If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Endpoint Manager.
-Review supported and unsupported policies to determine whether you can use an MDM solution instead of Group policies. For unsupported policies, consider the following:
+Review supported and unsupported policies to determine whether you can use an MDM solution instead of Group policies. For unsupported policies, consider the following questions:
- Are the unsupported policies necessary for Azure AD joined devices or users?
-- Are the unsupported policies applicable in a cloud driven deployment?
+- Are the unsupported policies applicable in a cloud-driven deployment?
-If your MDM solution is not available through the Azure AD app gallery, you can add it following the process
+If your MDM solution isn't available through the Azure AD app gallery, you can add it following the process
outlined in [Azure Active Directory integration with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm).
-Through co-management, you can use SCCM to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with SCCM. For more information on co-management for Windows 10 devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, please check with your MDM provider on applicable co-management scenarios.
+Through co-management, you can use SCCM to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with SCCM. For more information on co-management for Windows 10 devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios.
**Recommendation:** Consider MDM-only management for Azure AD joined devices.

## Understand considerations for applications and resources
-We recommend migrating applications from on-premises to cloud for a better user experience and access control. However, Azure AD joined devices can seamlessly provide access to both, on-premises and cloud applications. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
+We recommend migrating applications from on-premises to cloud for a better user experience and access control. Azure AD joined devices can seamlessly provide access to both, on-premises and cloud applications. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
The following sections list considerations for different types of applications and resources.

### Cloud-based applications
-If an application is added to Azure AD app gallery, users get SSO through Azure AD joined devices. No additional configuration is required. Users get SSO on both, Microsoft Edge and Chrome browsers. For Chrome, you need to deploy the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
+If an application is added to Azure AD app gallery, users get SSO through Azure AD joined devices. No other configuration is required. Users get SSO on both, Microsoft Edge and Chrome browsers. For Chrome, you need to deploy the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
All Win32 applications that:
Your users have SSO from Azure AD joined devices when a device has access to an
### Printers
-We recommend deploying [Universal Print](/universal-print/fundamentals/universal-print-whatis) to have a cloud based print management solution without any on-premises dependencies.
+We recommend deploying [Universal Print](/universal-print/fundamentals/universal-print-whatis) to have a cloud-based print management solution without any on-premises dependencies.
### On-premises applications relying on machine authentication
Azure AD joined devices don't support on-premises applications relying on machin
### Remote Desktop Services
-Remote desktop connection to an Azure AD joined devices requires the host machine to be either Azure AD joined or Hybrid Azure AD joined. Remote desktop from an unjoined or non-Windows device is not supported. For more information, see [Connect to remote Azure AD joined pc](/windows/client-management/connect-to-remote-aadj-pc)
+Remote desktop connection to an Azure AD joined devices requires the host machine to be either Azure AD joined or hybrid Azure AD joined. Remote desktop from an unjoined or non-Windows device isn't supported. For more information, see [Connect to remote Azure AD joined pc](/windows/client-management/connect-to-remote-aadj-pc)
Starting Windows 10 2004 update, users can also use remote desktop from an Azure AD registered Windows 10 device to an Azure AD joined device. ### RADIUS and Wi-Fi authentication
-Currently, Azure AD joined devices do not support RADIUS authentication for connecting to Wi-Fi access points, since RADIUS relies on presence of an on-premises computer object. As an alternative, you can use certificates pushed via Intune or user credentials to authenticate to Wi-Fi.
-
+Currently, Azure AD joined devices don't support RADIUS authentication for connecting to Wi-Fi access points, since RADIUS relies on presence of an on-premises computer object. As an alternative, you can use certificates pushed via Intune or user credentials to authenticate to Wi-Fi.
## Understand your provisioning options
-**Note**: Azure AD joined devices cannot be deployed using System Preparation Tool (Sysprep) or similar imaging tools
+**Note**: Azure AD joined devices can't be deployed using System Preparation Tool (Sysprep) or similar imaging tools.
-You can provision Azure AD join using the following approaches:
+You can provision Azure AD joined devices using the following approaches:
- **Self-service in OOBE/Settings** - In the self-service mode, users go through the Azure AD join process either during Windows Out of Box Experience (OOBE) or from Windows Settings. For more information, see [Join your work device to your organization's network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973).
-- **Windows Autopilot** - Windows Autopilot enables pre-configuration of devices for a smoother experience in OOBE to perform an Azure AD join. For more information, see the [Overview of Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot).
+- **Windows Autopilot** - Windows Autopilot enables pre-configuration of devices for a smoother Azure AD join experience in OOBE. For more information, see the [Overview of Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot).
- **Bulk enrollment** - Bulk enrollment enables an administrator-driven Azure AD join by using a bulk provisioning tool to configure devices. For more information, see [Bulk enrollment for Windows devices](/intune/windows-bulk-enroll).

Here's a comparison of these three approaches:
Here's a comparison of these three approaches
| Require device OEM support | No | Yes | No |
| Supported versions | 1511+ | 1709+ | 1703+ |
-Choose your deployment approach or approaches by reviewing the table above and reviewing the following considerations for adopting either approach:
+Choose your deployment approach or approaches by reviewing the previous table and reviewing the following considerations for adopting either approach:
- Are your users tech savvy enough to go through the setup themselves?
  - Self-service can work best for these users. Consider Windows Autopilot to enhance the user experience.
- Are your users remote or within corporate premises?
  - Self-service or Autopilot work best for remote users for a hassle-free setup.
- Do you prefer a user-driven or an admin-managed configuration?
- - Bulk enrollment works better for admin driven deployment to set up devices before handing over to users.
+ - Bulk enrollment works better for admin-driven deployment to set up devices before handing over to users.
- Do you purchase devices from 1-2 OEMs, or do you have a wide distribution of OEM devices?
  - If purchasing from limited OEMs who also support Autopilot, you can benefit from tighter integration with Autopilot.
The Azure portal allows you to control the deployment of Azure AD joined devices
### Users may join devices to Azure AD
-Set this option to **All** or **Selected** based on the scope of your deployment and who you want to allow to setup an Azure AD joined device.
+Set this option to **All** or **Selected** based on the scope of your deployment and who you want to set up an Azure AD joined device.
![Users may join devices to Azure AD](./media/azureadjoin-plan/01.png)
Choose **Selected** and select the users you want to add to the local administr
### Require multi-factor authentication (MFA) to join devices
-Select **Yes** if you require users to perform MFA while joining devices to Azure AD.
+Select **Yes** if you require users to do MFA while joining devices to Azure AD.
![Require multi-factor Auth to join devices](./media/azureadjoin-plan/03.png)
Select **Some** or **All** based on the scope of your deployment.
Based on your scope, one of the following happens:

- **User is in MDM scope**: If you have an Azure AD Premium subscription, MDM enrollment is automated along with Azure AD join. All scoped users must have an appropriate license for your MDM. If MDM enrollment fails in this scenario, Azure AD join will also be rolled back.
-- **User is not in MDM scope**: If users are not in MDM scope, Azure AD join completes without any MDM enrollment. This results in an unmanaged device.
+- **User is not in MDM scope**: If users aren't in MDM scope, Azure AD join completes without any MDM enrollment. This scope results in an unmanaged device.
### MDM URLs
There are three URLs that are related to your MDM configuration:
:::image type="content" source="./media/azureadjoin-plan/06.png" alt-text="Screenshot of part of the Azure Active Directory M D M configuration section, with U R L fields for M D M terms of use, discovery, and compliance." border="false":::
-Each URL has a predefined default value. If these fields are empty, please contact your MDM provider for more information.
+Each URL has a predefined default value. If these fields are empty, contact your MDM provider for more information.
### MAM settings
-MAM does not apply to Azure AD join.
+MAM doesn't apply to Azure AD join.
## Configure enterprise state roaming
You can use this implementation to [require managed devices for cloud app access
## Next steps
-> [!div class="nextstepaction"]
-> [Join a new Windows 10 device with Azure AD during a first run](azuread-joined-devices-frx.md)
-> [Join your work device to your organization's network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973)
-
-<!--Image references-->
-[1]: ./media/azureadjoin-plan/12.png
+- [Join a new Windows 10 device to Azure AD during a first run](azuread-joined-devices-frx.md)
+- [Join your work device to your organization's network](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973)
active-directory Concept Azure Ad Join Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-join-hybrid.md
Previously updated : 06/10/2021 Last updated : 01/26/2022
Hybrid Azure AD joined devices require network line of sight to your on-premises
| **Device sign in options** | Organizational accounts using: |
| | Password |
| | Windows Hello for Business for Win10 |
-| **Device management** | Group Policy |
-| | Configuration Manager standalone or co-management with Microsoft Intune |
+| **Device management** | [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) |
+| | [Configuration Manager standalone or co-management with Microsoft Intune](/mem/configmgr/comanage/overview) |
| **Key capabilities** | SSO to both cloud and on-premises resources |
| | Conditional Access through Domain join or through Intune if co-managed |
-| | Self-service Password Reset and Windows Hello PIN reset on lock screen |
-| | Enterprise State Roaming across devices |
+| | [Self-service Password Reset and Windows Hello PIN reset on lock screen](../authentication/howto-sspr-windows.md) |
![Hybrid Azure AD joined devices](./media/concept-azure-ad-join-hybrid/azure-ad-hybrid-joined-device.png)
Hybrid Azure AD joined devices require network line of sight to your on-premises
Use Azure AD hybrid joined devices if:

- You support down-level devices running Windows 7 and 8.1.
-- You want to continue to use Group Policy to manage device configuration.
+- You want to continue to use [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) to manage device configuration.
- You want to continue to use existing imaging solutions to deploy and configure devices.
- You have Win32 apps deployed to these devices that rely on Active Directory machine authentication.

## Next steps

- [Plan your hybrid Azure AD join implementation](hybrid-azuread-join-plan.md)
+- [Co-management using Configuration Manager and Microsoft Intune](/mem/configmgr/comanage/overview)
- [Manage device identities using the Azure portal](device-management-azure-portal.md)
- [Manage stale devices in Azure AD](manage-stale-devices.md)
active-directory Concept Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-join.md
Previously updated : 06/10/2021 Last updated : 01/26/2022
Any organization can deploy Azure AD joined devices no matter the size or indust
| | Windows Hello for Business |
| | FIDO2.0 security keys (preview) |
| **Device management** | Mobile Device Management (example: Microsoft Intune) |
-| | Co-management with Microsoft Intune and Microsoft Endpoint Configuration Manager |
+| | [Configuration Manager standalone or co-management with Microsoft Intune](/mem/configmgr/comanage/overview) |
| **Key capabilities** | SSO to both cloud and on-premises resources |
| | Conditional Access through MDM enrollment and MDM compliance evaluation |
-| | Self-service Password Reset and Windows Hello PIN reset on lock screen |
-| | Enterprise State Roaming across devices |
+| | [Self-service Password Reset and Windows Hello PIN reset on lock screen](../authentication/howto-sspr-windows.md) |
Azure AD joined devices are signed in to using an organizational Azure AD account. Access to resources in the organization can be further limited based on that Azure AD account and [Conditional Access policies](../conditional-access/howto-conditional-access-policy-compliant-device.md) applied to the device identity.
Azure AD joined devices can still maintain single sign-on access to on-premises
## Scenarios
-While Azure AD join is primarily intended for organizations that do not have an on-premises Windows Server Active Directory infrastructure, you can certainly use it in scenarios where:
+Azure AD join can be used in a variety of scenarios, for example when:
- You want to transition to cloud-based infrastructure using Azure AD and MDM like Intune.
- You can't use an on-premises domain join, for example, if you need to get mobile devices such as tablets and phones under control.
- Your users primarily need to access Microsoft 365 or other SaaS apps integrated with Azure AD.
- You want to manage a group of users in Azure AD instead of in Active Directory. This scenario can apply, for example, to seasonal workers, contractors, or students.
-- You want to provide joining capabilities to workers in remote branch offices with limited on-premises infrastructure.
+- You want to provide joining capabilities to workers who work from home or are in remote branch offices with limited on-premises infrastructure.
-You can configure Azure AD joined devices for all Windows 10 devices except for Windows 10 Home.
+You can configure Azure AD join for all Windows 10 devices except for Windows 10 Home.
The goal of Azure AD joined devices is to simplify:
Azure AD Join can be deployed by using any of the following methods:
## Next steps

- [Plan your Azure AD join implementation](azureadjoin-plan.md)
+- [Co-management using Configuration Manager and Microsoft Intune](/mem/configmgr/comanage/overview)
- [How to manage the local administrators group on Azure AD joined devices](assign-local-admin.md)
- [Manage device identities using the Azure portal](device-management-azure-portal.md)
- [Manage stale devices in Azure AD](manage-stale-devices.md)
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-azure-ad-register.md
Title: What are Azure AD registered devices?
-description: Learn how Azure AD registered devices provide your users with support for the Bring Your Own Device (BYOD) or mobile device scenarios.
+description: Learn how Azure AD registered devices provide your users with support for bring your own device (BYOD) or mobile device scenarios.
Previously updated : 06/09/2021 Last updated : 01/26/2022
# Azure AD registered devices
-The goal of Azure AD registered devices is to provide your users with support for the bring your own device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organization's resources using a personal device.
+The goal of Azure AD registered devices is to provide your users with support for bring your own device (BYOD) or mobile device scenarios. In these scenarios, a user can access your organization's resources using a personal device.
| Azure AD Registered | Description | | | |
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
+
+ Title: Configure hybrid Azure Active Directory join
+description: Learn how to configure hybrid Azure Active Directory join.
+Last updated : 01/20/2022
+# Configure hybrid Azure AD join
+
+Bringing your devices to Azure AD maximizes user productivity through single sign-on (SSO) across your cloud and on-premises resources. You can secure access to your resources with [Conditional Access](../conditional-access/howto-conditional-access-policy-compliant-device.md) at the same time.
+
+> [!VIDEO https://www.youtube-nocookie.com/embed/hSCVR1oJhFI]
+
+## Prerequisites
+
+- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later.
+ - Don't exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
+ - If the computer objects of the devices you want to be hybrid Azure AD joined belong to specific organizational units (OUs), configure the correct OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unit-based filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
+- Global administrator credentials for your Azure AD tenant.
+- Enterprise administrator credentials for each of the on-premises Active Directory Domain Services forests.
+- (**For federated domains**) At least Windows Server 2012 R2 with Active Directory Federation Services installed.
+- Users can register their devices with Azure AD. For more information about this setting, see [Configure device settings](device-management-azure-portal.md#configure-device-settings).
+
+Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network:
+
+- `https://enterpriseregistration.windows.net`
+- `https://login.microsoftonline.com`
+- `https://device.login.microsoftonline.com`
+- `https://autologon.microsoftazuread-sso.com` (If you use or plan to use seamless SSO)
+- Your organization's Security Token Service (STS) (**For federated domains**)
+
+> [!WARNING]
+> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs is excluded from TLS break-and-inspect. Failure to exclude these URLs may interfere with client certificate authentication, cause issues with device registration, and break device-based Conditional Access.
+
+If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
+
+If you don't use WPAD, you can configure WinHTTP proxy settings on your computer with a Group Policy Object (GPO) beginning with Windows 10 1709. For more information, see [WinHTTP Proxy Settings deployed by GPO](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo).
+
+> [!NOTE]
+> If you configure proxy settings on your computer by using WinHTTP settings, any computers that can't connect to the configured proxy will fail to connect to the internet.
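For a single test machine, you can also inspect or set the machine-wide WinHTTP proxy directly instead of through a GPO. This is an illustrative sketch only; the proxy address and bypass list are placeholders for your environment.

```powershell
# Sketch: view and set the machine-wide WinHTTP proxy (run from an elevated prompt).
netsh winhttp show proxy

# Placeholder proxy server and bypass list; replace with your organization's values.
netsh winhttp set proxy proxy-server="proxy.contoso.com:8080" bypass-list="*.contoso.com;<local>"
```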
+
+If your organization requires access to the internet via an authenticated outbound proxy, make sure that your Windows 10 computers can successfully authenticate to the outbound proxy. Because Windows 10 computers run device registration by using machine context, configure outbound proxy authentication by using machine context. Follow up with your outbound proxy provider on the configuration requirements.
+
+Verify devices can access the required Microsoft resources under the system account by using the [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script.
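The linked script is the supported way to validate connectivity under the system account. As a quick, informal spot check from a device, something like the following sketch (not a replacement for the script) can confirm basic reachability of the endpoints listed above:

```powershell
# Sketch: quick TCP reachability check of the hybrid Azure AD join endpoints over port 443.
$endpoints = @(
    'enterpriseregistration.windows.net',
    'login.microsoftonline.com',
    'device.login.microsoftonline.com',
    'autologon.microsoftazuread-sso.com'   # only needed if you use or plan to use seamless SSO
)

foreach ($endpoint in $endpoints) {
    $result = Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue
    '{0} reachable on 443: {1}' -f $endpoint, $result.TcpTestSucceeded
}
```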
+
+## Managed domains
+
+We think most organizations will deploy hybrid Azure AD join with managed domains. Managed domains use [password hash sync (PHS)](../hybrid/whatis-phs.md) or [pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md) with [seamless single sign-on](../hybrid/how-to-connect-sso.md). Managed domain scenarios don't require configuring a federation server.
+
+> [!NOTE]
+> Azure AD doesn't support smart cards or certificates in managed domains.
+
+Configure hybrid Azure AD join by using Azure AD Connect for a managed domain:
+
+1. Start Azure AD Connect, and then select **Configure**.
+1. In **Additional tasks**, select **Configure device options**, and then select **Next**.
+1. In **Overview**, select **Next**.
+1. In **Connect to Azure AD**, enter the credentials of a global administrator for your Azure AD tenant.
+1. In **Device options**, select **Configure Hybrid Azure AD join**, and then select **Next**.
+1. In **Device operating systems**, select the operating systems that devices in your Active Directory environment use, and then select **Next**.
+1. In **SCP configuration**, for each forest where you want Azure AD Connect to configure the SCP, complete the following steps, and then select **Next**.
+ 1. Select the **Forest**.
+ 1. Select an **Authentication Service**.
+ 1. Select **Add** to enter the enterprise administrator credentials.
+
+ ![Azure AD Connect SCP configuration managed domain](./media/howto-hybrid-azure-ad-join/azure-ad-connect-scp-configuration-managed.png)
+
+1. In **Ready to configure**, select **Configure**.
+1. In **Configuration complete**, select **Exit**.
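If you want to confirm that the wizard wrote the service connection point (SCP) to your forest, the following sketch reads it back. It isn't part of the wizard steps, assumes the ActiveDirectory PowerShell module, and uses the well-known object name that Azure AD device registration creates; adjust as needed for your environment.

```powershell
# Sketch: read the Azure AD device registration SCP from the configuration naming context.
Import-Module ActiveDirectory

$configNC = (Get-ADRootDSE).configurationNamingContext
$scpDN    = "CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,$configNC"

# The keywords attribute should contain your Azure AD tenant name and tenant ID.
Get-ADObject -Identity $scpDN -Properties keywords | Select-Object -ExpandProperty keywords
```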
+
+## Federated domains
+
+A federated environment should have an identity provider that supports the following requirements. If you have a federated environment that uses Active Directory Federation Services (AD FS), the following requirements are already supported.
+
+- **WIAORMULTIAUTHN claim:** This claim is required to do hybrid Azure AD join for Windows down-level devices.
+- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD. When you're using AD FS, you need to enable the following WS-Trust endpoints:
+ - `/adfs/services/trust/2005/windowstransport`
+ - `/adfs/services/trust/13/windowstransport`
+ - `/adfs/services/trust/2005/usernamemixed`
+ - `/adfs/services/trust/13/usernamemixed`
+ - `/adfs/services/trust/2005/certificatemixed`
+ - `/adfs/services/trust/13/certificatemixed`
+
+> [!WARNING]
+> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
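You can also review and enable these endpoints from PowerShell on the AD FS server. A minimal sketch follows; restarting the AD FS service after enabling endpoints is assumed to be handled separately.

```powershell
# Sketch: list the WS-Trust endpoint state, then enable one that is disabled.
Get-AdfsEndpoint | Where-Object { $_.AddressPath -like '/adfs/services/trust/*' } |
    Select-Object AddressPath, Enabled, Proxy

# Example: enable the 2005 usernamemixed endpoint if it isn't already enabled.
Enable-AdfsEndpoint -TargetAddressPath '/adfs/services/trust/2005/usernamemixed'
```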
+
+Configure hybrid Azure AD join by using Azure AD Connect for a federated environment:
+
+1. Start Azure AD Connect, and then select **Configure**.
+1. On the **Additional tasks** page, select **Configure device options**, and then select **Next**.
+1. On the **Overview** page, select **Next**.
+1. On the **Connect to Azure AD** page, enter the credentials of a global administrator for your Azure AD tenant, and then select **Next**.
+1. On the **Device options** page, select **Configure Hybrid Azure AD join**, and then select **Next**.
+1. On the **SCP** page, complete the following steps, and then select **Next**:
+ 1. Select the forest.
+ 1. Select the authentication service. You must select **AD FS server** unless your organization has exclusively Windows 10 clients and you have configured computer/device sync, or your organization uses seamless SSO.
+ 1. Select **Add** to enter the enterprise administrator credentials.
+
+ ![Azure AD Connect SCP configuration federated domain](./media/howto-hybrid-azure-ad-join/azure-ad-connect-scp-configuration-federated.png)
+
+1. On the **Device operating systems** page, select the operating systems that the devices in your Active Directory environment use, and then select **Next**.
+1. On the **Federation configuration** page, enter the credentials of your AD FS administrator, and then select **Next**.
+1. On the **Ready to configure** page, select **Configure**.
+1. On the **Configuration complete** page, select **Exit**.
+
+### Federation caveats
+
+With Windows 10 1803 or newer, if instantaneous hybrid Azure AD join for a federated environment using AD FS fails, we rely on Azure AD Connect to sync the computer object in Azure AD that's then used to complete the device registration for hybrid Azure AD join.
+
+## Other scenarios
+
+Organizations can test hybrid Azure AD join on a subset of their environment before a full rollout. The steps to complete a targeted deployment can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). Organizations should include a sample of users from varying roles and profiles in this pilot group. A targeted rollout will help identify any issues your plan may not have addressed before you enable it for the entire organization.
+
+Some organizations may not be able to use Azure AD Connect to configure AD FS. The steps to configure the claims manually can be found in the article [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md).
+
+### Government cloud
+
+For organizations in [Azure Government](https://azure.microsoft.com/global-infrastructure/government/), hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network:
+
+- `https://enterpriseregistration.microsoftonline.us`
+- `https://login.microsoftonline.us`
+- `https://device.login.microsoftonline.us`
+- `https://autologon.microsoft.us` (If you use or plan to use seamless SSO)
+
+## Troubleshoot hybrid Azure AD join
+
+If you experience issues with completing hybrid Azure AD join for domain-joined Windows devices, see:
+
+- [Troubleshooting devices using dsregcmd command](./troubleshoot-device-dsregcmd.md)
+- [Troubleshoot hybrid Azure AD join for Windows current devices](troubleshoot-hybrid-join-windows-current.md)
+- [Troubleshoot hybrid Azure AD join for Windows downlevel devices](troubleshoot-hybrid-join-windows-legacy.md)
+
+## Next steps
+
+- [Downlevel device enablement](howto-hybrid-join-downlevel.md)
+- [Hybrid Azure AD join verification](howto-hybrid-join-verify.md)
+- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Howto Hybrid Join Downlevel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-hybrid-join-downlevel.md
+
+ Title: Enable downlevel devices for hybrid Azure Active Directory join
+description: Configure older operating systems for hybrid Azure AD join
+Last updated : 01/20/2022
+# Enable older operating systems
+
+If some of your domain-joined devices are Windows [downlevel devices](hybrid-azuread-join-plan.md#windows-down-level-devices), you must complete the following steps to allow them to hybrid Azure AD join:
+
+- Configure the local intranet settings for device registration
+- Install Microsoft Workplace Join for Windows downlevel computers
+
+> [!NOTE]
+> Windows 7 support ended on January 14, 2020. For more information, see [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020).
+
+## Configure the local intranet settings for device registration
+
+To complete hybrid Azure AD join of your Windows downlevel devices, and avoid certificate prompts when devices authenticate to Azure AD, you can push a policy to your domain-joined devices to add the following URLs to the local intranet zone in Internet Explorer:
+
+- `https://device.login.microsoftonline.com`
+- `https://autologon.microsoftazuread-sso.com` (For seamless SSO)
+- Your organization's STS (**For federated domains**)
+
+You also must enable **Allow updates to status bar via script** in the user's local intranet zone.
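If you need to stage or verify these zone assignments on a test machine outside of Group Policy, the equivalent per-user registry values can be set directly. This is an illustrative sketch only (the supported approach is the GPO described above); zone 1 is the Local intranet zone.

```powershell
# Sketch: map the device registration and seamless SSO URLs to the Local intranet zone (1) for the current user.
$zoneMap = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'

$sites = @(
    @{ Domain = 'microsoftonline.com';      Subdomain = 'device.login' },
    @{ Domain = 'microsoftazuread-sso.com'; Subdomain = 'autologon' }     # for seamless SSO
)

foreach ($site in $sites) {
    $key = Join-Path $zoneMap "$($site.Domain)\$($site.Subdomain)"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name 'https' -Value 1 -Type DWord
}
```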
+
+## Install Microsoft Workplace Join for Windows downlevel computers
+
+To register Windows downlevel devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center.
+
+You can deploy the package by using a software distribution system like [Microsoft Endpoint Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations.
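As an illustration of a silent, scripted install, the package can be run through msiexec; the file path below is a placeholder for wherever you stage the downloaded installer.

```powershell
# Sketch: silent install of the Workplace Join package (installer path is a placeholder).
Start-Process -FilePath 'msiexec.exe' `
    -ArgumentList '/i "C:\Temp\WorkplaceJoin-x64.msi" /quiet /norestart' `
    -Wait
```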
+
+The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD by using the user credentials after it authenticates with Azure AD.
+
+## Next steps
+
+- [Hybrid Azure AD join verification](howto-hybrid-join-verify.md)
+- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Howto Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-hybrid-join-verify.md
+
+ Title: Verify hybrid Azure Active Directory join state
+description: Verify configurations for hybrid Azure AD joined devices
+Last updated : 01/20/2022
+# Verify hybrid Azure AD join
+
+Here are three ways to locate and verify the hybrid joined device state:
+
+## Locally on the device
+
+1. Open Windows PowerShell.
+2. Enter `dsregcmd /status`.
+3. Verify that both **AzureAdJoined** and **DomainJoined** are set to **YES**.
+4. You can use the **DeviceId** and compare the status on the service using either the Azure portal or PowerShell.
+
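If you want to script this local check, a small sketch (not part of the article) that pulls the relevant fields out of the `dsregcmd /status` output could look like this:

```powershell
# Sketch: extract AzureAdJoined, DomainJoined, and DeviceId from dsregcmd /status output.
$status = dsregcmd /status

foreach ($field in 'AzureAdJoined', 'DomainJoined', 'DeviceId') {
    $line = $status | Select-String -Pattern "^\s*$field\s*:\s*(.+)$" | Select-Object -First 1
    if ($line) {
        '{0} : {1}' -f $field, $line.Matches[0].Groups[1].Value.Trim()
    }
}
```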
+For downlevel devices, see the article [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md#step-1-retrieve-the-registration-status)
+
+## Using the Azure portal
+
+1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
+2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./device-management-azure-portal.md).
+3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if the device failed to register and Azure AD Connect is configured to sync the devices.
+4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed.
+
+## Using PowerShell
+
+Verify the device registration state in your Azure tenant by using **[Get-MsolDevice](/powershell/module/msonline/get-msoldevice)**. This cmdlet is in the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-msonlinev1).
+
+When you use the **Get-MSolDevice** cmdlet to check the service details:
+
+- An object with the **device ID** that matches the ID on the Windows client must exist.
+- The value for **DeviceTrustType** is **Domain Joined**. This setting is equivalent to the **Hybrid Azure AD joined** state on the **Devices** page in the Azure AD portal.
+- For devices that are used in Conditional Access, the value for **Enabled** is **True** and **DeviceTrustLevel** is **Managed**.
+
+1. Open Windows PowerShell as an administrator.
+2. Enter `Connect-MsolService` to connect to your Azure tenant.
+
+### Count all Hybrid Azure AD joined devices (excluding **Pending** state)
+
+```azurepowershell
+(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
+```
+
+### Count all Hybrid Azure AD joined devices with **Pending** state
+
+```azurepowershell
+(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
+```
+
+### List all Hybrid Azure AD joined devices
+
+```azurepowershell
+Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
+```
+
+### List all Hybrid Azure AD joined devices with **Pending** state
+
+```azurepowershell
+Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
+```
+
+### List details of a single device
+
+1. Enter `get-msoldevice -deviceId <deviceId>` (This **DeviceId** is obtained locally on the device).
+2. Verify that **Enabled** is set to **True**.
+
+## Next steps
+
+- [Downlevel device enablement](howto-hybrid-join-downlevel.md)
+- [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)
active-directory Hybrid Azuread Join Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-control.md
Title: Controlled validation of hybrid Azure AD join - Azure AD
-description: Learn how to do a controlled validation of hybrid Azure AD join before enabling it across the entire organization all at once
+ Title: Targeted deployments of hybrid Azure AD join
+description: Learn how to do a targeted deployment of hybrid Azure AD join before enabling it across the entire organization all at once.
Previously updated : 06/28/2019 Last updated : 01/20/2022
-# Controlled validation of hybrid Azure AD join
+# Hybrid Azure AD join targeted deployment
-When all of the pre-requisites are in place, Windows devices will automatically register as devices in your Azure AD tenant. The state of these device identities in Azure AD is referred as hybrid Azure AD join. More information about the concepts covered in this article can be found in the articles [Introduction to device management in Azure Active Directory](overview.md) and [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md).
+You can validate your [planning and prerequisites](hybrid-azuread-join-plan.md) for hybrid Azure AD join by using a targeted deployment before enabling it across the entire organization. This article explains how to accomplish a targeted deployment of hybrid Azure AD join.
-Organizations may want to do a controlled validation of hybrid Azure AD join before enabling it across their entire organization all at once. This article will explain how to accomplish a controlled validation of hybrid Azure AD join.
+## Targeted deployment of hybrid Azure AD join on Windows current devices
-## Controlled validation of hybrid Azure AD join on Windows current devices
+For hybrid Azure AD join, the minimum supported version of Windows 10 is version 1607. As a best practice, upgrade to the latest version of Windows 10 or Windows 11. If you need to support earlier operating systems, see the section [Supporting down-level devices](#supporting-down-level-devices).
-For devices running the Windows desktop operating system, the supported version is the Windows 10 Anniversary Update (version 1607) or later. As a best practice, upgrade to the latest version of Windows 10.
+To do a targeted deployment of hybrid Azure AD join on Windows current devices, you need to:
-To do a controlled validation of hybrid Azure AD join on Windows current devices, you need to:
-
-1. Clear the Service Connection Point (SCP) entry from Active Directory (AD) if it exists
-1. Configure client-side registry setting for SCP on your domain-joined computers using a Group Policy Object (GPO)
-1. If you are using AD FS, you must also configure the client-side registry setting for SCP on your AD FS server using a GPO
+1. Clear the Service Connection Point (SCP) entry from Active Directory (AD) if it exists.
+1. Configure the client-side registry setting for SCP on your domain-joined computers by using a Group Policy Object (GPO).
+1. If you're using Active Directory Federation Services (AD FS), you must also configure the client-side registry setting for SCP on your AD FS server using a GPO.
1. You may also need to [customize synchronization options](../hybrid/how-to-connect-post-installation.md#additional-tasks-available-in-azure-ad-connect) in Azure AD Connect to enable device synchronization. - ### Clear the SCP from AD Use the Active Directory Services Interfaces Editor (ADSI Edit) to modify the SCP objects in AD (a scripted alternative is sketched after these steps). 1. Launch the **ADSI Edit** desktop application from an administrative workstation or a domain controller as an Enterprise Administrator. 1. Connect to the **Configuration Naming Context** of your domain.
-1. Browse to **CN=Configuration,DC=contoso,DC=com** > **CN=Services** > **CN=Device Registration Configuration**
-1. Right click on the leaf object **CN=62a0ff2e-97b9-4513-943f-0d221bd30080** and select **Properties**
- 1. Select **keywords** from the **Attribute Editor** window and click **Edit**
- 1. Select the values of **azureADId** and **azureADName** (one at a time) and click **Remove**
-1. Close **ADSI Edit**
-
+1. Browse to **CN=Configuration,DC=contoso,DC=com** > **CN=Services** > **CN=Device Registration Configuration**.
+1. Right-click on the leaf object **CN=62a0ff2e-97b9-4513-943f-0d221bd30080** and select **Properties**.
+ 1. Select **keywords** from the **Attribute Editor** window and select **Edit**.
+ 1. Select the values of **azureADId** and **azureADName** (one at a time) and select **Remove**.
+1. Close **ADSI Edit**.
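+
+If you prefer to script this step, the following sketch removes the same two keyword values by using the Active Directory PowerShell module. The distinguished name and keyword values shown are placeholders; review the output of `Get-ADObject` and remove the exact strings it returns.
+
+```PowerShell
+# Placeholder DN: replace DC=contoso,DC=com with your configuration naming context
+$scpDn = 'CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,CN=Configuration,DC=contoso,DC=com'
+
+# Review the current keyword values first
+Get-ADObject -Identity $scpDn -Properties keywords | Select-Object -ExpandProperty keywords
+
+# Remove the azureADId and azureADName values (use the exact strings returned above)
+Set-ADObject -Identity $scpDn -Remove @{ keywords = @('azureADId:<your-tenant-id>', 'azureADName:contoso.com') }
+```
+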
### Configure client-side registry setting for SCP
Use the following example to create a Group Policy Object (GPO) to deploy a regi
1. Open a Group Policy Management console and create a new Group Policy Object in your domain. 1. Provide your newly created GPO a name (for example, ClientSideSCP).
-1. Edit the GPO and locate the following path: **Computer Configuration** > **Preferences** > **Windows Settings** > **Registry**
-1. Right-click on the Registry and select **New** > **Registry Item**
- 1. On the **General** tab, configure the following
- 1. Action: **Update**
- 1. Hive: **HKEY_LOCAL_MACHINE**
- 1. Key Path: **SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD**
- 1. Value name: **TenantId**
- 1. Value type: **REG_SZ**
- 1. Value data: The GUID or **Tenant ID** of your Azure AD instance (This value can be found in the **Azure portal** > **Azure Active Directory** > **Properties** > **Tenant ID**)
- 1. Click **OK**
-1. Right-click on the Registry and select **New** > **Registry Item**
- 1. On the **General** tab, configure the following
- 1. Action: **Update**
- 1. Hive: **HKEY_LOCAL_MACHINE**
- 1. Key Path: **SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD**
- 1. Value name: **TenantName**
- 1. Value type: **REG_SZ**
- 1. Value data: Your verified **domain name** if you are using federated environment such as AD FS. Your verified **domain name** or your onmicrosoft.com domain name for example, `contoso.onmicrosoft.com` if you are using managed environment
- 1. Click **OK**
-1. Close the editor for the newly created GPO
-1. Link the newly created GPO to the desired OU containing domain-joined computers that belong to your controlled rollout population
+1. Edit the GPO and locate the following path: **Computer Configuration** > **Preferences** > **Windows Settings** > **Registry**.
+1. Right-click on the Registry and select **New** > **Registry Item**.
+ 1. On the **General** tab, configure the following.
+ 1. Action: **Update**.
+ 1. Hive: **HKEY_LOCAL_MACHINE**.
+ 1. Key Path: **SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD**.
+ 1. Value name: **TenantId**.
+ 1. Value type: **REG_SZ**.
+ 1. Value data: The GUID or **Tenant ID** of your Azure AD instance (This value can be found in the **Azure portal** > **Azure Active Directory** > **Properties** > **Tenant ID**).
+ 1. Select **OK**.
+1. Right-click on the Registry and select **New** > **Registry Item**.
+ 1. On the **General** tab, configure the following.
+ 1. Action: **Update**.
+ 1. Hive: **HKEY_LOCAL_MACHINE**.
+ 1. Key Path: **SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD**.
+ 1. Value name: **TenantName**.
+ 1. Value type: **REG_SZ**.
+ 1. Value data: Your verified **domain name** if you're using a federated environment such as AD FS. If you're using a managed environment, use your verified **domain name** or your onmicrosoft.com domain name, for example `contoso.onmicrosoft.com`.
+ 1. Select **OK**.
+1. Close the editor for the newly created GPO.
+1. Link the newly created GPO to the correct OU containing domain-joined computers that belong to your controlled rollout population.
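+
+Before you link the GPO broadly, you can check the effect of these values on a single test machine. The following sketch writes the same two registry values directly from an elevated PowerShell session; the tenant ID and domain name are placeholders:
+
+```PowerShell
+# Placeholders: replace with your own tenant ID and verified domain name
+$path = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\CDJ\AAD'
+New-Item -Path $path -Force | Out-Null
+New-ItemProperty -Path $path -Name 'TenantId' -Value '<your-tenant-id>' -PropertyType String -Force | Out-Null
+New-ItemProperty -Path $path -Name 'TenantName' -Value 'contoso.com' -PropertyType String -Force | Out-Null
+Get-ItemProperty -Path $path | Select-Object TenantId, TenantName
+```
+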
### Configure AD FS settings
-If you are using AD FS, you first need to configure client-side SCP using the instructions mentioned above by linking the GPO to your AD FS servers. The SCP object defines the source of authority for device objects. It can be on-premises or Azure AD. When client-side SCP is configured for AD FS, the source for device objects is established as Azure AD.
+If you're using AD FS, you first need to configure client-side SCP using the instructions mentioned earlier by linking the GPO to your AD FS servers. The SCP object defines the source of authority for device objects. It can be on-premises or Azure AD. When client-side SCP is configured for AD FS, the source for device objects is established as Azure AD.
> [!NOTE] > If you don't configure the client-side SCP on your AD FS servers, the source for device identities is considered on-premises. AD FS will then start deleting device objects from the on-premises directory after the period defined in the AD FS Device Registration attribute "MaximumInactiveDays". AD FS Device Registration objects can be found by using the [Get-AdfsDeviceRegistration cmdlet](/powershell/module/adfs/get-adfsdeviceregistration).
-## Controlled validation of hybrid Azure AD join on Windows down-level devices
+## Supporting down-level devices
To register Windows down-level devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554) available on the Microsoft Download Center.
To control the device registration, you should deploy the Windows Installer pack
> [!NOTE] > If an SCP isn't configured in AD, follow the same approach described in [Configure client-side registry setting for SCP](#configure-client-side-registry-setting-for-scp) on your domain-joined computers using a Group Policy Object (GPO).
+## Post validation
-After you verify that everything works as expected, you can automatically register the rest of your Windows current and down-level devices with Azure AD by [configuring SCP using Azure AD Connect](hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
+After you verify that everything works as expected, you can automatically register the rest of your Windows current and down-level devices with Azure AD by [configuring the SCP using Azure AD Connect](hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
## Next steps
-[Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)
+- [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)
+- [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)
+- [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md)
+- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Hybrid Azuread Join Federated Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-federated-domains.md
- Title: Configure hybrid Azure Active Directory join for federated domains | Microsoft Docs
-description: Learn how to configure hybrid Azure Active Directory join for federated domains.
----- Previously updated : 05/20/2020------
-#Customer intent: As an IT admin, I want to set up hybrid Azure Active Directory (Azure AD) joined devices for federated domains so I can automatically create and manage device identities in Azure AD for my Active Directory domain-joined computers
---
-# Tutorial: Configure hybrid Azure Active Directory join for federated domains
-
-Like a user in your organization, a device is a core identity you want to protect. You can use a device's identity to protect your resources at any time and from any location. You can accomplish this goal by bringing device identities and managing them in Azure Active Directory (Azure AD) by using one of the following methods:
--- Azure AD join-- Hybrid Azure AD join-- Azure AD registration-
-Bringing your devices to Azure AD maximizes user productivity through single sign-on (SSO) across your cloud and on-premises resources. You can secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/howto-conditional-access-policy-compliant-device.md) at the same time.
-
-A federated environment should have an identity provider that supports the following requirements. If you have a federated environment using Active Directory Federation Services (AD FS), then the below requirements are already supported.
--- **WIAORMULTIAUTHN claim:** This claim is required to do hybrid Azure AD join for Windows down-level devices.-- **WS-Trust protocol:** This protocol is required to authenticate Windows current hybrid Azure AD joined devices with Azure AD.
- When you're using AD FS, you need to enable the following WS-Trust endpoints:
- `/adfs/services/trust/2005/windowstransport`
- `/adfs/services/trust/13/windowstransport`
- `/adfs/services/trust/2005/usernamemixed`
- `/adfs/services/trust/13/usernamemixed`
- `/adfs/services/trust/2005/certificatemixed`
- `/adfs/services/trust/13/certificatemixed`
-
-> [!WARNING]
-> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
-
-In this tutorial, you learn how to configure hybrid Azure AD join for Active Directory domain-joined computers devices in a federated environment by using AD FS.
-
-You learn how to:
-
-> [!div class="checklist"]
-> * Configure hybrid Azure AD join
-> * Enable Windows downlevel devices
-> * Verify the registration
-> * Troubleshoot
-
-## Prerequisites
-
-This tutorial assumes that you're familiar with these articles:
--- [What is a device identity?](overview.md)-- [How to plan your hybrid Azure AD join implementation](hybrid-azuread-join-plan.md)-- [How to do controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md)-
-To configure the scenario in this tutorial, you need:
--- Windows Server 2012 R2 with AD FS-- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later-
-Beginning with version 1.1.819.0, Azure AD Connect includes a wizard that you can use to configure hybrid Azure AD join. The wizard significantly simplifies the configuration process. The related wizard:
--- Configures the service connection points (SCPs) for device registration-- Backs up your existing Azure AD relying party trust-- Updates the claim rules in your Azure AD trust-
-The configuration steps in this article are based on using the Azure AD Connect wizard. If you have an earlier version of Azure AD Connect installed, you must upgrade it to 1.1.819 or later to use the wizard. If installing the latest version of Azure AD Connect isn't an option for you, see [how to manually configure hybrid Azure AD join](hybrid-azuread-join-manual.md).
-
-Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network:
--- `https://enterpriseregistration.windows.net`-- `https://login.microsoftonline.com`-- `https://device.login.microsoftonline.com`-- Your organization's Security Token Service (STS) (For federated domains)-- `https://autologon.microsoftazuread-sso.com` (If you use or plan to use seamless SSO)-
-> [!WARNING]
-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs are excluded from TLS break-and-inspect. Failure to exclude these URLs may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
-
-Beginning with Windows 10 1803, if the instantaneous hybrid Azure AD join for a federated environment by using AD FS fails, we rely on Azure AD Connect to sync the computer object in Azure AD that's subsequently used to complete the device registration for hybrid Azure AD join. Verify that Azure AD Connect has synced the computer objects of the devices you want to be hybrid Azure AD joined to Azure AD. If the computer objects belong to specific organizational units (OUs), you must also configure the OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Configure filtering by using Azure AD Connect](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
-
-> [!NOTE]
-> To get device registration sync join to succeed, as part of the device registration configuration, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to AAD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
-
-If your organization requires access to the internet via an outbound proxy, Microsoft recommends [implementing Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v%3dtechnet.10)) to enable Windows 10 computers for device registration with Azure AD. If you encounter issues configuring and managing WPAD, see [Troubleshoot automatic detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
-
-If you don't use WPAD and want to configure proxy settings on your computer, you can do so beginning with Windows 10 1709. For more information, see [Configure WinHTTP settings by using a group policy object (GPO)](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo).
-
-> [!NOTE]
-> If you configure proxy settings on your computer by using WinHTTP settings, any computers that can't connect to the configured proxy will fail to connect to the internet.
-
-If your organization requires access to the internet via an authenticated outbound proxy, you must make sure that your Windows 10 computers can successfully authenticate to the outbound proxy. Because Windows 10 computers run device registration by using machine context, you must configure outbound proxy authentication by using machine context. Follow up with your outbound proxy provider on the configuration requirements.
-
-To verify if the device is able to access the above Microsoft resources under the system account, you can use [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script.
-
-## Configure hybrid Azure AD join
-
-To configure a hybrid Azure AD join by using Azure AD Connect, you need:
--- The credentials of a global administrator for your Azure AD tenant -- The enterprise administrator credentials for each of the forests-- The credentials of your AD FS administrator-
-**To configure a hybrid Azure AD join by using Azure AD Connect**:
-
-1. Start Azure AD Connect, and then select **Configure**.
-
- ![Welcome](./media/hybrid-azuread-join-federated-domains/11.png)
-
-1. On the **Additional tasks** page, select **Configure device options**, and then select **Next**.
-
- ![Additional tasks](./media/hybrid-azuread-join-federated-domains/12.png)
-
-1. On the **Overview** page, select **Next**.
-
- ![Overview](./media/hybrid-azuread-join-federated-domains/13.png)
-
-1. On the **Connect to Azure AD** page, enter the credentials of a global administrator for your Azure AD tenant, and then select **Next**.
-
- ![Connect to Azure AD](./media/hybrid-azuread-join-federated-domains/14.png)
-
-1. On the **Device options** page, select **Configure Hybrid Azure AD join**, and then select **Next**.
-
- ![Device options](./media/hybrid-azuread-join-federated-domains/15.png)
-
-1. On the **SCP** page, complete the following steps, and then select **Next**:
-
- ![SCP](./media/hybrid-azuread-join-federated-domains/16.png)
-
- 1. Select the forest.
- 1. Select the authentication service. You must select **AD FS server** unless your organization has exclusively Windows 10 clients and you have configured computer/device sync, or your organization uses seamless SSO.
- 1. Select **Add** to enter the enterprise administrator credentials.
-
-1. On the **Device operating systems** page, select the operating systems that the devices in your Active Directory environment use, and then select **Next**.
-
- ![Device operating system](./media/hybrid-azuread-join-federated-domains/17.png)
-
-1. On the **Federation configuration** page, enter the credentials of your AD FS administrator, and then select **Next**.
-
- ![Federation configuration](./media/hybrid-azuread-join-federated-domains/18.png)
-
-1. On the **Ready to configure** page, select **Configure**.
-
- ![Ready to configure](./media/hybrid-azuread-join-federated-domains/19.png)
-
-1. On the **Configuration complete** page, select **Exit**.
-
- ![Configuration complete](./media/hybrid-azuread-join-federated-domains/20.png)
-
-## Enable Windows downlevel devices
-
-If some of your domain-joined devices are Windows downlevel devices, you must:
--- Configure the local intranet settings for device registration-- Install Microsoft Workplace Join for Windows downlevel computers-
-> [!NOTE]
-> Windows 7 support ended on January 14, 2020. For more information, [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020).
-
-### Configure the local intranet settings for device registration
-
-To successfully complete hybrid Azure AD join of your Windows downlevel devices and to avoid certificate prompts when devices authenticate to Azure AD, you can push a policy to your domain-joined devices to add the following URLs to the local intranet zone in Internet Explorer:
--- `https://device.login.microsoftonline.com`-- Your organization's STS (For federated domains)-- `https://autologon.microsoftazuread-sso.com` (For seamless SSO)-
-You also must enable **Allow updates to status bar via script** in the user's local intranet zone.
-
-### Install Microsoft Workplace Join for Windows downlevel computers
-
-To register Windows downlevel devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center.
-
-You can deploy the package by using a software distribution system like [Microsoft Endpoint Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current branch of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations.
-
-The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD by using the user credentials after it authenticates with Azure AD.
-
-## Verify the registration
-
-Here are 3 ways to locate and verify the device state:
-
-### Locally on the device
-
-1. Open Windows PowerShell.
-2. Enter `dsregcmd /status`.
-3. Verify that both **AzureAdJoined** and **DomainJoined** are set to **YES**.
-4. You can use the **DeviceId** and compare the status on the service using either the Azure portal or PowerShell.
-
-For downlevel devices see the article [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md#step-1-retrieve-the-registration-status)
-
-### Using the Azure portal
-
-1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
-2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./device-management-azure-portal.md).
-3. If the **Registered** column says **Pending**, then Hybrid Azure AD Join has not completed. In federated environments, this can happen only if it failed to register and AAD connect is configured to sync the devices.
-4. If the **Registered** column contains a **date/time**, then Hybrid Azure AD Join has completed.
-
-### Using PowerShell
-
-Verify the device registration state in your Azure tenant by using **[Get-MsolDevice](/powershell/module/msonline/get-msoldevice)**. This cmdlet is in the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-msonlinev1).
-
-When you use the **Get-MSolDevice** cmdlet to check the service details:
--- An object with the **device ID** that matches the ID on the Windows client must exist.-- The value for **DeviceTrustType** is **Domain Joined**. This setting is equivalent to the **Hybrid Azure AD joined** state on the **Devices** page in the Azure AD portal.-- For devices that are used in Conditional Access, the value for **Enabled** is **True** and **DeviceTrustLevel** is **Managed**.-
-1. Open Windows PowerShell as an administrator.
-2. Enter `Connect-MsolService` to connect to your Azure tenant.
-
-#### Count all Hybrid Azure AD joined devices (excluding **Pending** state)
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### Count all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### List all Hybrid Azure AD joined devices
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List details of a single device:
-
-1. Enter `get-msoldevice -deviceId <deviceId>` (This is the **DeviceId** obtained locally on the device).
-2. Verify that **Enabled** is set to **True**.
-
-## Troubleshoot your implementation
-
-If you experience issues with completing hybrid Azure AD join for domain-joined Windows devices, see:
--- [Troubleshooting devices using dsregcmd command](./troubleshoot-device-dsregcmd.md)-- [Troubleshoot hybrid Azure AD join for Windows current devices](troubleshoot-hybrid-join-windows-current.md)-- [Troubleshoot hybrid Azure AD join for Windows downlevel devices](troubleshoot-hybrid-join-windows-legacy.md)-
-## Next steps
-
-Learn how to [manage device identities by using the Azure portal](device-management-azure-portal.md).
-
-<!--Image references-->
-[1]: ./media/active-directory-conditional-access-automatic-device-registration-setup/12.png
active-directory Hybrid Azuread Join Managed Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-managed-domains.md
- Title: Configure hybrid Azure Active Directory join for managed domains | Microsoft Docs
-description: Learn how to configure hybrid Azure Active Directory join for managed domains.
----- Previously updated : 10/25/2021------
-#Customer intent: As an IT admin, I want to set up hybrid Azure Active Directory (Azure AD) joined devices for managed domains so I can automatically create and manage device identities in Azure AD for my Active Directory domain-joined computers
---
-# Tutorial: Configure hybrid Azure Active Directory join for managed domains
-
-In this tutorial, you learn how to configure hybrid Azure Active Directory (Azure AD) join for Active Directory domain-joined devices. This method supports a managed environment that includes both on-premises Active Directory and Azure AD.
-
-Like a user in your organization, a device is a core identity you want to protect. You can use a device's identity to protect your resources at any time and from any location. You can accomplish this goal by managing device identities in Azure AD. Use one of the following methods:
--- Azure AD join-- Hybrid Azure AD join-- Azure AD registration-
-This article focuses on hybrid Azure AD join.
-
-Bringing your devices to Azure AD maximizes user productivity through single sign-on (SSO) across your cloud and on-premises resources. You can secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/howto-conditional-access-policy-compliant-device.md) at the same time.
-
-You can deploy a managed environment by using [password hash sync (PHS)](../hybrid/whatis-phs.md) or [pass-through authentication (PTA)](../hybrid/how-to-connect-pta.md) with [seamless single sign-on](../hybrid/how-to-connect-sso.md). These scenarios don't require you to configure a federation server for authentication.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Configure hybrid Azure AD join
-> * Enable Windows down-level devices
-> * Verify joined devices
-> * Troubleshoot
-
-## Prerequisites
--- The [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) (1.1.819.0 or later)-- The credentials of a global administrator for your Azure AD tenant-- The enterprise administrator credentials for each of the forests-
-Familiarize yourself with these articles:
--- [What is a device identity?](overview.md)-- [How To: Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)-- [Controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md)-
-> [!NOTE]
-> Azure AD doesn't support smartcards or certificates in managed domains.
-
-Verify that Azure AD Connect has synced the computer objects of the devices you want to be hybrid Azure AD joined to Azure AD. If the computer objects belong to specific organizational units (OUs), configure the OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unit–based filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
-
-> [!NOTE]
-> To get device registration sync join to succeed, as part of the device registration configuration, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to AAD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
-
-Beginning with version 1.1.819.0, Azure AD Connect includes a wizard to configure hybrid Azure AD join. The wizard significantly simplifies the configuration process. The wizard configures the service connection points (SCPs) for device registration.
-
-The configuration steps in this article are based on using the wizard in Azure AD Connect.
-
-Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network:
--- `https://enterpriseregistration.windows.net`-- `https://login.microsoftonline.com`-- `https://device.login.microsoftonline.com`-- `https://autologon.microsoftazuread-sso.com` (If you use or plan to use seamless SSO)-
-> [!WARNING]
-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs are excluded from TLS break-and-inspect. Failure to exclude these URLs may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
-
-If your organization requires access to the internet via an outbound proxy, you can use [implementing Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)). In Windows 10 devices prior to 1709 update, WPAD is the only available option to configure a proxy to work with Hybrid Azure AD join.
-
-If you don't use WPAD, you can configure WinHTTP proxy settings on your computer beginning with Windows 10 1709. For more information, see [WinHTTP Proxy Settings deployed by GPO](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo).
-
-> [!NOTE]
-> If you configure proxy settings on your computer by using WinHTTP settings, any computers that can't connect to the configured proxy will fail to connect to the internet.
-
-If your organization requires access to the internet via an authenticated outbound proxy, make sure that your Windows 10 computers can successfully authenticate to the outbound proxy. Because Windows 10 computers run device registration by using machine context, configure outbound proxy authentication by using machine context. Follow up with your outbound proxy provider on the configuration requirements.
-
-Verify the device can access the above Microsoft resources under the system account by using the [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script.
-
-## Configure hybrid Azure AD join
-
-To configure a hybrid Azure AD join by using Azure AD Connect:
-
-1. Start Azure AD Connect, and then select **Configure**.
-
-1. In **Additional tasks**, select **Configure device options**, and then select **Next**.
-
- ![Additional tasks](./media/hybrid-azuread-join-managed-domains/azure-ad-connect-additional-tasks.png)
-
-1. In **Overview**, select **Next**.
-
-1. In **Connect to Azure AD**, enter the credentials of a global administrator for your Azure AD tenant.
-
-1. In **Device options**, select **Configure Hybrid Azure AD join**, and then select **Next**.
-
- ![Device options](./media/hybrid-azuread-join-managed-domains/azure-ad-connect-device-options.png)
-
-1. In **Device operating systems**, select the operating systems that devices in your Active Directory environment use, and then select **Next**.
-
- ![Device operating system](./media/hybrid-azuread-join-managed-domains/azure-ad-connect-device-operating-systems.png)
-
-1. In **SCP configuration**, for each forest where you want Azure AD Connect to configure the SCP, complete the following steps, and then select **Next**.
-
- 1. Select the **Forest**.
- 1. Select an **Authentication Service**.
- 1. Select **Add** to enter the enterprise administrator credentials.
-
- ![SCP](./media/hybrid-azuread-join-managed-domains/azure-ad-connect-scp-configuration.png)
-
-1. In **Ready to configure**, select **Configure**.
-
-1. In **Configuration complete**, select **Exit**.
-
-## Enable Windows down-level devices
-
-If some of your domain-joined devices are Windows down-level devices, you must:
--- Configure the local intranet settings for device registration-- Configure seamless SSO-- Install Microsoft Workplace Join for Windows down-level computers-
-Windows down-level devices are devices with older operating systems. The following are Windows down-level devices:
--- Windows 7-- Windows 8.1-- Windows Server 2008 R2-- Windows Server 2012-- Windows Server 2012 R2-
-> [!NOTE]
-> Windows 7 support ended on January 14, 2020. For more information, see [Windows 7 support ended](https://support.microsoft.com/help/4057281/windows-7-support-ended-on-january-14-2020).
-
-### Configure the local intranet settings for device registration
-
-To complete hybrid Azure AD join of your Windows down-level devices and to avoid certificate prompts when devices authenticate to Azure AD, you can push a policy to your domain-joined devices to add the following URLs to the local intranet zone in Internet Explorer:
--- `https://device.login.microsoftonline.com`-- `https://autologon.microsoftazuread-sso.com`-
-You also must enable **Allow updates to status bar via script** in the user's local intranet zone.
-
-### Configure seamless SSO
-
-To complete hybrid Azure AD join of your Windows down-level devices in a managed domain that uses [password hash sync](../hybrid/whatis-phs.md) or [pass-through authentication](../hybrid/how-to-connect-pta.md) as your Azure AD cloud authentication method, you must also [configure seamless SSO](../hybrid/how-to-connect-sso-quick-start.md#step-2-enable-the-feature).
-
-### Install Microsoft Workplace Join for Windows down-level computers
-
-To register Windows down-level devices, organizations must install [Microsoft Workplace Join for non-Windows 10 computers](https://www.microsoft.com/download/details.aspx?id=53554). Microsoft Workplace Join for non-Windows 10 computers is available in the Microsoft Download Center.
-
-You can deploy the package by using a software distribution system like [Microsoft Endpoint Configuration Manager](/configmgr/). The package supports the standard silent installation options with the `quiet` parameter. The current version of Configuration Manager offers benefits over earlier versions, like the ability to track completed registrations.
-
-The installer creates a scheduled task on the system that runs in the user context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD by using the user credentials after it authenticates with Azure AD.
-
-## Verify the registration
-
-Here are 3 ways to locate and verify the device state:
-
-### Locally on the device
-
-1. Open Windows PowerShell.
-2. Enter `dsregcmd /status`.
-3. Verify that both **AzureAdJoined** and **DomainJoined** are set to **YES**.
-4. You can use the **DeviceId** and compare the status on the service using either the Azure portal or PowerShell.
-
-### Using the Azure portal
-
-1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
-2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./device-management-azure-portal.md).
-3. If the **Registered** column says **Pending**, then Hybrid Azure AD Join has not completed.
-4. If the **Registered** column contains a **date/time**, then Hybrid Azure AD Join has completed.
-
-### Using PowerShell
-
-Verify the device registration state in your Azure tenant by using **[Get-MsolDevice](/powershell/module/msonline/get-msoldevice)**. This cmdlet is in the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-msonlinev1).
-
-When you use the **Get-MSolDevice** cmdlet to check the service details:
--- An object with the **device ID** that matches the ID on the Windows client must exist.-- The value for **DeviceTrustType** is **Domain Joined**. This setting is equivalent to the **Hybrid Azure AD joined** state on the **Devices** page in the Azure AD portal.-- For devices that are used in Conditional Access, the value for **Enabled** is **True** and **DeviceTrustLevel** is **Managed**.-
-1. Open Windows PowerShell as an administrator.
-2. Enter `Connect-MsolService` to connect to your Azure tenant.
-
-#### Count all Hybrid Azure AD joined devices (excluding **Pending** state)
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### Count all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### List all Hybrid Azure AD joined devices
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List details of a single device:
-
-1. Enter `get-msoldevice -deviceId <deviceId>` (This is the **DeviceId** obtained locally on the device).
-2. Verify that **Enabled** is set to **True**.
-
-## Troubleshoot your implementation
-
-If you experience issues completing hybrid Azure AD join for domain-joined Windows devices, see:
--- [Troubleshooting devices using dsregcmd command](./troubleshoot-device-dsregcmd.md)-- [Troubleshooting hybrid Azure Active Directory joined devices](troubleshoot-hybrid-join-windows-current.md)-- [Troubleshooting hybrid Azure Active Directory joined down-level devices](troubleshoot-hybrid-join-windows-legacy.md)-
-## Next steps
-
-Advance to the next article to learn how to manage device identities by using the Azure portal.
-> [!div class="nextstepaction"]
-> [Manage device identities](device-management-azure-portal.md)
active-directory Hybrid Azuread Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-manual.md
Title: Configure hybrid Azure Active Directory joined devices manually | Microsoft Docs
-description: Learn how to manually configure hybrid Azure Active Directory joined devices.
+ Title: Manual configuration for hybrid Azure Active Directory join devices
+description: Learn how to manually configure hybrid Azure Active Directory join devices.
Previously updated : 04/16/2021 Last updated : 01/20/2022
-#Customer intent: As an IT admin, I want to set up hybrid Azure AD joined devices so that I can automatically bring AD domain-joined devices under control.
-# Tutorial: Configure hybrid Azure Active Directory joined devices manually
+# Configure hybrid Azure Active Directory join manually
-With device management in Azure Active Directory (Azure AD), you can ensure that users are accessing your resources from devices that meet your standards for security and compliance. For more information, see [Introduction to device management in Azure Active Directory](overview.md).
+If using Azure AD Connect is an option for you, see the guidance in [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md). The automation in Azure AD Connect significantly simplifies the configuration of hybrid Azure AD join.
-> [!TIP]
-> If using Azure AD Connect is an option for you, see the related tutorials for [managed](hybrid-azuread-join-managed-domains.md) or [federated](hybrid-azuread-join-federated-domains.md) domains. By using Azure AD Connect, you can significantly simplify the configuration of hybrid Azure AD join.
-
-If you have an on-premises Active Directory environment and you want to join your domain-joined devices to Azure AD, you can accomplish this by configuring hybrid Azure AD joined devices. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Manually configure hybrid Azure AD join
-> * Configure a service connection point
-> * Set up issuance of claims
-> * Enable Windows down-level devices
-> * Verify joined devices
-> * Troubleshoot your implementation
+This article covers the manual configuration of requirements for hybrid Azure AD join, including steps for managed and federated domains.
## Prerequisites
-This tutorial assumes that you're familiar with:
-
-* [Introduction to device management in Azure Active Directory](./overview.md)
-* [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)
-* [Control the hybrid Azure AD join of your devices](hybrid-azuread-join-control.md)
-
-Before you start enabling hybrid Azure AD joined devices in your organization, make sure that:
+- [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.1.819.0 or later.
+ - To get device registration sync join to succeed, as part of the device registration configuration, don't exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
+ - If the computer objects of the devices you want to be hybrid Azure AD joined belong to specific organizational units (OUs), configure the correct OUs to sync in Azure AD Connect. To learn more about how to sync computer objects by using Azure AD Connect, see [Organizational unit–based filtering](../hybrid/how-to-connect-sync-configure-filtering.md#organizational-unitbased-filtering).
+- Global administrator credentials for your Azure AD tenant.
+- Enterprise administrator credentials for each of the on-premises Active Directory Domain Services forests.
+- (**For federated domains**) Windows Server 2012 R2 with Active Directory Federation Services installed.
+- Users must be able to register their devices with Azure AD. For more information about this setting, see [Configure device settings](device-management-azure-portal.md#configure-device-settings).
-* You're running an up-to-date version of Azure AD Connect.
-* Azure AD Connect has synchronized the computer objects of the devices you want to be hybrid Azure AD joined to Azure AD. If the computer objects belong to specific organizational units (OUs), these OUs need to be configured for synchronization in Azure AD Connect as well.
-
-Azure AD Connect:
-
-* Keeps the association between the computer account in your on-premises Active Directory instance and the device object in Azure AD.
-* Enables other device-related features, like Windows Hello for Business.
-
-Make sure that the following URLs are accessible from computers inside your organization's network for registration of computers to Azure AD:
+Hybrid Azure AD join requires devices to have access to the following Microsoft resources from inside your organization's network:
- `https://enterpriseregistration.windows.net` - `https://login.microsoftonline.com` - `https://device.login.microsoftonline.com`-- Your organization's Security Token Service (STS) (For federated domains) - `https://autologon.microsoftazuread-sso.com` (If you use or plan to use seamless SSO)
+- Your organization's Security Token Service (STS) (**For federated domains**)
> [!WARNING] > If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to these URLs are excluded from TLS break-and-inspect. Failure to exclude these URLs may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access.
-If your organization plans to use Seamless SSO, the following URL must be added to the user's local intranet zone.
--- `https://autologon.microsoftazuread-sso.com`
+If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)).
-Also, the following setting should be enabled in the user's intranet zone: "Allow status bar updates via script."
-
-If your organization uses managed (non-federated) setup with on-premises Active Directory and does not use Active Directory Federation Services (AD FS) to federate with Azure AD, then hybrid Azure AD join on Windows 10 relies on the computer objects in Active Directory to be synced to Azure AD. Make sure that any OUs that contain the computer objects that need to be hybrid Azure AD joined are enabled for sync in the Azure AD Connect sync configuration.
-
-For Windows 10 devices on version 1703 or earlier, if your organization requires access to the internet via an outbound proxy, you must implement Web Proxy Auto-Discovery (WPAD) to enable Windows 10 computers to register to Azure AD.
-
-Beginning with Windows 10 1803, even if a hybrid Azure AD join attempt by a device in a federated domain through AD FS fails, and if Azure AD Connect is configured to sync the computer/device objects to Azure AD, the device will try to complete the hybrid Azure AD join by using the synced computer/device.
+If you don't use WPAD, you can configure WinHTTP proxy settings on your computer beginning with Windows 10 1709. For more information, see [WinHTTP Proxy Settings deployed by GPO](/archive/blogs/netgeeks/winhttp-proxy-settings-deployed-by-gpo).
> [!NOTE]
-> To get device registration sync join to succeed, as part of the device registration configuration, do not exclude the default device attributes from your Azure AD Connect sync configuration. To learn more about default device attributes synced to Azure AD, see [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
+> If you configure proxy settings on your computer by using WinHTTP settings, any computers that can't connect to the configured proxy will fail to connect to the internet.
+
+If your organization requires access to the internet via an authenticated outbound proxy, make sure that your Windows 10 computers can successfully authenticate to the outbound proxy. Because Windows 10 computers run device registration by using machine context, configure outbound proxy authentication by using machine context. Follow up with your outbound proxy provider on the configuration requirements.
-To verify if the device is able to access the above Microsoft resources under the system account, you can use [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script.
+Verify devices can access the required Microsoft resources under the system account by using the [Test Device Registration Connectivity](/samples/azure-samples/testdeviceregconnectivity/testdeviceregconnectivity/) script.
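+
+For a quick reachability check before you run the full script, a minimal sketch such as the following tests the TCP path to each endpoint. It doesn't replace the connectivity script, which runs its checks under the system account as described above:
+
+```PowerShell
+# Quick TCP reachability check for the device registration endpoints
+# For federated domains, also add your organization's STS host to the list
+$endpoints = 'enterpriseregistration.windows.net',
+             'login.microsoftonline.com',
+             'device.login.microsoftonline.com',
+             'autologon.microsoftazuread-sso.com'
+
+foreach ($endpoint in $endpoints) {
+    Test-NetConnection -ComputerName $endpoint -Port 443 |
+        Select-Object ComputerName, TcpTestSucceeded
+}
+```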
-## Verify configuration steps
+## Configuration
-You can configure hybrid Azure AD joined devices for various types of Windows device platforms. This topic includes the required steps for all typical configuration scenarios.
+You can configure hybrid Azure AD joined devices for various types of Windows device platforms.
-Use the following table to get an overview of the steps that are required for your scenario:
+- For managed and federated domains, you must [configure a service connection point or SCP](#configure-a-service-connection-point).
+- For federated domains, you must ensure that your [federation service is configured to issue the appropriate claims](#set-up-issuance-of-claims).
-| Steps | Windows current and password hash sync | Windows current and federation | Windows down-level |
-| : | :: | :: | :: |
-| Configure service connection point | ![Check][1] | ![Check][1] | ![Check][1] |
-| Set up issuance of claims | | ![Check][1] | ![Check][1] |
-| Enable non-Windows 10 devices | | | ![Check][1] |
-| Verify joined devices | ![Check][1] | ![Check][1] | ![Check][1] |
+After these configurations are complete, follow the guidance to [verify registration](howto-hybrid-join-verify.md) and [enable downlevel operating systems](howto-hybrid-join-downlevel.md) where necessary.
-## Configure a service connection point
+### Configure a service connection point
-Your devices use a service connection point (SCP) object during the registration to discover Azure AD tenant information. In your on-premises Active Directory instance, the SCP object for the hybrid Azure AD joined devices must exist in the configuration naming context partition of the computer's forest. There is only one configuration naming context per forest. In a multi-forest Active Directory configuration, the service connection point must exist in all forests that contain domain-joined computers.
+Your devices use a service connection point (SCP) object during the registration to discover Azure AD tenant information. In your on-premises Active Directory instance, the SCP object for the hybrid Azure AD joined devices must exist in the configuration naming context partition of the computer's forest. There's only one configuration naming context per forest. In a multi-forest Active Directory configuration, the service connection point must exist in all forests that contain domain-joined computers.
You can use the [**Get-ADRootDSE**](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee617246(v=technet.10)) cmdlet to retrieve the configuration naming context of your forest.
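A quick way to capture that value is a one-line sketch, assuming the Active Directory PowerShell module is installed:

```PowerShell
# Returns, for example, CN=Configuration,DC=fabrikam,DC=com
(Get-ADRootDSE).configurationNamingContext
```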
For a forest with the Active Directory domain name *fabrikam.com*, the configura
`CN=Configuration,DC=fabrikam,DC=com`
-In your forest, the SCP object for the auto-registration of domain-joined devices is located at:
+In your forest, the SCP object for the autoregistration of domain-joined devices is located at:
`CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,[Your Configuration Naming Context]`
The **$scp.Keywords** output shows the Azure AD tenant information. Here's an ex
azureADId:72f988bf-86f1-41af-91ab-2d7cd011db47 ```
-If the service connection point does not exist, you can create it by running the `Initialize-ADSyncDomainJoinedComputerSync` cmdlet on your Azure AD Connect server. Enterprise admin credentials are required to run this cmdlet.
+If the service connection point doesn't exist, you can create it by running the `Initialize-ADSyncDomainJoinedComputerSync` cmdlet on your Azure AD Connect server. Enterprise admin credentials are required to run this cmdlet.
-The cmdlet:
+The `Initialize-ADSyncDomainJoinedComputerSync` cmdlet:
* Creates the service connection point in the Active Directory forest that Azure AD Connect is connected to. * Requires you to specify the `AdConnectorAccount` parameter. This account is configured as the Active Directory connector account in Azure AD Connect.
-The following script shows an example for using the cmdlet. In this script, `$aadAdminCred = Get-Credential` requires you to type a user name. You need to provide the user name in the user principal name (UPN) format (`user@example.com`).
+The following script shows an example of using the cmdlet. In this script, `$aadAdminCred = Get-Credential` requires you to type a user name. Provide the user name in the user principal name (UPN) format (`user@example.com`).
```PowerShell Import-Module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1";
The `Initialize-ADSyncDomainJoinedComputerSync` cmdlet:
* Uses the Active Directory PowerShell module and Active Directory Domain Services (AD DS) tools. These tools rely on Active Directory Web Services running on a domain controller. Active Directory Web Services is supported on domain controllers running Windows Server 2008 R2 and later. * Is only supported by the MSOnline PowerShell module version 1.1.166.0. To download this module, use [this link](https://www.powershellgallery.com/packages/MSOnline/1.1.166.0).
-* If the AD DS tools are not installed, `Initialize-ADSyncDomainJoinedComputerSync` will fail. You can install the AD DS tools through Server Manager under **Features** > **Remote Server Administration Tools** > **Role Administration Tools**.
-
-For domain controllers running Windows Server 2008 or earlier versions, use the following script to create the service connection point. In a multi-forest configuration, use the following script to create the service connection point in each forest where computers exist.
-
- ```PowerShell
- $verifiedDomain = "contoso.com" # Replace this with any of your verified domain names in Azure AD
- $tenantID = "72f988bf-86f1-41af-91ab-2d7cd011db47" # Replace this with you tenant ID
- $configNC = "CN=Configuration,DC=corp,DC=contoso,DC=com" # Replace this with your Active Directory configuration naming context
-
- $de = New-Object System.DirectoryServices.DirectoryEntry
- $de.Path = "LDAP://CN=Services," + $configNC
- $deDRC = $de.Children.Add("CN=Device Registration Configuration", "container")
- $deDRC.CommitChanges()
-
- $deSCP = $deDRC.Children.Add("CN=62a0ff2e-97b9-4513-943f-0d221bd30080", "serviceConnectionPoint")
- $deSCP.Properties["keywords"].Add("azureADName:" + $verifiedDomain)
- $deSCP.Properties["keywords"].Add("azureADId:" + $tenantID)
-
- $deSCP.CommitChanges()
- ```
-
-In the preceding script, `$verifiedDomain = "contoso.com"` is a placeholder. Replace it with one of your verified domain names in Azure AD. You have to own the domain before you can use it.
+* If the AD DS tools aren't installed, `Initialize-ADSyncDomainJoinedComputerSync` will fail. You can install the AD DS tools through Server Manager under **Features** > **Remote Server Administration Tools** > **Role Administration Tools**.
-For more information about verified domain names, see [Add a custom domain name to Azure Active Directory](../fundamentals/add-custom-domain.md).
-
-To get a list of your verified company domains, you can use the [Get-AzureADDomain](/powershell/module/Azuread/Get-AzureADDomain) cmdlet.
-
-![List of company domains](./media/hybrid-azuread-join-manual/01.png)
-
-## Set up issuance of claims
+### Set up issuance of claims
In a federated Azure AD configuration, devices rely on AD FS or an on-premises federation service from a Microsoft partner to authenticate to Azure AD. Devices authenticate to get an access token to register against the Azure Active Directory Device Registration Service (Azure DRS). Windows current devices authenticate by using integrated Windows authentication to an active WS-Trust endpoint (either 1.3 or 2005 versions) hosted by the on-premises federation service.

When you're using AD FS, you need to enable the following WS-Trust endpoints:

- `/adfs/services/trust/2005/windowstransport`
- `/adfs/services/trust/13/windowstransport`
- `/adfs/services/trust/2005/usernamemixed`
> [!WARNING]
> Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.

> [!NOTE]
->If you don't have AD FS as your on-premises federation service, follow the instructions from your vendor to make sure they support WS-Trust 1.3 or 2005 endpoints and that these are published through the Metadata Exchange file (MEX).
+> If you don't have AD FS as your on-premises federation service, follow the instructions from your vendor to make sure they support WS-Trust 1.3 or 2005 endpoints and that these are published through the Metadata Exchange file (MEX).
For device registration to finish, the following claims must exist in the token that Azure DRS receives. Azure DRS will create a device object in Azure AD with some of this information. Azure AD Connect then uses this information to associate the newly created device object with the computer account on-premises.
For device registration to finish, the following claims must exist in the token
* `http://schemas.microsoft.com/identity/claims/onpremobjectguid`
* `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`
-If you have more than one verified domain name, you need to provide the following claim for computers:
+If you require more than one verified domain name, you need to provide the following claim for computers:
* `http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid`
The definition helps you to verify whether the values are present or if you need to create them.
> [!NOTE]
> If you don't use AD FS for your on-premises federation server, follow your vendor's instructions to create the appropriate configuration to issue these claims.
-### Issue account type claim
+#### Issue account type claim
The `http://schemas.microsoft.com/ws/2012/01/accounttype` claim must contain a value of **DJ**, which identifies the device as a domain-joined computer. In AD FS, you can add an issuance transform rule that looks like this:
The `http://schemas.microsoft.com/ws/2012/01/accounttype` claim must contain a v
); ```
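The body of the rule is truncated in this digest. A representative sketch of such a rule, adapted from common AD FS configurations (the rule name and the `groupsid` condition keyed to the Domain Computers RID are assumptions, not necessarily the article's exact rule):

```
@RuleName = "Issue account type for domain-joined computers"
c:[
    Type == "http://schemas.microsoft.com/ws/2012/01/groupsid",
    Value =~ "-515$",
    Issuer =~ "^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$"
]
=> issue(
    Type = "http://schemas.microsoft.com/ws/2012/01/accounttype",
    Value = "DJ"
);
```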
-### Issue objectGUID of the computer account on-premises
+#### Issue objectGUID of the computer account on-premises
The `http://schemas.microsoft.com/identity/claims/onpremobjectguid` claim must contain the **objectGUID** value of the on-premises computer account. In AD FS, you can add an issuance transform rule that looks like this:
The `http://schemas.microsoft.com/identity/claims/onpremobjectguid` claim must c
); ```
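This rule body is also truncated here. A hedged sketch that queries Active Directory for the computer's objectGUID once the account type claim identifies a domain-joined computer (the rule name is an assumption):

```
@RuleName = "Issue object GUID for domain-joined computers"
c1:[
    Type == "http://schemas.microsoft.com/ws/2012/01/accounttype",
    Value == "DJ"
]
&& c2:[
    Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid",
    Issuer =~ "^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$"
]
=> issue(
    store = "Active Directory",
    types = ("http://schemas.microsoft.com/identity/claims/onpremobjectguid"),
    query = ";objectguid;{0}",
    param = c2.Value
);
```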
-### Issue objectSID of the computer account on-premises
+#### Issue objectSID of the computer account on-premises
The `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid` claim must contain the **objectSid** value of the on-premises computer account. In AD FS, you can add an issuance transform rule that looks like this:
The `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid` claim m
=> issue(claim = c2); ```
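Only the final issue statement survives in this digest. A sketch of a full rule consistent with that tail (the rule name and conditions are assumptions):

```
@RuleName = "Issue objectSID for domain-joined computers"
c1:[
    Type == "http://schemas.microsoft.com/ws/2012/01/accounttype",
    Value == "DJ"
]
&& c2:[
    Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid",
    Issuer =~ "^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$"
]
=> issue(claim = c2);
```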
-### Issue issuerID for the computer when multiple verified domain names are in Azure AD
+#### Issue issuerID for the computer when multiple verified domain names are in Azure AD
-The `http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid` claim must contain the Uniform Resource Identifier (URI) of any of the verified domain names that connect with the on-premises federation service (AD FS or partner) issuing the token. In AD FS, you can add issuance transform rules that look like the following ones in that specific order, after the preceding ones. Note that one rule to explicitly issue the rule for users is necessary. In the following rules, a first rule that identifies user versus computer authentication is added.
+The `http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid` claim must contain the Uniform Resource Identifier (URI) of any of the verified domain names that connect with the on-premises federation service (AD FS or partner) issuing the token. In AD FS, you can add issuance transform rules that look like the following ones in that specific order, after the preceding ones. One rule to explicitly issue the rule for users is necessary. In the following rules, a first rule that identifies user versus computer authentication is added.
```
@RuleName = "Issue account type with the value User when its not a computer"
To get a list of your verified company domains, you can use the [Get-MsolDomain](/powershell/module/msonline/get-msoldomain) cmdlet.
![List of company domains](./media/hybrid-azuread-join-manual/01.png)
-### Issue ImmutableID for the computer when one for users exists (for example, using mS-DS-ConsistencyGuid as the source for ImmutableID)
+#### Issue ImmutableID for the computer when one for users exists (for example, using mS-DS-ConsistencyGuid as the source for ImmutableID)
The `http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID` claim must contain a valid value for computers. In AD FS, you can create an issuance transform rule as follows:
The `http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID` claim m
); ```
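The rule body is truncated here as well. A hedged sketch that issues the computer's objectGUID as the ImmutableID, mirroring the objectGUID rule above (the rule name is an assumption):

```
@RuleName = "Issue ImmutableID for domain-joined computers"
c1:[
    Type == "http://schemas.microsoft.com/ws/2012/01/accounttype",
    Value == "DJ"
]
&& c2:[
    Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid",
    Issuer =~ "^(AD AUTHORITY|SELF AUTHORITY|LOCAL AUTHORITY)$"
]
=> issue(
    store = "Active Directory",
    types = ("http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"),
    query = ";objectguid;{0}",
    param = c2.Value
);
```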
-### Helper script to create the AD FS issuance transform rules
+#### Helper script to create the AD FS issuance transform rules
The following script helps you with the creation of the issuance transform rules described earlier.
The following script helps you with the creation of the issuance transform rules
#### Remarks
-* This script appends the rules to the existing rules. Do not run the script twice, because the set of rules would be added twice. Make sure that no corresponding rules exist for these claims (under the corresponding conditions) before running the script again.
+* This script appends the rules to the existing rules. Don't run the script twice, because the set of rules would be added twice. Make sure that no corresponding rules exist for these claims (under the corresponding conditions) before running the script again.
* If you have multiple verified domain names (as shown in the Azure AD portal or via the **Get-MsolDomain** cmdlet), set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule:

```
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));
```
-If you have already issued an **ImmutableID** claim for user accounts, set the value of **$immutableIDAlreadyIssuedforUsers** in the script to **$true**.
-
-## Enable Windows down-level devices
-
-If some of your domain-joined devices are Windows down-level devices, you need to:
-
-* Set a policy in Azure AD to enable users to register devices.
-* Configure your on-premises federation service to issue claims to support integrated Windows authentication (IWA) for device registration.
-* Add the Azure AD device authentication endpoint to the local intranet zones to avoid certificate prompts when authenticating the device.
-* Control Windows down-level devices.
-
-### Set a policy in Azure AD to enable users to register devices
+If you've already issued an **ImmutableID** claim for user accounts, set the value of **$immutableIDAlreadyIssuedforUsers** in the script to **$true**.
-To register Windows down-level devices, make sure that the setting to allow users to register devices in Azure AD is enabled. In the Azure portal, you can find this setting under **Azure Active Directory** > **Users and groups** > **Device settings**.
+#### Configure federation service for downlevel devices
-The following policy must be set to **All**: **Users may register their devices with Azure AD**.
-
-![The All button that enables users to register devices](./media/hybrid-azuread-join-manual/23.png)
-
-### Configure the on-premises federation service
+Downlevel devices require your on-premises federation service to issue claims to support integrated Windows authentication (IWA) for device registration.
Your on-premises federation service must support issuing the **authenticationmethod** and **wiaormultiauthn** claims when it receives an authentication request to the Azure AD relying party holding a resource_params parameter with the following encoded value:
In AD FS, you must add an issuance transform rule that passes through the authentication method claim.
`Set-AdfsRelyingPartyTrust -TargetName <RPObjectName> -AllowedAuthenticationClassReferences wiaormultiauthn`
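A sketch of such a pass-through rule (the rule name and the exact claim type URI are assumptions based on the claim named above; adjust them to match your federation service):

```
@RuleName = "Pass through the authentication method claim"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod"]
=> issue(claim = c);
```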
-### Add the Azure AD device authentication endpoint to the local intranet zones
-
-To avoid certificate prompts when users of registered devices authenticate to Azure AD, you can push a policy to your domain-joined devices to add the following URL to the local intranet zone in Internet Explorer:
-
-`https://device.login.microsoftonline.com`
-
-### Control Windows down-level devices
-
-To register Windows down-level devices, you need to download and install a Windows Installer package (.msi) from the Download Center. For more information, see the section [Controlled validation of hybrid Azure AD join on Windows down-level devices](hybrid-azuread-join-control.md#controlled-validation-of-hybrid-azure-ad-join-on-windows-down-level-devices).
-
-## Verify joined devices
-
-Here are 3 ways to locate and verify the device state:
-
-### Locally on the device
-
-1. Open Windows PowerShell.
-2. Enter `dsregcmd /status`.
-3. Verify that both **AzureAdJoined** and **DomainJoined** are set to **YES**.
-4. You can use the **DeviceId** and compare the status on the service using either the Azure portal or PowerShell.
-
-### Using the Azure portal
-
-1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
-2. Information on how to locate a device can be found in [Manage device identities using the Azure portal](./device-management-azure-portal.md).
-3. If the **Registered** column says **Pending**, then Hybrid Azure AD Join has not completed. In federated environments, this can happen only if it failed to register and AAD connect is configured to sync the devices.
-4. If the **Registered** column contains a **date/time**, then Hybrid Azure AD Join has completed.
-
-### Using PowerShell
-
-Verify the device registration state in your Azure tenant by using **[Get-MsolDevice](/powershell/module/msonline/get-msoldevice)**. This cmdlet is in the [Azure Active Directory PowerShell module](/powershell/azure/active-directory/install-msonlinev1).
-
-When you use the **Get-MSolDevice** cmdlet to check the service details:
-- An object with the **device ID** that matches the ID on the Windows client must exist.
-- The value for **DeviceTrustType** is **Domain Joined**. This setting is equivalent to the **Hybrid Azure AD joined** state on the **Devices** page in the Azure AD portal.
-- For devices that are used in Conditional Access, the value for **Enabled** is **True** and **DeviceTrustLevel** is **Managed**.
-1. Open Windows PowerShell as an administrator.
-2. Enter `Connect-MsolService` to connect to your Azure tenant.
-
-#### Count all Hybrid Azure AD joined devices (excluding **Pending** state)
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### Count all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-(Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}).count
-```
-
-#### List all Hybrid Azure AD joined devices
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List all Hybrid Azure AD joined devices with **Pending** state
-
-```azurepowershell
-Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -eq 'Domain Joined') -and (-not([string]($_.AlternativeSecurityIds)).StartsWith("X509:"))}
-```
-
-#### List details of a single device:
-
-1. Enter `get-msoldevice -deviceId <deviceId>` (This is the **DeviceId** obtained locally on the device).
-2. Verify that **Enabled** is set to **True**.
## Troubleshoot your implementation

If you experience issues completing hybrid Azure AD join for domain-joined Windows devices, see:
If you experience issues completing hybrid Azure AD join for domain-joined Windo
## Next steps
-* [Introduction to device management in Azure Active Directory](overview.md)
-
-<!--Image references-->
-[1]: ./media/hybrid-azuread-join-manual/12.png
+- [Hybrid Azure AD join verification](howto-hybrid-join-verify.md)
+- [Downlevel device enablement](howto-hybrid-join-downlevel.md)
+- [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md)
+- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-plan.md
Title: Plan hybrid Azure Active Directory join - Azure Active Directory
-description: Learn how to configure hybrid Azure Active Directory joined devices.
+ Title: Plan your hybrid Azure Active Directory join deployment
+description: Explains the steps that are required to implement hybrid Azure AD joined devices in your environment.
Previously updated : 06/10/2021 Last updated : 01/20/2022
-# How To: Plan your hybrid Azure Active Directory join implementation
+# Plan your hybrid Azure Active Directory join implementation
-In a similar way to a user, a device is another core identity you want to protect and use it to protect your resources at any time and from any location. You can accomplish this goal by bringing and managing device identities in Azure AD using one of the following methods:
-- Azure AD join
-- Hybrid Azure AD join
-- Azure AD registration
-By bringing your devices to Azure AD, you maximize your users' productivity through single sign-on (SSO) across your cloud and on-premises resources. At the same time, you can secure access to your cloud and on-premises resources with [Conditional Access](../conditional-access/overview.md).
-
-If you have an on-premises Active Directory (AD) environment and you want to join your AD domain-joined computers to Azure AD, you can accomplish this by doing hybrid Azure AD join. This article provides you with the related steps to implement a hybrid Azure AD join in your environment.
+If you have an on-premises Active Directory Domain Services (AD DS) environment and you want to join your AD DS domain-joined computers to Azure AD, you can accomplish this task by doing hybrid Azure AD join.
> [!TIP] > SSO access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
->
## Prerequisites
-This article assumes that you are familiar with the [Introduction to device identity management in Azure Active Directory](./overview.md).
+This article assumes that you're familiar with the [Introduction to device identity management in Azure Active Directory](./overview.md).
> [!NOTE]
> The minimum required domain controller version for Windows 10 hybrid Azure AD join is Windows Server 2008 R2.
To plan your hybrid Azure AD implementation, you should familiarize yourself wit
> [!div class="checklist"]
> - Review supported devices
> - Review things you should know
-> - Review controlled validation of hybrid Azure AD join
+> - Review targeted deployment of hybrid Azure AD join
> - Select your scenario based on your identity infrastructure
> - Review on-premises AD UPN support for hybrid Azure AD join

## Review supported devices
-Hybrid Azure AD join supports a broad range of Windows devices. Because the configuration for devices running older versions of Windows requires additional or different steps, the supported devices are grouped into two categories:
+Hybrid Azure AD join supports a broad range of Windows devices. Because the configuration for devices running older versions of Windows requires other steps, the supported devices are grouped into two categories:
### Windows current devices
Hybrid Azure AD join supports a broad range of Windows devices. Because the conf
- **Note**: Azure National cloud customers require version 1803
- Windows Server 2019
-For devices running the Windows desktop operating system, supported version are listed in this article [Windows 10 release information](/windows/release-information/). As a best practice, Microsoft recommends you upgrade to the latest version of Windows 10.
+For devices running the Windows desktop operating system, supported versions are listed in this article [Windows 10 release information](/windows/release-information/). As a best practice, Microsoft recommends you upgrade to the latest version of Windows 10.
### Windows down-level devices

- Windows 8.1
-- Windows 7 support ended on January 14, 2020. For more information, see [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020).
+- Windows 7 support ended on January 14, 2020. For more information, see [Support for Windows 7 has ended](https://support.microsoft.com/en-us/help/4057281/windows-7-support-ended-on-january-14-2020)
- Windows Server 2012 R2
- Windows Server 2012
-- Windows Server 2008 R2. For support information on Windows Server 2008 and 2008 R2, see [Prepare for Windows Server 2008 end of support](https://www.microsoft.com/cloud-platform/windows-server-2008).
+- Windows Server 2008 R2. For support information on Windows Server 2008 and 2008 R2, see [Prepare for Windows Server 2008 end of support](https://www.microsoft.com/cloud-platform/windows-server-2008)
As a first planning step, you should review your environment and determine whether you need to support Windows down-level devices.
As a first planning step, you should review your environment and determine wheth
### Unsupported scenarios

-- Hybrid Azure AD join is not supported for Windows Server running the Domain Controller (DC) role.
-- Hybrid Azure AD join is not supported on Windows down-level devices when using credential roaming or user profile roaming or mandatory profile.
+- Hybrid Azure AD join isn't supported for Windows Server running the Domain Controller (DC) role.
+- Hybrid Azure AD join isn't supported on Windows down-level devices when using credential roaming or user profile roaming or mandatory profile.
- Server Core OS doesn't support any type of device registration.
- User State Migration Tool (USMT) doesn't work with device registration.

### OS imaging considerations

-- If you are relying on the System Preparation Tool (Sysprep) and if you are using a **pre-Windows 10 1809** image for installation, make sure that image is not from a device that is already registered with Azure AD as Hybrid Azure AD join.
+- If you're relying on the System Preparation Tool (Sysprep) and if you're using a **pre-Windows 10 1809** image for installation, make sure that image isn't from a device that is already registered with Azure AD as hybrid Azure AD joined.
-- If you are relying on a Virtual Machine (VM) snapshot to create additional VMs, make sure that snapshot is not from a VM that is already registered with Azure AD as Hybrid Azure AD join.
+- If you're relying on a Virtual Machine (VM) snapshot to create more VMs, make sure that snapshot isn't from a VM that is already registered with Azure AD as hybrid Azure AD joined.
-- If you are using [Unified Write Filter](/windows-hardware/customize/enterprise/unified-write-filter) and similar technologies that clear changes to the disk at reboot, they must be applied after the device is Hybrid Azure AD joined. Enabling such technologies prior to completion of Hybrid Azure AD join will result in the device getting unjoined on every reboot
+- If you're using [Unified Write Filter](/windows-hardware/customize/enterprise/unified-write-filter) and similar technologies that clear changes to the disk at reboot, they must be applied after the device is hybrid Azure AD joined. Enabling such technologies before completion of hybrid Azure AD join will result in the device getting unjoined on every reboot.
### Handling devices with Azure AD registered state
-If your Windows 10 domain joined devices are [Azure AD registered](concept-azure-ad-register.md) to your tenant, it could lead to a dual state of Hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or above to automatically address this scenario. In pre-1803 releases, you will need to remove the Azure AD registered state manually before enabling Hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state:
+If your Windows 10 domain joined devices are [Azure AD registered](concept-azure-ad-register.md) to your tenant, it could lead to a dual state of hybrid Azure AD joined and Azure AD registered device. We recommend upgrading to Windows 10 1803 (with KB4489894 applied) or newer to automatically address this scenario. In pre-1803 releases, you'll need to remove the Azure AD registered state manually before enabling hybrid Azure AD join. In 1803 and above releases, the following changes have been made to avoid this dual state:
-- Any existing Azure AD registered state for a user would be automatically removed <i>after the device is Hybrid Azure AD joined and the same user logs in</i>. For example, if User A had an Azure AD registered state on the device, the dual state for User A is cleaned up only when User A logs in to the device. If there are multiple users on the same device, the dual state is cleaned up individually when those users log in. In addition to removing the Azure AD registered state, Windows 10 will also unenroll the device from Intune or other MDM, if the enrollment happened as part of the Azure AD registration via auto-enrollment.-- Azure AD registered state on any local accounts on the device is not impacted by this change. It is only applicable to domain accounts. So Azure AD registered state on local accounts is not removed automatically even after user logon, since the user is not a domain user.
-- Any existing Azure AD registered state for a user would be automatically removed <i>after the device is Hybrid Azure AD joined and the same user logs in</i>. For example, if User A had an Azure AD registered state on the device, the dual state for User A is cleaned up only when User A logs in to the device. If there are multiple users on the same device, the dual state is cleaned up individually when those users log in. In addition to removing the Azure AD registered state, Windows 10 will also unenroll the device from Intune or other MDM, if the enrollment happened as part of the Azure AD registration via auto-enrollment.
-- Azure AD registered state on any local accounts on the device is not impacted by this change. It is only applicable to domain accounts. So Azure AD registered state on local accounts is not removed automatically even after user logon, since the user is not a domain user.
+- Azure AD registered state on any local accounts on the device isn't impacted by this change. It's only applicable to domain accounts. Azure AD registered state on local accounts isn't removed automatically even after user logon, since the user isn't a domain user.
- You can prevent your domain joined device from being Azure AD registered by adding the following registry value to HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001. A PowerShell sketch of setting this value appears after this list.
-- In Windows 10 1803, if you have Windows Hello for Business configured, the user needs to re-setup Windows Hello for Business after the dual state clean up. This issue has been addressed with KB4512509
+- In Windows 10 1803, if you have Windows Hello for Business configured, the user needs to reconfigure Windows Hello for Business after the dual state cleanup. This issue has been addressed with KB4512509.
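As noted in the list above, a hedged PowerShell sketch of setting the `BlockAADWorkplaceJoin` value (run elevated on the device, or deploy the equivalent setting through Group Policy or your configuration management tool):

```PowerShell
# Sketch: block a domain-joined device from also becoming Azure AD registered.
$path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin"
New-Item -Path $path -Force | Out-Null                              # create the key if it doesn't exist
Set-ItemProperty -Path $path -Name "BlockAADWorkplaceJoin" -Value 1 -Type DWord
```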
> [!NOTE]
> Even though Windows 10 automatically removes the Azure AD registered state locally, the device object in Azure AD is not immediately deleted if it is managed by Intune. You can validate the removal of Azure AD registered state by running dsregcmd /status and consider the device not to be Azure AD registered based on that.

### Hybrid Azure AD join for single forest, multiple Azure AD tenants
-To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this can be found in the article [controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md). It is also important for organizations to understand that certain Azure AD capabilities will not work in a single forest, multiple Azure AD tenants configurations.
-- [Device writeback](../hybrid/how-to-connect-device-writeback.md) will not work. This affects [Device based Conditional Access for on-premise apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
-- [Groups writeback](../hybrid/how-to-connect-group-writeback.md) will not work. This affects writeback of Office 365 Groups to a forest with Exchange installed.
-- [Seamless SSO](../hybrid/how-to-connect-sso.md) will not work. This affects SSO scenarios that organizations may be using on cross OS/browser platforms, for example iOS/Linux with Firefox, Safari, Chrome without the Windows 10 extension.
-- [Hybrid Azure AD join for Windows down-level devices in managed environment](./hybrid-azuread-join-managed-domains.md#enable-windows-down-level-devices) will not work. For example, hybrid Azure AD join on Windows Server 2012 R2 in a managed environment requires Seamless SSO and since Seamless SSO will not work, hybrid Azure AD join for such a setup will not work.
-- [On-premises Azure AD Password Protection](../authentication/concept-password-ban-bad-on-premises.md) will not work. This affects ability to perform password changes and password reset events against on-premises Active Directory Domain Services (AD DS) domain controllers using the same global and custom banned password lists that are stored in Azure AD.
+To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this task can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). It's important for organizations to understand that certain Azure AD capabilities won't work in a single forest, multiple Azure AD tenants configurations.
+- [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premise apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust).
+- [Groups writeback](../hybrid/how-to-connect-group-writeback.md) won't work. This configuration affects writeback of Office 365 Groups to a forest with Exchange installed.
+- [Seamless SSO](../hybrid/how-to-connect-sso.md) won't work. This configuration affects SSO scenarios that organizations may be using on cross OS or browser platforms, for example iOS or Linux with Firefox, Safari, or Chrome without the Windows 10 extension.
+- [Hybrid Azure AD join for Windows down-level devices in managed environment](./hybrid-azuread-join-managed-domains.md#enable-windows-down-level-devices) won't work. For example, hybrid Azure AD join on Windows Server 2012 R2 in a managed environment requires Seamless SSO and since Seamless SSO won't work, hybrid Azure AD join for such a setup won't work.
+- [On-premises Azure AD Password Protection](../authentication/concept-password-ban-bad-on-premises.md) won't work. This configuration affects the ability to do password changes and password reset events against on-premises Active Directory Domain Services (AD DS) domain controllers using the same global and custom banned password lists that are stored in Azure AD.
-### Additional considerations
+### Other considerations
- If your environment uses virtual desktop infrastructure (VDI), see [Device identity and desktop virtualization](./howto-device-identity-virtual-desktop-infrastructure.md).

-- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with Hybrid Azure AD join. Microsoft does not provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Please contact your hardware OEM for support.
+- Hybrid Azure AD join is supported for FIPS-compliant TPM 2.0 and not supported for TPM 1.2. If your devices have FIPS-compliant TPM 1.2, you must disable them before proceeding with hybrid Azure AD join. Microsoft doesn't provide any tools for disabling FIPS mode for TPMs as it is dependent on the TPM manufacturer. Contact your hardware OEM for support.
-- Starting from Windows 10 1903 release, TPMs 1.2 are not used with hybrid Azure AD join and devices with those TPMs will be considered as if they don't have a TPM.
+- Starting from Windows 10 1903 release, TPMs 1.2 aren't used with hybrid Azure AD join and devices with those TPMs will be considered as if they don't have a TPM.
-- UPN changes are only supported starting Windows 10 2004 update. For devices prior to Windows 10 2004 update, users would have SSO and Conditional Access issues on their devices. To resolve this issue, you need to unjoin the device from Azure AD (run "dsregcmd /leave" with elevated privileges) and rejoin (happens automatically). However, users signing in with Windows Hello for Business do not face this issue.
+- UPN changes are only supported starting Windows 10 2004 update. For devices before the Windows 10 2004 update, users could have SSO and Conditional Access issues on their devices. To resolve this issue, you need to unjoin the device from Azure AD (run "dsregcmd /leave" with elevated privileges) and rejoin (happens automatically). However, users signing in with Windows Hello for Business don't face this issue.
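For the UPN-change issue in the preceding bullet, a minimal sketch of the unjoin sequence (run from an elevated prompt on the affected device; the rejoin happens automatically afterwards):

```PowerShell
# Sketch: clear the stale hybrid join after a UPN change; the device re-registers
# automatically at the next scheduled device-registration task run or sign-in.
dsregcmd /leave
dsregcmd /status   # AzureAdJoined should read NO until re-registration completes
```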
-## Review controlled validation of hybrid Azure AD join
+## Review targeted hybrid Azure AD join
-When all of the pre-requisites are in place, Windows devices will automatically register as devices in your Azure AD tenant. The state of these device identities in Azure AD is referred as hybrid Azure AD join. More information about the concepts covered in this article can be found in the article [Introduction to device identity management in Azure Active Directory](overview.md).
+Organizations may want to do a targeted rollout of hybrid Azure AD join before enabling it for their entire organization. Review the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md) to understand how to accomplish it.
-Organizations may want to do a controlled validation of hybrid Azure AD join before enabling it across their entire organization all at once. Review the article [controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md) to understand how to accomplish it.
+> [!WARNING]
+> Organizations should include a sample of users from varying roles and profiles in their pilot group. A targeted rollout will help identify any issues your plan may not have addressed before you enable for the entire organization.
## Select your scenario based on your identity infrastructure
A managed environment can be deployed either through [Password Hash Sync (PHS)](
These scenarios don't require you to configure a federation server for authentication.

> [!NOTE]
-> [Cloud authentication using Staged rollout](../hybrid/how-to-connect-staged-rollout.md) is only supported starting Windows 10 1903 update
+> [Cloud authentication using Staged rollout](../hybrid/how-to-connect-staged-rollout.md) is only supported starting at the Windows 10 1903 update.
+>
+> Azure AD doesn't support smartcards or certificates in managed domains.
### Federated environment
When you're using AD FS, you need to enable the following WS-Trust endpoints:
> [!WARNING]
> Both **adfs/services/trust/2005/windowstransport** or **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**.
-> [!NOTE]
-> Azure AD does not support smartcards or certificates in managed domains.
-
-Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect is not an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
-
-Based on the scenario that matches your identity infrastructure, see:
+Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-azuread-join-manual.md).
-- [Configure hybrid Azure Active Directory join for federated environment](hybrid-azuread-join-federated-domains.md)-- [Configure hybrid Azure Active Directory join for managed environment](hybrid-azuread-join-managed-domains.md)
+## Review on-premises AD users UPN support for hybrid Azure AD join
-## Review on-premises AD users UPN support for Hybrid Azure AD join
+Sometimes, on-premises AD users UPNs are different from your Azure AD UPNs. In these cases, Windows 10 hybrid Azure AD join provides limited support for on-premises AD UPNs based on the [authentication method](../hybrid/choose-ad-authn.md), domain type, and Windows 10 version. There are two types of on-premises AD UPNs that can exist in your environment:
-Sometimes, your on-premises AD users UPNs could be different from your Azure AD UPNs. In such cases, Windows 10 Hybrid Azure AD join provides limited support for on-premises AD UPNs based on the [authentication method](../hybrid/choose-ad-authn.md), domain type and Windows 10 version. There are two types of on-premises AD UPNs that can exist in your environment:
-- Routable users UPN: A routable UPN has a valid verified domain, that is registered with a domain registrar. For example, if contoso.com is the primary domain in Azure AD, contoso.org is the primary domain in on-premises AD owned by Contoso and [verified in Azure AD](../fundamentals/add-custom-domain.md)
-- Non-routable users UPN: A non-routable UPN does not have a verified domain. It is applicable only within your organization's private network. For example, if contoso.com is the primary domain in Azure AD, contoso.local is the primary domain in on-premises AD but is not a verifiable domain in the internet and only used within Contoso's network.
+- Routable users UPN: A routable UPN has a valid verified domain, that is registered with a domain registrar. For example, if contoso.com is the primary domain in Azure AD, contoso.org is the primary domain in on-premises AD owned by Contoso and [verified in Azure AD](../fundamentals/add-custom-domain.md).
+- Non-routable users UPN: A non-routable UPN doesn't have a verified domain and is applicable only within your organization's private network. For example, if contoso.com is the primary domain in Azure AD and contoso.local is the primary domain in on-premises AD but isn't a verifiable domain in the internet and only used within Contoso's network.
> [!NOTE]
> The information in this section applies only to an on-premises users UPN. It isn't applicable to an on-premises computer domain suffix (example: computer1.contoso.local).
-The table below provides details on support for these on-premises AD UPNs in Windows 10 Hybrid Azure AD join
+The following table provides details on support for these on-premises AD UPNs in Windows 10 hybrid Azure AD join:
| Type of on-premises AD UPN | Domain type | Windows 10 version | Description |
| -- | -- | -- | -- |
| Routable | Federated | From 1703 release | Generally available |
| Non-routable | Federated | From 1803 release | Generally available |
-| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lock screen is not supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
+| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lock screen isn't supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
| Non-routable | Managed | Not supported | |

## Next steps
-> [!div class="nextstepaction"]
-> [Configure hybrid Azure Active Directory join for federated environment](hybrid-azuread-join-federated-domains.md)
-> [Configure hybrid Azure Active Directory join for managed environment](hybrid-azuread-join-managed-domains.md)
-
-<!--Image references-->
-[1]: ./media/hybrid-azuread-join-plan/12.png
+- [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/plan-device-deployment.md
Previously updated : 06/15/2020 Last updated : 01/20/2022
This article helps you evaluate the methods to integrate your device with Azure AD, choose the implementation plan, and provides key links to supported device management tools.
-The landscape of devices from which your users sign in is expanding. Organizations may provide desktops, laptops, phones, tablets, and other devices. Your users may bring their own array of devices, and access information from varied locations. In this environment, your job as an administrator is to keep your organizational resources secure across all devices.
+The landscape of your user's devices is constantly expanding. Organizations may provide desktops, laptops, phones, tablets, and other devices. Your users may bring their own array of devices, and access information from varied locations. In this environment, your job as an administrator is to keep your organizational resources secure across all devices.
-Azure Active Directory (Azure AD) enables your organization to meet these goals with device identity management. You can now get your devices in Azure AD and control them from a central location in the [Azure portal](https://portal.azure.com/). This gives you a unified experience, enhanced security, and reduces the time needed to configure a new device.
+Azure Active Directory (Azure AD) enables your organization to meet these goals with device identity management. You can now get your devices in Azure AD and control them from a central location in the [Azure portal](https://portal.azure.com/). This process gives you a unified experience, enhanced security, and reduces the time needed to configure a new device.
-There are multiple methods to integrate your devices into Azure AD:
+There are multiple methods to integrate your devices into Azure AD, they can work separately or together based on the operating system and your requirements:
-* You can [register devices](concept-azure-ad-register.md) with Azure AD
-
-* [Join devices](concept-azure-ad-join.md) to Azure AD (cloud-only) or
-
-* [Create a hybrid Azure AD join](concept-azure-ad-join-hybrid.md) in between devices in your on-premises Active Directory and Azure AD.
+* You can [register devices](concept-azure-ad-register.md) with Azure AD.
+* [Join devices](concept-azure-ad-join.md) to Azure AD (cloud-only).
+* [Hybrid Azure AD join](concept-azure-ad-join-hybrid.md) devices to your on-premises Active Directory domain and Azure AD.
## Learn
Before you begin, make sure that you're familiar with the [device identity manag
The key benefits of giving your devices an Azure AD identity:
-* Increase productivity – With Azure AD, your users can do [seamless sign-on (SSO)](./azuread-join-sso.md) to your on-premises and cloud resources, which enables them to be productive wherever they are.
-
-* Increase security – Azure AD devices enable you to apply [Conditional Access policies](../conditional-access/require-managed-devices.md) to resources based on the identity of the device or user. Conditional Access policies can offer extra protection using [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). Joining a device to Azure AD is a prerequisite for increasing your security with a [Passwordless Authentication](../authentication/concept-authentication-passwordless.md) strategy.
+* Increase productivity – Users can do [seamless sign-on (SSO)](./azuread-join-sso.md) to your on-premises and cloud resources, enabling productivity wherever they are.
-* Improve user experience – With device identities in Azure AD, you can provide your users with easy access to your organization's cloud-based resources from both personal and corporate devices. Administrators can enable [Enterprise State Roaming](enterprise-state-roaming-overview.md) for a unified experience across all Windows devices.
+* Increase security – Apply [Conditional Access policies](../conditional-access/overview.md) to resources based on the identity of the device or user. Joining a device to Azure AD is a prerequisite for increasing your security with a [Passwordless](../authentication/concept-authentication-passwordless.md) strategy.
-* Simplify deployment and management – Device identity management simplifies the process of bringing devices to Azure AD with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot), [bulk provisioning](/mem/intune/enrollment/windows-bulk-enroll), and [self-service: Out of Box Experience (OOBE)](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). You can manage these devices with Mobile Device Management (MDM) tools like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune), and their identities in [Azure portal](https://portal.azure.com/).
+ > [!VIDEO https://www.youtube-nocookie.com/embed/NcONUf-jeS4]
-### Training resources
+* Improve user experience – Provide your users with easy access to your organization's cloud-based resources from both personal and corporate devices. Administrators can enable [Enterprise State Roaming](enterprise-state-roaming-overview.md) for a unified experience across all Windows devices.
-Video: [Conditional access with device controls](https://youtu.be/NcONUf-jeS4)
-
-FAQs: [Azure AD device management FAQ](faq.yml) and [Settings and data roaming FAQ](enterprise-state-roaming-faqs.yml)
+* Simplify deployment and management – Simplify the process of bringing devices to Azure AD with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot), [bulk provisioning](/mem/intune/enrollment/windows-bulk-enroll), or [self-service: Out of Box Experience (OOBE)](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973). Manage devices with Mobile Device Management (MDM) tools like [Microsoft Intune](/mem/intune/fundamentals/what-is-intune), and their identities in [Azure portal](https://portal.azure.com/).
## Plan the deployment project
Consider your organizational needs while you determine the strategy for this dep
### Engage the right stakeholders
-When technology projects fail, they typically do so due to mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you are engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that stakeholder roles in the project are well understood.
+When technology projects fail, they typically do so because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that stakeholder roles in the project are well understood.
For this plan, add the following stakeholders to your list:
Communication is critical to the success of any new service. Proactively communi
We recommend that the initial configuration of your integration method is in a test environment, or with a small group of test devices. See [Best practices for a pilot](../fundamentals/active-directory-deployment-plans.md).
-Hybrid Azure AD join deployment is straightforward, and it's 100% an administratorΓÇÖs task without end user action necessary. You may want to do a [controlled validation of hybrid Azure AD join](hybrid-azuread-join-control.md) before enabling it across the entire organization all at once.
+You may want to do a [targeted deployment of hybrid Azure AD join](hybrid-azuread-join-control.md) before enabling it across the entire organization.
+
+> [!WARNING]
+> Organizations should include a sample of users from varying roles and profiles in their pilot group. A targeted rollout will help identify any issues your plan may not have addressed before you enable for the entire organization.
## Choose your integration methods
The following information can help you decide which integration methods to use.
Use this tree to determine options for organization-owned devices.

> [!NOTE]
-> Personal or bring-your-own device (BYOD) scenarios are not pictured in this diagram. They always result in Azure AD registration.
+> Personal or bring-your-own device (BYOD) scenarios are not pictured in this diagram. They always result in Azure AD registration.
![Decision tree](./media/plan-device-deployment/flowchart.png)
Use this tree to determine options for organization-owned devices.
iOS and Android devices may only be Azure AD registered. The following table presents high-level considerations for Windows client devices. Use it as an overview, then explore the different integration methods in detail.
-| Consideration | Azure AD registered| Azure AD join| Hybrid Azure AD join |
-| - | - | - | - |
-| **Client operating systems**| | | |
-| Windows 10 devices| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Windows down-level devices (Windows 8.1 or Windows 7)| | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-|**Sign in options**| | | |
-| End-user local credentials| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| | |
-| Password| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Device PIN| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| | |
-| Windows Hello| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| | |
-| Windows Hello for Business| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| FIDO 2.0 security keys| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Microsoft Authenticator App (passwordless)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-|**Key capabilities**| | | |
-| SSO to cloud resources| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| SSO to on-premises resources| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Conditional Access <br> (Require devices be marked as compliant) <br> (Must be managed by MDM)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png)|![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-Conditional Access <br>(Require hybrid Azure AD joined devices)| | | ![Checkmark for these values.](./media/plan-device-deployment/check.png)
-| Self-service password reset from the Windows login screen| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Windows Hello PIN reset| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-| Enterprise state roaming across devices| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-
+| Consideration | Azure AD registered | Azure AD joined | Hybrid Azure AD joined |
+| | :: | :: | :: |
+| **Client operating systems** | | | |
+| Windows 10 devices | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Windows down-level devices (Windows 8.1 or Windows 7) | | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+|**Sign in options** | | | |
+| End-user local credentials | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | | |
+| Password | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Device PIN | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | | |
+| Windows Hello | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | | |
+| Windows Hello for Business | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| FIDO 2.0 security keys | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Microsoft Authenticator App (passwordless) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+|**Key capabilities** | | | |
+| SSO to cloud resources | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| SSO to on-premises resources | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Conditional Access <br> (Require devices be marked as compliant) <br> (Must be managed by MDM) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+Conditional Access <br>(Require hybrid Azure AD joined devices) | | | ![Checkmark for these values.](./media/plan-device-deployment/check.png)
+| Self-service password reset from the Windows login screen | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| Windows Hello PIN reset | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
## Azure AD Registration
-Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in a number of ways, depending on the operating system.
+Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in several ways, depending on the operating system.
Azure AD registered devices provide support for Bring Your Own Devices (BYOD) and corporate owned devices to SSO to cloud resources. Access to resources is based on the Azure AD [Conditional Access policies](../conditional-access/require-managed-devices.md) applied to the device and the user.

### Registering devices
-Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in a number of ways, depending on the operating system.
+Registered devices are often managed with [Microsoft Intune](/mem/intune/enrollment/device-enrollment). Devices are enrolled in Intune in several ways, depending on the operating system.
BYOD and corporate owned mobile devices are registered by users installing the Company Portal app.

* [iOS](/mem/intune/user-help/install-and-sign-in-to-the-intune-company-portal-app-ios)
* [Android](/mem/intune/user-help/enroll-device-android-company-portal)
* [Windows 10](/mem/intune/user-help/enroll-windows-10-device)
* [macOS](/mem/intune/user-help/enroll-your-device-in-intune-macos-cp)

If registering your devices is the best option for your organization, see the following resources:

* This overview of [Azure AD registered devices](concept-azure-ad-register.md).
* This end-user documentation on [Register your personal device on your organization's network](https://support.microsoft.com/account-billing/register-your-personal-device-on-your-work-or-school-network-8803dd61-a613-45e3-ae6c-bd1ab25bf8a8).

## Azure AD join

Azure AD join enables you to transition towards a cloud-first model with Windows. It provides a great foundation if you're planning to modernize your device management and reduce device-related IT costs. Azure AD join works with Windows 10 devices only. Consider it as the first choice for new devices.
-However, [Azure AD joined devices can SSO to on-premises resources](azuread-join-sso.md) when they are on the organization's network, can authenticate to on-premises servers like file, print, and other applications.
+[Azure AD joined devices can SSO to on-premises resources](azuread-join-sso.md) when they are on the organization's network, and can authenticate to on-premises servers like file, print, and other applications.
-If this is the best option for your organization, see the following resources:
+If this option is best for your organization, see the following resources:
* This overview of [Azure AD joined devices](concept-azure-ad-join.md).
+* Familiarize yourself with the [Azure AD join implementation plan](azureadjoin-plan.md).
-* Familiarize yourself with the [Azure AD Join implementation plan](azureadjoin-plan.md).
-
-### Provisioning Azure AD Join to your devices
+### Provisioning Azure AD Joined devices
-To provision Azure AD Join, you have the following approaches:
+To provision devices to Azure AD join, you have the following approaches:
* Self-Service: [Windows 10 first-run experience](azuread-joined-devices-frx.md). If you have either Windows 10 Professional or Windows 10 Enterprise installed on a device, the experience defaults to the setup process for company-owned devices.
* [Windows Out of Box Experience (OOBE) or from Windows Settings](https://support.microsoft.com/account-billing/join-your-work-device-to-your-work-or-school-network-ef4d6adb-5095-4e51-829e-5457430f3973)
* [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot)
* [Bulk Enrollment](/mem/intune/enrollment/windows-bulk-enroll)

Choose your deployment procedure after careful [comparison of these approaches](azureadjoin-plan.md).
-You may determine that Azure AD Join is the best solution for a device, and that device may already be in a different states. Here are the upgrade considerations.
-
-| Current device state| Desired device state| How-to |
-| - | - | - |
-| On-premises domain joined| Azure AD Join| Unjoin the device from on-premises domain before joining to Azure AD |
-| Hybrid Azure AD Join| Azure AD Join| Unjoin the device from on-premises domain and from Azure AD before joining to Azure AD |
-| Azure AD registered| Azure AD Join| Unregister the device before joining to Azure AD |
+You may determine that Azure AD join is the best solution for a device in a different state. The following table shows how to change the state of a device.
+| Current device state | Desired device state | How-to |
+| --- | --- | --- |
+| On-premises domain joined | Azure AD joined | Unjoin the device from on-premises domain before joining to Azure AD. |
+| Hybrid Azure AD joined | Azure AD joined | Unjoin the device from on-premises domain and from Azure AD before joining to Azure AD. |
+| Azure AD registered | Azure AD joined | Unregister the device before joining to Azure AD. |
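Before changing a device's state, it can help to confirm what the device reports today. The following is a minimal check with the built-in `dsregcmd` tool, run on the device itself; output field names can vary slightly between Windows builds:

```powershell
# Show the device's current join state.
# A hybrid Azure AD joined device reports both AzureAdJoined and DomainJoined as YES;
# an Azure AD registered device reports WorkplaceJoined as YES.
dsregcmd /status | Select-String "AzureAdJoined", "DomainJoined", "WorkplaceJoined"
```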
## Hybrid Azure AD join
-If you have an on-premises Active Directory environment and you want to join your Active directory domain-joined computers to Azure AD, you can accomplish this with hybrid Azure AD join. It supports a [broad range of Windows devices](hybrid-azuread-join-plan.md), including both Windows current and Windows down-level devices.
+If you have an on-premises Active Directory environment and want to join your existing domain-joined computers to Azure AD, you can accomplish this task with hybrid Azure AD join. It supports a [broad range of Windows devices](hybrid-azuread-join-plan.md), including both Windows current and Windows down-level devices.
-Most organizations already have domain joined devices and manage them via Group Policy or System Center Configuration Manager (SCCM). In that case, we recommend configuring hybrid Azure AD Join to start getting benefits while leveraging existing investment.
+Most organizations already have domain joined devices and manage them via Group Policy or System Center Configuration Manager (SCCM). In that case, we recommend configuring hybrid Azure AD join to start getting benefits while using existing investments.
If hybrid Azure AD join is the best option for your organization, see the following resources:
-* This overview of [Hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md).
-
-* Familiarize yourself with the [Hybrid Azure AD join implementation](hybrid-azuread-join-plan.md) plan.
+* This overview of [hybrid Azure AD joined devices](concept-azure-ad-join-hybrid.md).
+* Familiarize yourself with the [hybrid Azure AD join implementation](hybrid-azuread-join-plan.md) plan.
### Provisioning hybrid Azure AD join to your devices
-[Review your identity infrastructure](hybrid-azuread-join-plan.md). Azure AD Connect provides you with a wizard to configure hybrid Azure AD Join for:
+[Review your identity infrastructure](hybrid-azuread-join-plan.md). Azure AD Connect provides you with a wizard to configure hybrid Azure AD join for:
-* [Federated domains](hybrid-azuread-join-federated-domains.md)
+* [Managed domains](howto-hybrid-azure-ad-join.md#managed-domains)
+* [Federated domains](howto-hybrid-azure-ad-join.md#federated-domains)
-* [Managed domains](hybrid-azuread-join-managed-domains.md)
-
-If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure Hybrid Azure AD join](hybrid-azuread-join-manual.md).
+If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure hybrid Azure AD join](hybrid-azuread-join-manual.md).
> [!NOTE]
-> The on-premises domain-joined Windows 10 device attempts to auto-join to Azure AD to become Hybrid Azure AD joined by default. This will only succeed if you haves set up the right environment.
+> The on-premises domain-joined Windows 10 device attempts to auto-join to Azure AD to become hybrid Azure AD joined by default. This will only succeed if you have set up the right environment.
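If you want to confirm that a domain-joined device has the automatic registration task in place, one way is to query the scheduled task. This is a sketch; the task name below reflects current Windows 10 builds and may differ on older versions:

```powershell
# The built-in task that performs automatic device registration for hybrid Azure AD join.
Get-ScheduledTask -TaskName "Automatic-Device-Join" |
    Select-Object TaskPath, TaskName, State
```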
-You may determine that Hybrid Azure AD Join is the best solution for a device, and that device may already be in a different state. Here are the upgrade considerations.
+You may determine that hybrid Azure AD join is the best solution for a device in a different state. The following table shows how to change the state of a device.
-| Current device state| Desired device state| How-to |
-| - | - | - |
-| On-premises domain join| Hybrid Azure AD Join| Use Azure AD connect or AD FS to join to Azure |
-| On-premises workgroup joined or new| Hybrid Azure AD Join| Supported with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot). Otherwise device needs to be on-premises domain joined before Hybrid Azure AD Join |
-| Azure AD joined| Hybrid Azure AD Join| Unjoin from Azure AD, which puts it in the on-premises workgroup or new state. |
-| Azure AD registerd| Hybrid Azure AD Join| Depends on Windows version. [See these considerations](hybrid-azuread-join-plan.md). |
+| Current device state | Desired device state | How-to |
+| --- | --- | --- |
+| On-premises domain joined | Hybrid Azure AD joined | Use Azure AD Connect or AD FS to join to Azure AD. |
+| On-premises workgroup joined or new | Hybrid Azure AD joined | Supported with [Windows Autopilot](/windows/deployment/windows-autopilot/windows-autopilot). Otherwise, the device needs to be on-premises domain joined before hybrid Azure AD join. |
+| Azure AD joined | Hybrid Azure AD joined | Unjoin from Azure AD, which puts it in the on-premises workgroup or new state. |
+| Azure AD registered | Hybrid Azure AD joined | Depends on Windows version. [See these considerations](hybrid-azuread-join-plan.md). |
## Manage your devices
-Once you have registered or joined your devices to Azure AD, use the [Azure portal](https://portal.azure.com/) as a central place to manage your device identities. The Azure Active Directory devices page enables you to:
+Once you've registered or joined your devices to Azure AD, use the [Azure portal](https://portal.azure.com/) as a central place to manage your device identities. The Azure Active Directory devices page enables you to:
-* [Configure your device settings](device-management-azure-portal.md#configure-device-settings)
-* You need to be a local administrator to manage Windows devices. [Azure AD updates this membership for Azure AD joined devices](assign-local-admin.md), automatically adding those with the device manager role as administrators to all joined devices.
+* [Configure your device settings](device-management-azure-portal.md#configure-device-settings).
+* You need to be a local administrator to manage Windows devices. [Azure AD updates this membership for Azure AD joined devices](assign-local-admin.md), automatically adding users with the device manager role as administrators to all joined devices.
Make sure that you keep the environment clean by [managing stale devices](manage-stale-devices.md), and focus your resources on managing current devices.
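For example, you could report on devices that haven't signed in recently before cleaning them up. This is a minimal sketch using the AzureAD PowerShell module; it assumes the module is installed and you've already run `Connect-AzureAD`, and the 90-day threshold is only an example:

```powershell
# List devices whose approximate last logon is older than 90 days.
$threshold = (Get-Date).AddDays(-90)
Get-AzureADDevice -All:$true |
    Where-Object { $_.ApproximateLastLogonTimeStamp -le $threshold } |
    Select-Object DisplayName, DeviceOSType, ApproximateLastLogonTimeStamp
```

Review the output before disabling or deleting anything; some devices legitimately sign in rarely.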
Make sure that you keep the environment clean by [managing stale devices](manage
### Supported device management tools
-Administrators can secure and further control these registered and joined devices using additional device management tools. These tools provide a means to enforce organization-required configurations like requiring storage to be encrypted, password complexity, software installations, and software updates.
+Administrators can secure and further control registered and joined devices using other device management tools. These tools provide you a way to enforce configurations like requiring storage to be encrypted, password complexity, software installations, and software updates.
Review supported and unsupported platforms for integrated devices:
-| Device management tools| Azure AD registered| Azure AD join| Hybrid Azure AD join|
-| - | - | - | - |
-| [Mobile Device Management (MDM) ](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) <br>Example: Microsoft Intune| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)|
-| [Co management with Microsoft Intune and Microsoft Endpoint Configuration Manager](/mem/configmgr/comanage/overview) <br>(Windows 10 and later)| | ![Checkmark for these values.](./media/plan-device-deployment/check.png)| ![Checkmark for these values.](./media/plan-device-deployment/check.png)|
-| [Group policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831791(v=ws.11))<br>(Windows only)| | | ![Checkmark for these values.](./media/plan-device-deployment/check.png)|
---
- We recommend that you consider [Microsoft Intune Mobile Application management (MAM)](/mem/intune/apps/app-management) with or without device management for registered iOS or Android devices.
-
- Administrators can also [deploy virtual desktop infrastructure (VDI) platforms](howto-device-identity-virtual-desktop-infrastructure.md) hosting Windows operating systems in their organizations to streamline management and reduce costs through consolidation and centralization of resources.
-
-### Troubleshoot device identities
-
-* [Troubleshooting devices using the dsregcmd command](troubleshoot-device-dsregcmd.md)
-
-* [Troubleshooting Enterprise State Roaming settings in Azure Active Directory](enterprise-state-roaming-troubleshooting.md)
-
-If you experience issues with completing hybrid Azure AD join for domain-joined Windows devices, see:
+| Device management tools | Azure AD registered | Azure AD joined | Hybrid Azure AD joined |
+| --- | :---: | :---: | :---: |
+| [Mobile Device Management (MDM) ](/windows/client-management/mdm/azure-active-directory-integration-with-mdm) <br>Example: Microsoft Intune | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| [Co-management with Microsoft Intune and Microsoft Endpoint Configuration Manager](/mem/configmgr/comanage/overview) <br>(Windows 10 and later) | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
+| [Group policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831791(v=ws.11))<br>(Windows only) | | | ![Checkmark for these values.](./media/plan-device-deployment/check.png) |
-* [Troubleshoot hybrid Azure AD join for Windows current devices](troubleshoot-hybrid-join-windows-current.md)
+We recommend that you consider [Microsoft Intune Mobile Application management (MAM)](/mem/intune/apps/app-management) with or without device management for registered iOS or Android devices.
-* [Troubleshoot hybrid Azure AD join for Windows down level devices](troubleshoot-hybrid-join-windows-legacy.md)
+Administrators can also [deploy virtual desktop infrastructure (VDI) platforms](howto-device-identity-virtual-desktop-infrastructure.md) hosting Windows operating systems in their organizations to streamline management and reduce costs through consolidation and centralization of resources.
## Next steps
-* [Plan your Azure AD Join implementation](azureadjoin-plan.md)
-* [Plan your hybrid Azure AD Join implementation](hybrid-azuread-join-plan.md)
+* [Plan your Azure AD join implementation](azureadjoin-plan.md)
+* [Plan your hybrid Azure AD join implementation](hybrid-azuread-join-plan.md)
* [Manage device identities](device-management-azure-portal.md)
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Previously updated : 12/02/2021 Last updated : 01/20/2022
# Troubleshoot hybrid Azure AD-joined devices
-This article provides troubleshooting guidance to help you resolve potential issues with devices that are running Windows 10 or Windows Server 2016.
+This article provides troubleshooting guidance to help you resolve potential issues with devices that are running Windows 10 or newer, or Windows Server 2016 or newer.
Hybrid Azure Active Directory (Azure AD) join supports the Windows 10 November 2015 update and later.
active-directory How To Connect Fed Hybrid Azure Ad Join Post Config Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-hybrid-azure-ad-join-post-config-tasks.md
- Title: 'Azure AD Connect: Hybrid Azure AD join post configuration tasks | Microsoft Docs'
-description: This document details post configuration tasks needed to complete the Hybrid Azure AD join
------ Previously updated : 01/05/2022------
-# Post configuration tasks for Hybrid Azure AD join
-
-After you have run Azure AD Connect to configure your organization for Hybrid Azure AD join, there are a few additional steps that you must complete to finalize that setup. Carry out only the steps that apply for your devices.
-
-## 1. Configure controlled rollout (Optional)
-All domain-joined devices running Windows 10 and Windows Server 2016 automatically register with Azure AD once all configuration steps are complete. If you prefer a controlled rollout rather than this auto-registration, you can use group policy to selectively enable or disable automatic rollout. This group policy should be set before starting the other configuration steps:
-* Create a group policy object in your Active Directory.
-* Name it (ex- Hybrid Azure AD join).
-* Edit and go to: Computer Configuration > Policies > Administrative Templates > Windows Components > Device Registration.
-
->[!NOTE]
->For 2012R2 the policy settings are at **Computer Configuration > Policies > Administrative Templates > Windows Components > Workplace Join > Automatically workplace join client computers**
-
-* Enable this setting: Register domain-joined computers as devices.
-* Apply and click OK.
-* Link the GPO to the location of your choice (organizational unit, security group, or to the domain for all devices).
-
-## 2. Configure network with device registration endpoints
-Make sure that the following URLs are accessible from computers inside your organizational network for registration to Azure AD:
-
-* `https://enterpriseregistration.windows.net`
-* `https://login.microsoftonline.com`
-* `https://device.login.microsoftonline.com`
-
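As a quick sanity check from a computer on the organizational network, you can test TCP connectivity to each endpoint on port 443. This is only a sketch; a successful TCP test doesn't rule out proxy or TLS inspection issues:

```powershell
# Confirm the device registration endpoints answer on port 443.
$endpoints = "enterpriseregistration.windows.net",
             "login.microsoftonline.com",
             "device.login.microsoftonline.com"

foreach ($name in $endpoints) {
    Test-NetConnection -ComputerName $name -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```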
-## 3. Implement WPAD for Windows 10 devices
-If your organization accesses the Internet via an outbound proxy, implement Web Proxy Auto-Discovery (WPAD) to enable Windows 10 computers to register to Azure AD.
-
-## 4. Configure the SCP in any forests that were not configured by Azure AD Connect
-
-The service connection point (SCP) contains your Azure AD tenant information that will be used by your devices for auto-registration. Run the PowerShell script, ConfigureSCP.ps1, that you downloaded from Azure AD Connect.
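If you want to verify that the SCP exists in a given forest, you can read it back with the Active Directory PowerShell module. This is a sketch; the GUID below is the well-known name of the device registration SCP object, but confirm the path against your own configuration partition:

```powershell
Import-Module ActiveDirectory

# The SCP lives in the configuration naming context of the forest.
$configNC = (Get-ADRootDSE).configurationNamingContext
$scpDN = "CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,$configNC"

# The keywords attribute should list your Azure AD tenant name and tenant ID.
(Get-ADObject -Identity $scpDN -Properties keywords).keywords
```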
-
-## 5. Configure any federation service that was not configured by Azure AD Connect
-
-If your organization uses a federation service to sign in to Azure AD, the claim rules in your Azure AD relying party trust must allow device authentication. If you are using federation with AD FS, go to [AD FS Help](https://aka.ms/aadrptclaimrules) to generate the claim rules. If you are using a non-Microsoft federation solution, contact that provider for guidance.
-
->[!NOTE]
->If you have Windows down-level devices, the service must support issuing the authenticationmethod and wiaormultiauthn claims when receiving requests to the Azure AD trust. In AD FS, you should add an issuance transform rule that passes-through the authentication method.
-
-## 6. Enable Azure AD Seamless SSO for Windows down-level devices
-
-If your organization uses Password Hash Synchronization or Pass-through Authentication to sign in to Azure AD, enable [Azure AD Seamless SSO](/azure/active-directory/connect/active-directory-aadconnect-sso) with that sign-in method to authenticate Windows down-level devices.
-
-## 7. Set Azure AD policy for Windows down-level devices
-
-To register Windows down-level devices, you need to make sure that the Azure AD policy allows users to register devices.
-
-* Log-in to your account in the Azure portal.
-* Go to: Azure Active Directory > Devices > Device settings
-* Set "Users may register their devices with Azure AD" to ALL.
-* Click Save
-
-## 8. Add Azure AD endpoint to Windows down-level devices
-
-Add the Azure AD device authentication endpoint to the local Intranet zones on your Windows down-level devices to avoid certificate prompts when authenticating the devices:
-`https://device.login.microsoftonline.com`
-
-If you are using [Seamless SSO](how-to-connect-sso.md), also enable "Allow status bar updates via script" on that zone and add the following endpoint:
-`https://autologon.microsoftazuread-sso.com`
-
-## 9. Install Microsoft Workplace Join on Windows down-level devices
-
-This installer creates a scheduled task on the device system that runs in the user's context. The task is triggered when the user signs in to Windows. The task silently joins the device with Azure AD with the user credentials after authenticating using integrated Windows authentication. The download center is at https://www.microsoft.com/download/details.aspx?id=53554.
-
-## 10. Configure group policy to allow device registration
-
-For information about how to allow hybrid Azure AD join for individual devices, see [Controlled validation of hybrid Azure AD join](../devices/hybrid-azuread-join-control.md).
-
-## Next steps
-[Configure device writeback](how-to-connect-device-writeback.md)
active-directory How To Connect Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-group-writeback.md
Title: 'Azure AD Connect: Group writeback'
-description: This article describes group writeback in Azure AD Connect.
+ Title: 'Azure AD Connect: Group writeback V1'
+description: This article describes group writeback V1 in Azure AD Connect.
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To approve requests, a reviewer must be a global administrator, cloud applicatio
To configure the admin consent workflow, you need:

- An Azure account. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator or owner of the service principal.
## Enable the admin consent workflow
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
The Easy Button client must also be registered as a client in Azure AD, before i
5. Specify who can use the application > **Accounts in this organizational directory only** 6. Select **Register** to complete the initial app registration 7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
- * Application.Read.All
- * Application.ReadWrite.All
- * Application.ReadWrite.OwnedBy
- * Directory.Read.All
- * Group.Read.All
- * IdentityRiskyUser.Read.All
- * Policy.Read.All
- * Policy.ReadWrite.ApplicationConfiguration
- * Policy.ReadWrite.ConditionalAccess
- * User.Read.All
- 8. Grant admin consent for your organization
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down 10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
Next, step through the Easy Button configurations to federate and publish the in
1. From a browser, sign in to the F5 BIG-IP management console
2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
- Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+
+Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
- ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
+ ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
4. Navigate to **Access > Guided Configuration > Microsoft Integration and select Azure AD Application**
- You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+
+You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
Consider the **Azure Service Account Details** be the BIG-IP client application
Some of these are global settings that can be reused for publishing more applications, further reducing deployment time and effort.
-1. Enter **Configuration Name**. A unique name that enables an admin to easily distinguish between Easy Button configurations for published applications
+1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** from your registered application
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted down during tenant registration
4. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
The Service Provider settings define the SAML SP properties for the APM instance
![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
- Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
+Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
3. Check **Enable Encrypted Assertion (Optional)**. Enable to request Azure AD to encrypt SAML assertions
In the **Additional User Attributes tab**, you can enable session augmentation r
You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
-The **Available Policies** list, by default, displays a list of policies that target selected apps.
+The **Available Policies** list, by default, shows all CA policies defined without user-based actions.
-The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list. They are included by default but can be excluded if necessary.
+The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
To select a policy to be applied to the application being published:
To select a policy to be applied to the application being published:
2. Select the right arrow and move it to the **Selected Policies** list
-Selected policies should either have an **Include** or **Exclude option** checked. If both options are checked, the selected policy is not enforced. Exclude all policies while testing. You can go back and enable them later.
+Selected policies should either have an **Include** or **Exclude option** checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing; you can go back and enable them later.
![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
## Scenario description
-For this scenario, you will configure a critical line of business (LOB) application for **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**.
+For this scenario, we have an application using **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**, to gate access to protected content.
-Ideally, Azure AD should manage the application, but being legacy, it does not support any form of modern authentication protocols. Modernization would take considerable effort, introducing inevitable costs, and risk of potential downtime.
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. Modernizing the app would be ideal, but is costly, requires careful planning, and introduces risk of potential impact.
-Instead, a BIG-IP Virtual Edition (VE) deployed between the public internet and the internal Azure VNet application is connected and will be used to gate inbound access to the application, along with Azure AD for its extensive choice of authentication and authorization capabilities.
+One option would be to consider using [Azure AD Application Proxy](/azure/active-directory/app-proxy/application-proxy), as it provides the protocol transitioning required to bridge the legacy application to the modern identity control plane. Or for our scenario, we'll achieve this using F5's BIG-IP Application Delivery Controller (ADC).
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO. It significantly improves the overall security posture of the application, and allows the business to continue operating at pace, without interruption.
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for remote and local access.
## Scenario architecture
The SHA solution for this scenario is made up of the following:
**KDC:** Key Distribution Center (KDC) role on a Domain Controller (DC), issuing Kerberos tickets.
-**BIG-IP:** Reverse proxy functionality enables publishing backend applications. The APM then overlays published applications with SAML Service Provider (SP) and SSO functionality.
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing Kerberos-based SSO to the backend service.
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
Before a client or service can access Microsoft Graph, it must be trusted by the
7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:

   * Application.Read.All
   * Application.ReadWrite.All
   * Application.ReadWrite.OwnedBy
   * Directory.Read.All
   * Group.Read.All
   * IdentityRiskyUser.Read.All
   * Policy.Read.All
   * Policy.ReadWrite.ApplicationConfiguration
   * Policy.ReadWrite.ConditionalAccess
   * User.Read.All

8. Grant admin consent for your organization
Next, step through the Easy Button configurations, and complete the trust to sta
3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
- Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
 ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-kerberos-easy-button/config-easy-button.png)

4. Navigate to **Access > Guided Configuration > Microsoft Integration and select Azure AD Application**
- You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-kerberos-easy-button/easy-button-template.png)
These are general and service account properties. Consider this section to be th
Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
-1. Enter **Configuration Name.** A unique name that enables an admin to easily distinguish between Easy Button configurations for published applications
+1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id, Client ID,** and **Client Secret** from your registered application
+3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted down during tenant registration
![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-kerberos-easy-button/azure-configuration-properties.png)
As our AD infrastructure is based on a .com domain suffix used both, internally
#### Additional User Attributes
-In the **Additional User Attributes tab**, you can enable session augmentation required by various distributed systems such as Oracle, SAP, and other JAVA based implementations requiring attributes stored in other directories. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.![Graphical user interface, text, application, email
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
![Screenshot for additional user attributes](./media/f5-big-ip-kerberos-easy-button/additional-user-attributes.png)
In the **Additional User Attributes tab**, you can enable session augmentation r
You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
-The **Available Policies** list, by default, displays a list of policies that target selected apps.
+The **Available Policies** list, by default, shows all CA policies defined without user-based actions.
-The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list. They are included by default but can be excluded if necessary.
+The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
To select a policy to be applied to the application being published:
To select a policy to be applied to the application being published:
2. Select the right arrow and move it to the **Selected Policies** list
- Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. **Exclude** all policies while testing. You can go back and enable them later.
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing; you can go back and enable them later.
![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
### Pool Properties
-The **Application Pool tab** details the services behind a BIG-IP that are represented as a pool, containing one or more application servers.
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
1. Choose from **Select a Pool.** Create a new pool or select an existing one
For more information, see [Kerberos Constrained Delegation across domains](/prev
## Next steps
-From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
![Screenshot for App views](./media/f5-big-ip-kerberos-easy-button/app-view.png)
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
The Easy Button client must also be registered as a client in Azure AD, before i
7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
- * Application.Read.All
-
- * Application.ReadWrite.All
-
+ * Application.Read.All
+ * Application.ReadWrite.All
 * Application.ReadWrite.OwnedBy
 * Directory.Read.All
 * Group.Read.All
- * IdentityRiskyUser.Read.All
-
- * Policy.Read.All
-
- * Policy.ReadWrite.ApplicationConfiguration
-
- * Policy.ReadWrite.ConditionalAccess
-
- * User.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
8. Grant admin consent for your organization
Consider the **Azure Service Account Details** be the BIG-IP client application
Some of these are global settings that can be reused for publishing more applications, further reducing deployment time and effort.
-1. Enter **Configuration Name**. A unique name that enables an admin to easily distinguish between Easy Button configurations for published applications
+1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** from your registered application
-4. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted down during tenant registration
+
+5. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-ldap/config-properties.png)
The Service Provider settings define the SAML SP properties for the APM instance
![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
- Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
+Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
3. Check **Enable Encrypted Assertion (Optional)**. Enable to request Azure AD to encrypt SAML assertions
For this example, you can include one more attribute:
2. Enter **Source Attribute** as *user.employeeid*

   ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)

#### Additional User Attributes
In the **Additional User Attributes tab**, you can enable session augmentation r
You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
-The **Available Policies** list, by default, displays a list of policies that target selected apps.
+The **Available Policies** list, by default, shows all CA policies defined without user-based actions.
-The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list. They are included by default but can be excluded if necessary.
+The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
To select a policy to be applied to the application being published:
To select a policy to be applied to the application being published:
2. Select the right arrow and move it to the **Selected Policies** list
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. **Exclude** all policies while testing. You can go back and enable them later.
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing; you can go back and enable them later.
![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
Our backend application sits on HTTP port 80 but obviously switch to 443 if your
Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO, the latter of which we'll enable to configure the following.
-* **Header Operation:** Insert
-
-* **Header Name:** upn
-
-* **Header Value:** %{session.saml.last.identity}
-
-* **Header Operation:** Insert
-
-* **Header Name:** employeeid
-
-* **Header Value:** %{session.saml.last.attr.name.employeeid}
-
-
-* **Header Operation:** Insert
+ * **Header Operation:** Insert
+ * **Header Name:** upn
+ * **Header Value:** %{session.saml.last.identity}
-* **Header Name:** eventroles
+ * **Header Operation:** Insert
+ * **Header Name:** employeeid
+ * **Header Value:** %{session.saml.last.attr.name.employeeid}
-* **Header Value:** %{session.ldap.last.attr.eventroles}
+ * **Header Operation:** Insert
+ * **Header Name:** eventroles
+ * **Header Value:** %{session.ldap.last.attr.eventroles}
![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 01/24/2022 Last updated : 01/28/2022
The following Azure AD roles can be assigned with administrative unit scope:
| -- | -- |
| [Authentication Administrator](permissions-reference.md#authentication-administrator) | Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only. |
| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups and groups settings, such as naming and expiration policies, in the assigned administrative unit only. |
-| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators in the assigned administrative unit only. |
+| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators in the assigned administrative unit only. |
| [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. |
| [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators within the assigned administrative unit only. |
| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) * | Can manage all aspects of the SharePoint service. |
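As an illustration of scoping one of these roles to an administrative unit, the following sketch uses the AzureAD PowerShell module; the display names are placeholders, cmdlet availability depends on your module version, and you need to be connected with `Connect-AzureAD` first:

```powershell
# Assign the Password Administrator role scoped to a single administrative unit.
# 'Seattle AU' and 'helpdesk@contoso.com' are hypothetical example values.
$adminUnit = Get-AzureADMSAdministrativeUnit -Filter "displayName eq 'Seattle AU'"
$role      = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Password Administrator'"
$user      = Get-AzureADUser -ObjectId "helpdesk@contoso.com"

New-AzureADMSRoleAssignment -DirectoryScopeId "/administrativeUnits/$($adminUnit.Id)" `
    -RoleDefinitionId $role.Id -PrincipalId $user.ObjectId
```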
active-directory Burp Suite Enterprise Edition Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/burp-suite-enterprise-edition-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Burp Suite Enterprise Edition | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Burp Suite Enterprise Edition'
description: Learn how to configure single sign-on between Azure Active Directory and Burp Suite Enterprise Edition.
Previously updated : 12/09/2020 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Burp Suite Enterprise Edition
+# Tutorial: Azure AD SSO integration with Burp Suite Enterprise Edition
In this tutorial, you'll learn how to integrate Burp Suite Enterprise Edition with Azure Active Directory (Azure AD). When you integrate Burp Suite Enterprise Edition with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Burp Suite Enterprise Edition single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
+* Burp Suite Enterprise Edition supports **IDP** initiated SSO.
-* Burp Suite Enterprise Edition supports **IDP** initiated SSO
-
-* Burp Suite Enterprise Edition supports **Just In Time** user provisioning
-
+* Burp Suite Enterprise Edition supports **Just In Time** user provisioning.
-## Adding Burp Suite Enterprise Edition from the gallery
+## Add Burp Suite Enterprise Edition from the gallery
To configure the integration of Burp Suite Enterprise Edition into Azure AD, you need to add Burp Suite Enterprise Edition from the gallery to your list of managed SaaS apps.
To configure the integration of Burp Suite Enterprise Edition into Azure AD, you
1. In the **Add from the gallery** section, type **Burp Suite Enterprise Edition** in the search box.
1. Select **Burp Suite Enterprise Edition** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for Burp Suite Enterprise Edition
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Burp Suite Enterprise Edition** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<BURPSUITEDOMAIN:PORT>/saml`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In addition to the above, the Burp Suite Enterprise Edition application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirement.

   | Name | Source Attribute |
   | -- | -- |
   | Group | user.groups |
In this section, you test your Azure AD single sign-on configuration with follow
* You can use Microsoft My Apps. When you click the Burp Suite Enterprise Edition tile in the My Apps, you should be automatically signed in to the Burp Suite Enterprise Edition for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).

## Next steps
-Once you configure Burp Suite Enterprise Edition you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Burp Suite Enterprise Edition you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory New Relic Limited Release Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/new-relic-limited-release-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with New Relic | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with New Relic'
description: Learn how to configure single sign-on between Azure Active Directory and New Relic.
Previously updated : 08/31/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with New Relic
+# Tutorial: Azure AD SSO integration with New Relic
In this tutorial, you'll learn how to integrate New Relic with Azure Active Directory (Azure AD). When you integrate New Relic with Azure AD, you can:
To get started, you need:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* A New Relic organization on the [New Relic One account/user model](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/introduction-managing-users/#user-models) and on either Pro or Enterprise edition. For more information, see [New Relic requirements](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more).
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
Once done, you can verify that your users have been added in New Relic by going
Next, you will probably want to assign your users to specific New Relic accounts or roles. To learn more about this, see [User management concepts](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/add-manage-users-groups-roles/#understand-concepts).
-In New Relic's authentication domain UI, you can configure [other settings](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/#session-mgmt), like session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+In New Relic's authentication domain UI, you can configure [other settings](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/#session-mgmt), like session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Sailpoint Identitynow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sailpoint-identitynow-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SailPoint IdentityNow | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with SailPoint IdentityNow'
description: Learn how to configure single sign-on between Azure Active Directory and SailPoint IdentityNow.
Previously updated : 05/31/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with SailPoint IdentityNow
+# Tutorial: Azure AD SSO integration with SailPoint IdentityNow
In this tutorial, you'll learn how to integrate SailPoint IdentityNow with Azure Active Directory (Azure AD). When you integrate SailPoint IdentityNow with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* SailPoint IdentityNow active subscription. If you do not have IdentityNow, please contact [SailPoint IdentityNow support team](mailto:support@sailpoint.com).
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.

* SailPoint IdentityNow supports **SP and IDP** initiated SSO.
-## Adding SailPoint IdentityNow from the gallery
+## Add SailPoint IdentityNow from the gallery
To configure the integration of SailPoint IdentityNow into Azure AD, you need to add SailPoint IdentityNow from the gallery to your list of managed SaaS apps.
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure SailPoint IdentityNow you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure SailPoint IdentityNow you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Surveymonkey Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/surveymonkey-enterprise-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with SurveyMonkey Enterprise | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with SurveyMonkey Enterprise'
description: Learn how to configure single sign-on between Azure Active Directory and SurveyMonkey Enterprise.
Previously updated : 02/11/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with SurveyMonkey Enterprise
+# Tutorial: Azure AD SSO integration with SurveyMonkey Enterprise
In this tutorial, you'll learn how to integrate SurveyMonkey Enterprise with Azure Active Directory (Azure AD). When you integrate SurveyMonkey Enterprise with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * SurveyMonkey Enterprise single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure SurveyMonkey Enterprise you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure SurveyMonkey Enterprise you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Tulip Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tulip-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Tulip | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Tulip'
description: Learn how to configure single sign-on between Azure Active Directory and Tulip.
Previously updated : 06/30/2021 Last updated : 01/27/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Tulip
+# Tutorial: Azure AD SSO integration with Tulip
In this tutorial, you'll learn how to integrate Tulip with Azure Active Directory (Azure AD). When you integrate Tulip with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Tulip single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. - * Tulip supports **IDP** initiated SSO.
-## Adding Tulip from the gallery
+## Add Tulip from the gallery
To configure the integration of Tulip into Azure AD, you need to add Tulip from the gallery to your list of managed SaaS apps.
To configure the integration of Tulip into Azure AD, you need to add Tulip from
1. In the **Add from the gallery** section, type **Tulip** in the search box. 1. Select **Tulip** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Tulip Configure and test Azure AD SSO with Tulip using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tulip.
Follow these steps to enable Azure AD SSO in the Azure portal.
![image3](common/idp-intiated.png) > [!Note]
- > If the **Identifier** and **Reply URL** values are not getting auto polulated, then fill in the values manually according to your requirement.
+ > If the **Identifier** and **Reply URL** values are not getting auto populated, then fill in the values manually according to your requirement.
1. Tulip application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
Follow these steps to enable Azure AD SSO in the Azure portal.
| badgeID | user.employeeid | | groups |user.groups | - 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Tulip** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you create a user called Britta Simon in Tulip. Work with [Tul
In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the Tulip for which you set up the SSO
+* Click on Test this application in Azure portal and you should be automatically signed in to the Tulip for which you set up the SSO.
* You can use Microsoft My Apps. When you click the Tulip tile in the My Apps, you should be automatically signed in to the Tulip for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). ## Next steps
-Once you configure Tulip you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Tulip you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/update-credentials.md
Title: Reset the credentials for a cluster
-description: Learn how update or reset the service principal or AAD Application credentials for an Azure Kubernetes Service (AKS) cluster
+description: Learn how to update or reset the service principal or Azure AD Application credentials for an Azure Kubernetes Service (AKS) cluster.
Last updated 03/11/2019
Last updated 03/11/2019
AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. This article details how to update these credentials for an AKS cluster.
-You may also have [integrated your AKS cluster with Azure Active Directory][aad-integration], and use it as an authentication provider for your cluster. In that case you will have 2 more identities created for your cluster, the AAD Server App and the AAD Client App, you may also reset those credentials.
+You may also have [integrated your AKS cluster with Azure Active Directory (Azure AD)][aad-integration], and use it as an authentication provider for your cluster. In that case, you'll have two more identities created for your cluster: the Azure AD Server App and the Azure AD Client App. You can also reset those credentials.
Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities are easier to manage than service principals and do not require updates or rotations. For more information, see [Use managed identities](use-managed-identity.md).
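A minimal sketch of that switch with the Azure CLI, assuming an existing service-principal-based cluster (the resource group and cluster names below are placeholders):

```azurecli-interactive
# Hypothetical names; updates an existing cluster to use a system-assigned managed identity
# instead of a service principal. Review the linked article before running against production.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-managed-identity
```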
If you chose to update the existing service principal credentials in the previou
To create a service principal and then update the AKS cluster to use these new credentials, use the [az ad sp create-for-rbac][az-ad-sp-create] command. ```azurecli-interactive
-az ad sp create-for-rbac
+az ad sp create-for-rbac --role Contributor
``` The output is similar to the following example. Make a note of your own `appId` and `password`. These values are used in the next step.
az aks update-credentials \
For small and midsize clusters, it takes a few moments for the service principal credentials to be updated in the AKS.
-## Update AKS Cluster with new AAD Application credentials
+## Update AKS Cluster with new Azure AD Application credentials
-You may create new AAD Server and Client applications by following the [AAD integration steps][create-aad-app]. Or reset your existing AAD Applications following the [same method as for service principal reset](#reset-the-existing-service-principal-credential). After that you just need to update your cluster AAD Application credentials using the same [az aks update-credentials][az-aks-update-credentials] command but using the *--reset-aad* variables.
+You may create new Azure AD Server and Client applications by following the [Azure AD integration steps][create-aad-app]. Or reset your existing Azure AD Applications following the [same method as for service principal reset](#reset-the-existing-service-principal-credential). After that you just need to update your cluster Azure AD Application credentials using the same [az aks update-credentials][az-aks-update-credentials] command but using the *--reset-aad* variables.
```azurecli-interactive az aks update-credentials \
az aks update-credentials \
## Next steps
-In this article, the service principal for the AKS cluster itself and the AAD Integration Applications were updated. For more information on how to manage identity for workloads within a cluster, see [Best practices for authentication and authorization in AKS][best-practices-identity].
+In this article, the service principal for the AKS cluster itself and the Azure AD Integration Applications were updated. For more information on how to manage identity for workloads within a cluster, see [Best practices for authentication and authorization in AKS][best-practices-identity].
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
API Management configured in a virtual network provides a single gateway interfa
## Next steps
+* Set up using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/private-webapp-with-app-gateway-and-apim).
* Learn more about Application Gateway: * [Application Gateway overview](../application-gateway/overview.md)
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/configure-custom-domain.md
If you already have a private certificate from a third-party provider, you can u
We recommend using Azure Key Vault to [manage your certificates](../key-vault/certificates/about-certificates.md) and setting them to `autorenew`.
-If you use Azure Key Vault to manage a custom domain TLS certificate, make sure the certificate is inserted into Key Vault [as a _certificate_](/rest/api/keyvault/createcertificate/createcertificate), not a _secret_.
+If you use Azure Key Vault to manage a custom domain TLS certificate, make sure the certificate is inserted into Key Vault [as a _certificate_](/rest/api/keyvault/certificates/create-certificate/create-certificate), not a _secret_.
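For example, the Azure CLI can import a PFX file as a Key Vault *certificate* object (the vault name, certificate name, and file path below are placeholders):

```azurecli-interactive
# Hypothetical names; imports the PFX as a Key Vault certificate rather than a secret.
az keyvault certificate import \
    --vault-name contoso-kv \
    --name contoso-api-tls \
    --file ./contoso-api-tls.pfx
```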
To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. * When you use the Azure portal to import the certificate, all the necessary configuration steps are completed automatically.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
To secure a custom domain in a TLS binding, the certificate has additional requi
> [!NOTE] > Before creating a free managed certificate, make sure you have [fulfilled the prerequisites](#prerequisites) for your app.
-The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. It's a TLS/SSL server certificate that's fully managed by App Service and renewed continuously and automatically in six-month increments, 45 days before expiration. You create the certificate and bind it to a custom domain, and let App Service do the rest.
+The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. It's a TLS/SSL server certificate that's fully managed by App Service and renewed continuously and automatically in six-month increments, 45 days before expiration, as long as the prerequisites stay in place; no action is required from you. All the associated bindings are updated with the renewed certificate. You create the certificate and bind it to a custom domain, and let App Service do the rest.
The free certificate comes with the following limitations: - Does not support wildcard certificates. - Does not support usage as a client certificate by certificate thumbprint (removal of certificate thumbprint is planned).
+- Does not support private DNS.
- Is not exportable.-- Is not supported on App Service not publicly accessible. - Is not supported on App Service Environment (ASE).
+- Only supports alphanumeric characters, dashes (-), and periods (.).
+
+# [Apex domain](#tab/apex)
+- Must have an A record pointing to your web app's IP address.
- Is not supported with root domains that are integrated with Traffic Manager.-- If a certificate is for a CNAME-mapped domain, the CNAME must be mapped directly to `<app-name>.azurewebsites.net`.
+- All the above must be met for successful certificate issuance and renewal.
+
+# [Subdomain](#tab/subdomain)
+- Must have a CNAME mapped _directly_ to `<app-name>.azurewebsites.net`; using services that proxy the CNAME value will block certificate issuance and renewal.
+- All the above must be met for successful certificate issuance and renewal, as shown in the example records after these tabs.
+
+--
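To illustrate the DNS requirements in the tabs above, hypothetical records for an apex domain and a subdomain might look like this (the IP address and app name are placeholders; use the values shown for your own app):

```
; hypothetical zone entries for contoso.com
@      A      20.50.10.5                   ; apex domain -> the web app's inbound IP address
www    CNAME  contoso.azurewebsites.net.   ; subdomain mapped directly to the app's default hostname
```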
> [!NOTE] > The free certificate is issued by DigiCert. For some domains, you must explicitly allow DigiCert as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue digicert.com`.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
# Migration to App Service Environment v3
-App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). If you want to migrate an App Service Environment v1 to an App Service Environment v3, see the [migration alternatives documentation](migration-alternatives.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
> [!IMPORTANT] > It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-managed-identity.md
There is a simple REST protocol for obtaining a token in App Service and Azure F
### Using the REST protocol > [!NOTE]
-> An older version of this protocol, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. MSI_ENDPOINT can be used as an alias for IDENTITY_ENDPOINT, and MSI_SECRET can be used as an alias for IDENTITY_HEADER. This version of the protocol is currently required for Linux Consumption hosting plans.
+> An older version of this protocol, using the "2017-09-01" API version, used the `secret` header instead of `X-IDENTITY-HEADER` and only accepted the `clientid` property for user-assigned. It also returned the `expires_on` in a timestamp format. MSI_ENDPOINT can be used as an alias for IDENTITY_ENDPOINT, and MSI_SECRET can be used as an alias for IDENTITY_HEADER.
An app with a managed identity has two environment variables defined:
The **IDENTITY_ENDPOINT** is a local URL from which your app can request tokens.
> | Parameter name | In | Description | > |-|--|--| > | resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
-> | api-version | Query | The version of the token API to be used. Please use "2019-08-01" or later (unless using Linux Consumption, which currently only offers "2017-09-01" - see note above). |
+> | api-version | Query | The version of the token API to be used. Please use "2019-08-01" or later. |
> | X-IDENTITY-HEADER | Header | The value of the IDENTITY_HEADER environment variable. This header is used to help mitigate server-side request forgery (SSRF) attacks. | > | client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Cannot be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. | > | principal_id | Query | (Optional) The principal ID of the user-assigned identity to be used. `object_id` is an alias that may be used instead. Cannot be used on a request that includes client_id, mi_res_id, or object_id. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used. |
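For example, a sketch of the token request made from inside the app using the environment variables above; the target resource (Azure Key Vault) is only an illustration:

```bash
# Request a token for Azure Key Vault from the local identity endpoint.
curl "${IDENTITY_ENDPOINT}?resource=https://vault.azure.net&api-version=2019-08-01" \
  -H "X-IDENTITY-HEADER: ${IDENTITY_HEADER}"
```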
applied-ai-services Get Started Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdk-rest-api.md
recommendations: false
-# Get started with a client library SDKs or REST API
+# Get started with Form Recognizer client library SDKs or REST API
Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract and analyze form fields, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
keywords: document processing
<!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD034 --> <!-- markdownlint-disable MD029 -->
-# Get started with the Sample Labeling tool
+# Get started with the Form Recognizer Sample Labeling tool
Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract and analyze form fields, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 01/04/2022 Last updated : 01/28/2022 recommendations: false <!-- markdownlint-disable MD025 -->
-# Quickstart: C# client library SDK v3.0 | Preview
+# Get started: Form Recognizer C# SDK v3.0 | Preview
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 01/24/2022 Last updated : 01/28/2022 recommendations: false <!-- markdownlint-disable MD025 -->
-# Quickstart: Java client library SDK v3.0 | Preview
+# Get started: Form Recognizer Java SDK v3.0 | Preview
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 01/04/2022 Last updated : 01/28/2022 recommendations: false <!-- markdownlint-disable MD025 -->
-# Quickstart: Form Recognizer JavaScript client library SDKs v3.0 | Preview
+# Get Started: Form Recognizer JavaScript SDK v3.0 | Preview
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 01/04/2022 Last updated : 01/28/2022 recommendations: false <!-- markdownlint-disable MD025 -->
-# Quickstart: Python client library SDK v3.0 | Preview
+# Get started: Form Recognizer Python SDK v3.0 | Preview
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 11/02/2021 Last updated : 01/28/2022
-# Quickstart: REST API | Preview
+# Get started: Form Recognizer REST API v3.0 | Preview
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
azure-cache-for-redis Cache Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-managed-identity.md
To use managed identity, you must have a premium-tier cache.
:::image type="content" source="media/cache-managed-identity/identity-add.png" alt-text="User assigned identity status is on":::
-1. A sidebar pops up to allow you to select any available user-assigned identity to your subscription. Choose an identity and select **Add**. For more information on user assigned managed identities, see [manage user-assigned identity](/azure/active-directory/managed-identities-azure-resources/manage-user-assigned-managed-identities.md).
+1. A sidebar pops up to allow you to select any available user-assigned identity to your subscription. Choose an identity and select **Add**. For more information on user assigned managed identities, see [manage user-assigned identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
>[!Note] >You need to [create a user assigned identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) in advance of this step. >
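For example, a user-assigned identity can be created ahead of time with the Azure CLI (the resource group and identity names below are placeholders):

```azurecli-interactive
# Hypothetical names; creates a user-assigned managed identity to select in the cache's Identity blade.
az identity create \
    --resource-group myResourceGroup \
    --name myCacheIdentity
```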
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/language-support-policy.md
After the language end-of-life date, function apps that use retired language ver
## Retirement policy exceptions
-There are few exceptions to the retirement policy outlined above. Here is a list of languages that are approaching or have reached their end-of-life dates but continue to be supported on the platform until further notice. When these languages versions reach their end-of-life dates, they are no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
+There are a few exceptions to the retirement policy outlined above. Here is a list of languages that are approaching or have reached their end-of-life (EOL) dates but continue to be supported on the platform until further notice. When these language versions reach their end-of-life dates, they are no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
|Language Versions |EOL Date |Retirement Date|
|--|--|-|
-|.NET 5|February 2022|TBA|
+|.NET 5|8 May 2022|TBA|
|Node 6|30 April 2019|28 February 2022|
|Node 8|31 December 2019|28 February 2022|
|Node 10|30 April 2021|30 September 2022|
azure-functions Remove https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/remove.md
After you enable the Start/Stop VMs v2 (preview) feature to manage the running s
To delete the resource group, follow the steps outlined in the [Azure Resource Manager resource group and resource deletion](../../azure-resource-manager/management/delete-resource-group.md) article.
+> [!NOTE]
+> You might need to manually remove the managed identity associated with the removed Start Stop V2 function app. You can determine whether you need to do this by going to your subscription and selecting **Access Control (IAM)**. From there you can filter by Type: `App Services or Function Apps`. If you find a managed identity that was left over from your removed Start Stop V2 installation, you must remove it. Leaving this managed identity could interfere with future installations.
+ ## Next steps
-To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md) (preview).
+To re-deploy this feature, see [Deploy Start/Stop v2](deploy.md) (preview).
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
The Action Groups Secure Webhook action enables you to take advantage of Azure A
1. Create an Azure AD Application for your protected web API. See [Protected web API: App registration](../../active-directory/develop/scenario-protected-web-api-app-registration.md). - Configure your protected API to be [called by a daemon app](../../active-directory/develop/scenario-protected-web-api-app-registration.md#if-your-web-api-is-called-by-a-daemon-app).
+ > [!NOTE]
+ > Your protected web API must be configured to [accept V2.0 access tokens](../../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute).
+
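   As a reference, this behavior is controlled by the `accessTokenAcceptedVersion` attribute in the application's manifest; a minimal excerpt (only the relevant attribute is shown) would be:

   ```json
   {
     "accessTokenAcceptedVersion": 2
   }
   ```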
2. Enable Action Group to use your Azure AD Application. > [!NOTE]
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ilogger.md
namespace WebApplication
// or when you need to capture logs during application startup, such as // in Program.cs or Startup.cs itself. builder.AddApplicationInsights(
- context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"]);
+ context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"]);
// Capture all log-level entries from Program builder.AddFilter<ApplicationInsightsLoggerProvider>(
namespace WebApplication
} ```
-In the preceding code, `ApplicationInsightsLoggerProvider` is configured with your `"APPLICATIONINSIGHTS_CONNECTION_STRING"` connection string. Filters are applied, setting the log level to <xref:Microsoft.Extensions.Logging.LogLevel.Trace?displayProperty=nameWithType>.
+In the preceding code, `ApplicationInsightsLoggerProvider` is configured with your `"APPINSIGHTS_INSTRUMENTATIONKEY"` instrumentation key. Filters are applied, setting the log level to <xref:Microsoft.Extensions.Logging.LogLevel.Trace?displayProperty=nameWithType>.
-> [!IMPORTANT]
-> We recommend [connection strings](./sdk-connection-string.md?tabs=net) over instrumentation keys. New Azure regions *require* the use of connection strings instead of instrumentation keys.
->
-> A connection string identifies the resource that you want to associate with your telemetry data. It also allows you to modify the endpoints that your resource will use as a destination for your telemetry. You'll need to copy the connection string and add it to your application's code or to an environment variable.
#### Example Startup.cs
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/usage-overview.md
To do this, [set up a telemetry initializer](./api-filtering-sampling.md#addmodi
// Telemetry initializer class public class MyTelemetryInitializer : ITelemetryInitializer {
+ // In this example, to differentiate versions, we use the value specified in the AssemblyInfo.cs
+ // for ASP.NET apps, or in your project file (.csproj) for the ASP.NET Core apps. Make sure that
+ // you set a different assembly version when you deploy your application for A/B testing.
+ static readonly string _version =
+ System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString();
+
public void Initialize(ITelemetry item)
- {
- var itemProperties = item as ISupportProperties;
- if (itemProperties != null && !itemProperties.Properties.ContainsKey("AppVersion"))
- {
- itemProperties.Properties["AppVersion"] = "v2.1";
- }
- }
+ {
+ item.Context.Component.Version = _version;
+ }
} ```
In the web app initializer such as Global.asax.cs:
{ // ... TelemetryConfiguration.Active.TelemetryInitializers
- .Add(new MyTelemetryInitializer());
+ .Add(new MyTelemetryInitializer());
} ```
In the web app initializer such as Global.asax.cs:
For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new `TelemetryInitializer` is done by adding it to the Dependency Injection container, as shown below. This is done in `ConfigureServices` method of your `Startup.cs` class. ```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- using CustomInitializer.Telemetry;
- public void ConfigureServices(IServiceCollection services)
+using Microsoft.ApplicationInsights.Extensibility;
+
+public void ConfigureServices(IServiceCollection services)
{ services.AddSingleton<ITelemetryInitializer, MyTelemetryInitializer>(); } ```
-All new TelemetryClients automatically add the property value you specify. Individual telemetry events can override the default values.
- ## Next steps - [Users, Sessions, Events](usage-segmentation.md) - [Funnels](usage-funnels.md)
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-get-started.md
This article provides a guide for installing the Azure Application Consistent Sn
## Getting the snapshot tools
-It is recommended customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapdownload) from Microsoft.
+It's recommended customers get the most recent version of the [AzAcSnap Installer](https://aka.ms/azacsnapinstaller) from Microsoft.
-The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature) which is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.
+The self-installation file has an associated [AzAcSnap Installer signature file](https://aka.ms/azacsnapdownloadsignature). This file is signed with Microsoft's public key to allow for GPG verification of the downloaded installer.
Once these downloads are completed, then follow the steps in this guide to install. ### Verifying the download
-The installer, which is downloadable per above, has an associated PGP signature file with an `.asc`
-filename extension. This file can be used to ensure the installer downloaded is a verified
-Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here
-(<https://packages.microsoft.com/keys/microsoft.asc>) and has been used to sign the signature file.
+The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified
+Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here (<https://packages.microsoft.com/keys/microsoft.asc>)
+and has been used to sign the signature file.
The Microsoft PGP Public Key can be imported to a user's local as follows:
The following commands trust the Microsoft PGP Public Key:
1. List the keys in the store. 2. Edit the Microsoft key.
-3. Check the fingerprint with `fpr`
+3. Check the fingerprint with `fpr`.
4. Sign the key to trust it. ```bash
gpg: Good signature from "Microsoft (Release signing)
<gpgsecurity@microsoft.com>" [full] ```
-For more details about using GPG, see [The GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual/book1.html).
+For more information about using GPG, see [The GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual/book1.html).
## Supported scenarios
See [Supported scenarios for HANA Large Instances](../virtual-machines/workloads
The following matrix is provided as a guideline on which versions of SAP HANA are supported by SAP for Storage Snapshot Backups.
-| Database Versions |1.0 SPS12|2.0 SPS0|2.0 SPS1|2.0 SPS2|2.0 SPS3|2.0 SPS4|
-|-||--|--|--|--|--|
-|Single Container Database| √ | √ | - | - | - | - |
-|MDC Single Tenant | - | - | √ | √ | √ | √ |
-|MDC Multiple Tenants | - | - | - | - | - | √ |
-> √ = <small>supported by SAP for Storage Snapshots</small>
+
+| Database type | Minimum database versions | Notes |
+|||--|
+| Single Container Database | 1.0 SPS 12, 2.0 SPS 00 | |
| MDC Single Tenant | 2.0 SPS 01 | or later versions where MDC Single Tenant is supported by SAP for storage/data snapshots.* |
| MDC Multiple Tenants | 2.0 SPS 04 | or later versions where MDC Multiple Tenants is supported by SAP for data snapshots. |
+> \* SAP changed terminology from Storage Snapshots to Data Snapshots from 2.0 SPS 02
+ ## Important things to remember
are supported by SAP for Storage Snapshot Backups.
necessary, delete the old snapshots on a regular basis to avoid storage fill out. - Always use the latest snapshot tools. - Use the same version of the snapshot tools across the landscape.-- Test the snapshot tools and get comfortable with the parameters required and output of the
- command before using in the production system.
-- When setting up the HANA user for backup (details below in this document), you need to
- set up the user for each HANA instance. Create an SAP HANA user account to access HANA
- instance under the SYSTEMDB (and not in the SID database) for MDC. In the single container
- environment, it can be set up under the tenant database.
-- Customers must provide the SSH public key for storage access. This action must be done once per
- node and for each user under which the command is executed.
+- Test the snapshot tools to understand the parameters required and their behavior, along with the log files, before deployment into production.
+- When setting up the HANA user for backup, you need to set up the user for each HANA instance. Create an SAP HANA user account to access HANA
+ instance under the SYSTEMDB (and not in the SID database) for MDC. In the single container environment, it can be set up under the tenant database.
+- Customers must provide the SSH public key for storage access. This action must be done once per node and for each user under which the command is executed.
- The number of snapshots per volume is limited to 250.-- If manually editing the configuration file, always use a Linux text editor such as "vi" and not
- Windows editors like Notepad. Using Windows editor may corrupt the file format.
- - Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA.
+- If manually editing the configuration file, always use a Linux text editor such as "vi" and not Windows editors like Notepad. Using Windows editor may corrupt the file format.
+- Set up `hdbuserstore` for the SAP HANA user to communicate with SAP HANA.
- For DR: The snapshot tools must be tested on DR node before DR is set up.-- Monitor disk space regularly, automated log deletion is managed with the `--trim` option of the
- `azacsnap -c backup` for SAP HANA 2 and later releases.
-- **Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA
-system specified in the configuration file. If this node becomes unavailable, there is no mechanism to
-automatically start communicating with another node.
- - For an **SAP HANA Scale-Out with Standby** scenario it is typical to install and configure the snapshot
- tools on the master node. But, if the master node becomes unavailable, the standby node will take over
-the master node role. In this case, the implementation team should configure the snapshot tools on both
-nodes (Master and Stand-By) to avoid any missed snapshots. In the normal state, the master node will take
-HANA snapshots initiated by crontab, but after master node failover those snapshots will have to be
-executed from another node such as the new master node (former standby). To achieve this outcome, the standby
-node would need the snapshot tool installed, storage communication enabled, hdbuserstore configured,
-`azacsnap.json` configured, and crontab commands staged in advance of the failover.
- - For an **SAP HANA HSR HA** scenario, it is recommended to install, configure, and schedule the
-snapshot tools on both (Primary and Secondary) nodes. Then, if the Primary node becomes unavailable,
-the Secondary node will take over with snapshots being taken on the Secondary. In the normal state, the
-Primary node will take HANA snapshots initiated by crontab and the Secondary node would attempt to take
-snapshots but fail as the Primary is functioning correctly. But after Primary node failover, those
-snapshots will be executed from the Secondary node. To achieve this outcome, the Secondary node needs the
-snapshot tool installed, storage communication enabled, `hdbuserstore` configured, azacsnap.json
-configured, and crontab enabled in advance of the failover.
+- Monitor disk space regularly
+ - Automated log deletion is managed with the `--trim` option of the `azacsnap -c backup` for SAP HANA 2 and later releases.
+- **Risk of snapshots not being taken** - The snapshot tools only interact with the node of the SAP HANA system specified in the configuration file. If this
+ node becomes unavailable, there's no mechanism to automatically start communicating with another node.
+ - For an **SAP HANA Scale-Out with Standby** scenario it's typical to install and configure the snapshot tools on the primary node. But, if the primary node becomes
+ unavailable, the standby node will take over the primary node role. In this case, the implementation team should configure the snapshot tools on both
+ nodes (Primary and Stand-By) to avoid any missed snapshots. In the normal state, the primary node will take HANA snapshots initiated by crontab. If the primary
+ node fails over, those snapshots will have to be executed from another node, such as the new primary node (former standby). To achieve this outcome, the standby
+ node would need the snapshot tool installed, storage communication enabled, hdbuserstore configured, `azacsnap.json` configured, and crontab commands staged
+ in advance of the failover.
+ - For an **SAP HANA HSR HA** scenario, it's recommended to install, configure, and schedule the snapshot tools on both (Primary and Secondary) nodes. Then, if
+ the Primary node becomes unavailable, the Secondary node will take over with snapshots being taken on the Secondary. In the normal state, the Primary node
+ will take HANA snapshots initiated by crontab. The Secondary node would attempt to take snapshots but fail as the Primary is functioning correctly. But,
+ after Primary node failover, those snapshots will be executed from the Secondary node. To achieve this outcome, the Secondary node needs the snapshot tool
+ installed, storage communication enabled, `hdbuserstore` configured, `azacsnap.json` configured, and crontab enabled in advance of the failover.
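As a sketch of what such scheduling could look like (the path, prefix, and retention value are hypothetical; confirm the exact options against the AzAcSnap documentation for your version), a crontab entry might be:

```bash
# Hypothetical example: take an azacsnap backup of the data volume every 4 hours,
# keeping 42 snapshots and trimming HANA log backups with --trim.
0 */4 * * * /home/azacsnap/bin/azacsnap -c backup --volume data --prefix hana_4hourly --retention 42 --trim
```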
## Guidance provided in this document
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 01/07/2022 Last updated : 01/28/2022 # Solution architectures using Azure NetApp Files
This section provides solutions for Azure platform services.
* [Integrate Azure NetApp Files with Azure Kubernetes Service](../aks/azure-netapp-files.md) * [Application data protection for AKS workloads on Azure NetApp Files - Azure Example Scenarios](/azure/architecture/example-scenario/file-storage/data-protection-kubernetes-astra-azure-netapp-files) * [Disaster Recovery of AKS workloads with Astra Control Service and Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/disaster-recovery-of-aks-workloads-with-astra-control-service/ba-p/2948089)
+* [Protecting MongoDB on AKS/ANF with Astra Control Service using custom execution hooks](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-mongodb-on-aks-anf-with-astra-control-service-using/ba-p/3057574)
+* [Comparing and Contrasting the AKS/ANF NFS subdir external provisioner with Astra Trident](https://techcommunity.microsoft.com/t5/azure-architecture-blog/comparing-and-contrasting-the-aks-anf-nfs-subdir-external/ba-p/3057547)
* [Out-of-This-World Kubernetes performance on Azure with Azure NetApp Files](https://cloud.netapp.com/blog/ma-anf-blg-configure-kubernetes-openshift) * [Azure NetApp Files + Trident = Dynamic and Persistent Storage for Kubernetes](https://anfcommunity.com/2021/02/16/azure-netapp-files-trident-dynamic-and-persistent-storage-for-kubernetes/) * [Trident - Storage Orchestrator for Containers](https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/backends/anf.html)
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-display-health-status.md
na Previously updated : 12/03/2021 Last updated : 01/28/2022
-# Display health status of replication relationship
+# Display health and monitor status of replication relationship
-You can view replication status on the source volume or the destination volume.
+You can view replication status on the source volume or the destination volume. You can also set alert rules in Azure Monitor to help you monitor the replication status.
## Display replication status
-1. From either the source volume or the destination volume, click **Replication** under Storage Service for either volume.
+1. From either the source volume or the destination volume, select **Replication** under Storage Service for either volume.
- The following replication status and health information is displayed:
+ The following information about replication status and health is displayed:
* **End point type** – Identifies whether the volume is the source or destination of replication. * **Health** – Displays the health status of the replication relationship. * **Mirror state** – Shows one of the following values:
You can view replication status on the source volume or the destination volume.
A transfer operation is in progress and future transfers are not disabled. * **Replication schedule** – Shows how frequently incremental mirroring updates will be performed when the initialization (baseline copy) is complete.
- * **Total progress** -- Shows the total amount of cumulative bytes transferred over the lifetime of the relationship. This amount is the actual bytes transferred, and it might differ from the logical space that the source and destination volumes report.
+ * **Total progress** – Shows the total number of cumulative bytes transferred over the lifetime of the relationship. This amount is the actual bytes transferred, and it might differ from the logical space that the source and destination volumes report.
![Replication health status](../media/azure-netapp-files/cross-region-replication-health-status.png) > [!NOTE] > Replication relationship shows health status as *unhealthy* if previous replication jobs are not complete. This status is a result of large volumes being transferred with a lower transfer window (for example, a ten-minute transfer time for a large volume). In this case, the relationship status shows *transferring* and health status shows *unhealthy*.
-## Next steps
+## Set alert rules to monitor replication
+
+Follow these steps to create [alert rules in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) to help you monitor the status of cross-region replication:
+
+1. From Azure Monitor, select **Alerts**.
+2. From the Alerts window, select the **Create** dropdown and select **Create new alert rule**.
+3. From the Scope tab of the Create an Alert Rule page, select **Select scope**. The **Select a Resource** page appears.
+4. From the Resource tab, find the **Volumes** resource type.
+5. From the Condition tab, select **Add condition**. From there, find the signal called **is volume replication healthy**.
+6. There you'll see **Condition of the relationship, 1 or 0**, and the **Configure Signal Logic** window is displayed.
+7. To check if the replication is _unhealthy_:
+ 1. Set **Operator** to `Less than or equal to`.
+ 1. Set **Aggregation type** to `Average`.
+ 1. Set **Threshold** value to `0`.
+ 1. Set **Unit** to `Count`.
+8. To check if the replication is healthy:
+ 1. Set **Operator** to `Greater than or equal to`.
+ 1. Set **Aggregation type** to `Average`.
+ 1. Set **Threshold** value to `1`.
+ 1. Set **Unit** to `Count`.
+9. Select **Review + create**. The alert rule is ready for use.
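The same alert can also be scripted. The sketch below uses the Azure CLI with hypothetical resource names, and `<replication-health-metric>` is a placeholder for the "is volume replication healthy" signal shown in the portal:

```azurecli-interactive
# Hypothetical names and IDs; fires when the replication health metric averages 0 (unhealthy).
az monitor metrics alert create \
    --name anf-replication-unhealthy \
    --resource-group myResourceGroup \
    --scopes <volume-resource-id> \
    --condition "avg <replication-health-metric> <= 0" \
    --description "Cross-region replication reported unhealthy"
```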
++
+## Next steps
* [Cross-region replication](cross-region-replication-introduction.md) * [Manage disaster recovery](cross-region-replication-manage-disaster-recovery.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na Previously updated : 01/25/2022 Last updated : 01/28/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
## January 2022
+* [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md)
+
+ [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).
+
+ The public preview of v5.1 brings the following new capabilities to AzAcSnap:
+ * Oracle Database support
+ * Backint Co-existence
+ * Azure Managed Disk
+ * RunBefore and RunAfter capability
+ * [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope) You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
azure-resource-manager Concepts View Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/concepts-view-definition.md
In this view you can extend existing Azure resources based on the `targetResourc
## Looking for help
-If you have questions about Azure Managed Applications, try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managedapps). A similar question may have already been asked and answered, so check first before posting. Add the tag `azure-managedapps` to get a fast response!
+If you have questions about Azure Managed Applications, try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managed-app). A similar question may have already been asked and answered, so check first before posting. Add the tag `azure-managed-app` to get a fast response!
## Next steps
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | integrationAccounts / sessions | integration account | 1-80 | Alphanumerics, hyphens, underscores, periods, and parenthesis. | > | integrationServiceEnvironments | resource group | 1-80 | Alphanumerics, hyphens, periods, and underscores. | > | integrationServiceEnvironments / managedApis | integration service environment | 1-80 | Alphanumerics, hyphens, periods, and underscores. |
-> | workflows | resource group | 1-80 | Alphanumerics, hyphens, underscores, periods, and parenthesis. |
+> | workflows | resource group | 1-43 | Alphanumerics, hyphens, underscores, periods, and parenthesis. |
## Microsoft.MachineLearning
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 07/29/2021 Last updated : 01/28/2022
To work with tags through the Azure REST API, use:
* [Tags - Get At Scope](/rest/api/resources/tags/getatscope) (GET operation) * [Tags - Delete At Scope](/rest/api/resources/tags/deleteatscope) (DELETE operation)
+## SDKs
+
+For samples of applying tags with SDKs, see:
+
+* [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/resourcemanager/Azure.ResourceManager/samples/Sample2_ManagingResourceGroups.md)
+* [Java](https://github.com/Azure-Samples/resources-java-manage-resource-group/blob/master/src/main/java/com/azure/resourcemanager/resources/samples/ManageResourceGroup.java)
+* [JavaScript](https://github.com/Azure-Samples/azure-sdk-for-js-samples/blob/main/samples/resources/resources_example.ts)
+* [Python](https://github.com/Azure-Samples/resource-manager-python-resources-and-groups)
+ ## Inherit tags Tags applied to the resource group or subscription aren't inherited by the resources. To apply tags from a subscription or resource group to the resources, see [Azure Policies - tags](tag-policies.md).
azure-resource-manager Test Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/test-toolkit.md
Title: ARM template test toolkit description: Describes how to run the Azure Resource Manager template (ARM template) test toolkit on your template. The toolkit lets you see if you have implemented recommended practices. Previously updated : 07/16/2021 Last updated : 01/28/2022
The toolkit contains four sets of tests:
- [Test cases for createUiDefinition.json](createUiDefinition-test-cases.md) - [Test cases for all files](all-files-test-cases.md)
+> [!NOTE]
+> The test toolkit is only available for ARM templates. To validate Bicep files, use the [Bicep linter](../bicep/linter.md).
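For example, the linter runs automatically when a Bicep file is built; a quick way to surface its diagnostics from the command line (the file name below is a placeholder) is:

```azurecli-interactive
# Building a Bicep file runs the Bicep linter and surfaces its diagnostics as warnings.
az bicep build --file main.bicep
```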
+ ### Microsoft Learn To learn more about the ARM template test toolkit, and for hands-on guidance, see [Validate Azure resources by using the ARM Template Test Toolkit](/learn/modules/arm-template-test) on **Microsoft Learn**.
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/purchasing-models.md
The following table and chart compare and contrast the vCore-based and the DTU-b
|**Purchasing model**|**Description**|**Best for**| |||| |DTU-based|This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see [What are DTUs and eDTUs?](purchasing-models.md#dtu-based-purchasing-model).|Customers who want simple, preconfigured resource options|
-|vCore-based|This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs.|Customers who value flexibility, control, and transparency|
+|vCore-based|This model allows you to independently choose and scale compute and storage resources. The vCore-based purchasing model allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs. Newer capabilities (e.g. Hyperscale, serverless) are only available in the vCore model.|Customers who value flexibility, control, and transparency|
|||| ![Pricing model comparison](./media/purchasing-models/pricing-model.png)
azure-sql Single Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-overview.md
Last updated 04/08/2019
# What is a single database in Azure SQL Database? [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-The single database resource type creates a database in Azure SQL Database with its own set of resources and is managed via a [server](logical-servers.md). With a single database, each database is isolated and portable. Each has its own service tier within the [DTU-based purchasing model](service-tiers-dtu.md) or [vCore-based purchasing model](service-tiers-vcore.md) and a guaranteed compute size.
+The single database resource type creates a database in Azure SQL Database with its own set of resources and is managed via a [server](logical-servers.md). With a single database, each database is isolated, using a dedicated database engine. Each has its own service tier within the [DTU-based purchasing model](service-tiers-dtu.md) or [vCore-based purchasing model](service-tiers-vcore.md) and a compute size defining the resources allocated to the database engine.
Single database is a deployment model for Azure SQL Database. The other is [elastic pools](elastic-pool-overview.md).
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/glossary-terms.md
Title: Glossary of terms
description: A glossary of terms for working with Azure SQL Database, Azure SQL Managed Instance, and SQL on Azure VM. -+ ms.devlang:
Previously updated : 5/18/2021 Last updated : 12/15/2021
-# Azure SQL Database glossary of terms
+# Azure SQL glossary of terms
## Azure SQL Database
-|Context|Term|More information|
+|Context|Term|Definition|
|:|:|:|
-|Azure service|Azure SQL Database or SQL Database|[Azure SQL Database](database/sql-database-paas-overview.md)|
-|Purchasing model|DTU-based purchasing model|[DTU-based purchasing model](database/service-tiers-dtu.md)|
-||vCore-based purchasing model|[vCore-based purchasing model](database/service-tiers-sql-database-vcore.md)|
-|Deployment option |Single database|[Single databases](database/single-database-overview.md)|
-||Elastic pool|[Elastic pool](database/elastic-pool-overview.md)|
-|Service tier|Basic, Standard, Premium, General Purpose, Hyperscale, Business Critical|For service tiers in the vCore model, see [SQL Database service tiers](database/service-tiers-sql-database-vcore.md#service-tiers). For service tiers in the DTU model, see [DTU model](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers).|
-|Compute tier|Serverless compute|[Serverless compute](database/service-tiers-sql-database-vcore.md#compute-tiers)
-||Provisioned compute|[Provisioned compute](database/service-tiers-sql-database-vcore.md#compute-tiers)
-|Compute generation|Gen5, M-series, Fsv2-series, DC-series|[Hardware generations](database/service-tiers-sql-database-vcore.md#hardware-generations)
-|Server entity| Server |[Logical SQL servers](database/logical-servers.md)|
-|Resource type|vCore|A CPU core provided to the compute resource for a single database, elastic pool. |
-||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources available for a single database, or elastic pool. Storage size is the maximum amount of storage available for a single database, or elastic pool. For sizing options in the vCore model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md). (../managed-instance/resource-limits.md). For sizing options in the DTU model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
+|Azure service|Azure SQL Database |[Azure SQL Database](database/sql-database-paas-overview.md) is a fully managed platform as a service (PaaS) database that handles most database management functions such as upgrading, patching, backups, and monitoring without user involvement.|
+|Database engine | |The database engine used in Azure SQL Database is the most recent stable version of the same database engine shipped as the Microsoft SQL Server product. Some database engine features are exclusive to Azure SQL Database or are available before they are shipped with SQL Server. The database engine is configured and optimized for use in the cloud. In addition to core database functionality, Azure SQL Database provides cloud-native capabilities such as hyperscale and serverless compute.|
+|Server entity| Logical server | A [logical server](database/logical-servers.md) is a construct that acts as a central administrative point for a collection of databases in Azure SQL Database and Azure Synapse Analytics. All databases managed by a server are created in the same region as the server. A server is a purely logical concept: a logical server is *not* a machine running an instance of the database engine. There is no instance-level access or instance features for a server. |
+|Deployment option ||Databases may be deployed individually or as part of an elastic pool. You may move existing databases in and out of elastic pools. |
+||Elastic pool|[Elastic pools](database/elastic-pool-overview.md) are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single logical server. The databases share a set allocation of resources at a set price.|
+||Single database|If you deploy [single databases](database/single-database-overview.md), each database is isolated, using a dedicated database engine. Each has its own service tier within your selected purchasing model and a compute size defining the resources allocated to the database engine.|
+|Purchasing model|| Azure SQL Database has two purchasing models. The purchasing model defines how you scale your database and how you are billed for compute, storage, etc. |
+||DTU-based purchasing model|The [Database Transaction Unit (DTU)-based purchasing model](database/service-tiers-dtu.md) is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. |
+||vCore-based purchasing model (recommended)| A virtual core (vCore) represents a logical CPU. The [vCore-based purchasing model](database/service-tiers-vcore.md) offers greater control over the hardware configuration to better match compute and memory requirements of the workload, pricing discounts for [Azure Hybrid Benefit (AHB)](azure-hybrid-benefit.md) and [Reserved Instance (RI)](database/reserved-capacity-overview.md), more granular scaling, and greater transparency in hardware details. Newer capabilities (for example, hyperscale, serverless) are only available in the vCore model. |
+|Service tier|| The service tier defines the storage architecture, storage and I/O limits, and business continuity options. Options for service tiers vary by purchasing model. |
+||DTU-based service tiers | [Basic, standard, and premium service tiers](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers) are available in the DTU-based purchasing model.|
+||vCore-based service tiers (recommended) |[General purpose, business critical, and hyperscale service tiers](database/service-tiers-sql-database-vcore.md#service-tiers) are available in the vCore-based purchasing model (recommended).|
+|Compute tier|| The compute tier determines whether resources are continuously available (provisioned) or autoscaled (serverless). Compute tier availability varies by purchasing model and service tier. Only the vCore purchasing model's general purpose service tier makes serverless compute available.|
+||Provisioned compute|The [provisioned compute tier](database/service-tiers-sql-database-vcore.md#compute-tiers) provides a specific amount of compute resources that are continuously provisioned independent of workload activity. Under the provisioned compute tier, you are billed at a fixed price per hour.
+||Serverless compute| The [serverless compute tier](database/serverless-tier-overview.md) autoscales compute resources based on workload activity and bills for the amount of compute used per second. Azure SQL Database serverless is currently available in the vCore purchasing model's general purpose service tier with Generation 5 hardware or newer.|
+|Hardware generation| Available hardware configurations | The vCore-based purchasing model allows you to select the appropriate hardware generation for your workload. [Hardware configuration options](database/service-tiers-sql-database-vcore.md#hardware-generations) include Gen5, M-series, Fsv2-series, and DC-series.|
+|Compute size (service objective) ||Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc.
+||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware generation for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).|
+||DTU-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
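As an illustration of these sizing options, the following hedged Azure CLI sketch creates an elastic pool under each purchasing model. The names and capacity values are placeholders; see the resource-limit pages linked above for valid service tier and capacity combinations.

```azurecli-interactive
# vCore-based elastic pool: General Purpose tier on Gen5 hardware with 4 vCores reserved for the pool.
az sql elastic-pool create --resource-group "myResourceGroup" --server "my-sql-server" --name "myVcorePool" --edition "GeneralPurpose" --family "Gen5" --capacity 4

# DTU-based elastic pool: Standard tier with 100 eDTUs reserved for the pool.
az sql elastic-pool create --resource-group "myResourceGroup" --server "my-sql-server" --name "myDtuPool" --edition "Standard" --capacity 100
```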
## Azure SQL Managed Instance |Context|Term|More information| |:|:|:|
-|Azure service|Azure SQL Managed Instance|[SQL Managed Instance](managed-instance/sql-managed-instance-paas-overview.md)|
-|Purchasing model|vCore-based purchasing model|[vCore-based purchasing model](managed-instance/service-tiers-managed-instance-vcore.md)|
-|Deployment option |Single Instance|[Single Instance](managed-instance/sql-managed-instance-paas-overview.md)|
-||Instance pool (preview)|[Instance pools](managed-instance/instance-pools-overview.md)|
-|Service tier|General Purpose, Business Critical|[SQL Managed Instance service tiers](managed-instance/sql-managed-instance-paas-overview.md#service-tiers)|
-|Compute tier|Provisioned compute|[Provisioned compute](managed-instance/service-tiers-managed-instance-vcore.md#compute-tiers)|
-|Compute generation|Gen5|[Hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations)
-|Server entity|Managed instance or instance| N/A as the SQL Managed Instance is in itself the server |
-|Resource type|vCore|A CPU core provided to the compute resource for SQL Managed Instance.|
-||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources for SQL Managed Instance. Storage size is the maximum amount of storage available for a SQL Managed Instance. For sizing options, [SQL Managed Instances](managed-instance/resource-limits.md). |
+|Azure service|Azure SQL Managed Instance | [Azure SQL Managed Instance](managed-instance/sql-managed-instance-paas-overview.md) is a fully managed platform as a service (PaaS) deployment option of Azure SQL. It gives you an instance of SQL Server, including the SQL Server Agent, but removes much of the overhead of managing a virtual machine. Most of the features available in SQL Server are available in SQL Managed Instance. [Compare the features in Azure SQL Database and Azure SQL Managed Instance](database/features-comparison.md). |
+|Database engine | |The database engine used in Azure SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine. Some database engine features are exclusive to managed instances or are available in managed instances before they are shipped with SQL Server. Managed instances provide cloud-native capabilities and integrations such as a native [virtual network (VNet)](../virtual-network/virtual-networks-overview.md) implementation, automatic patching and version updates, [automated backups](database/automated-backups-overview.md), and [high availability](database/high-availability-sla.md). |
+|Server entity|Managed instance | Each managed instance is itself an instance of SQL Server. Databases created on a managed instance are colocated with one another, and you may run cross-database queries. You can connect to the managed instance and use instance-level features such as linked servers and the SQL Server Agent. |
+|Deployment option ||Managed instances may be deployed individually or as part of an instance pool (preview). Managed instances cannot currently be moved into, between, or out of instance pools.|
+||Single instance| A single [managed instance](managed-instance/sql-managed-instance-paas-overview.md) is deployed to a dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These machines form a [virtual cluster](managed-instance/connectivity-architecture-overview.md#high-level-connectivity-architecture). Multiple managed instances can be deployed into a single virtual cluster if desired. |
+||Instance pool (preview)|[Instance pools](managed-instance/instance-pools-overview.md) enable you to deploy multiple managed instances to the same virtual machine. Instance pools enable you to migrate smaller and less compute-intensive workloads to the cloud without consolidating them in a single larger managed instance. |
+|Purchasing model|vCore-based purchasing model| SQL Managed Instance is available under the [vCore-based purchasing model](managed-instance/service-tiers-managed-instance-vcore.md). [Azure Hybrid Benefit](azure-hybrid-benefit.md) is available for managed instances. |
+|Service tier| vCore-based service tiers| SQL Managed Instance offers two service tiers. Both service tiers guarantee 99.99% availability and enable you to independently select storage size and compute capacity. Select either the [general purpose or business critical service tier](managed-instance/sql-managed-instance-paas-overview.md#service-tiers) for a managed instance based upon your performance and latency requirements.|
+|Compute|Provisioned compute| SQL Managed Instance provides a specific amount of [compute resources](managed-instance/service-tiers-managed-instance-vcore.md#compute) that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour. |
+|Hardware generation|Available hardware configurations| SQL Managed Instance [hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. |
+|Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware generation for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
+
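For illustration only, a managed instance with a specific compute size might be created with the Azure CLI roughly as shown below. This is a hedged sketch: the names, subnet ID, and sizing values are placeholders, and managed instance creation has networking prerequisites covered in the linked articles.

```azurecli-interactive
# Example: General Purpose managed instance on Gen5 hardware with 8 vCores and 256 GB of storage.
az sql mi create --resource-group "myResourceGroup" --name "my-managed-instance" --admin-user "azureuser" --admin-password "<strong-password>" --subnet "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/ManagedInstanceSubnet" --edition "GeneralPurpose" --family "Gen5" --capacity 8 --storage 256GB
```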
+## SQL Server on Azure VMs
+|Context|Term|More information|
+|:|:|:|
+|Azure service|SQL Server on Azure Virtual Machines (VMs) | [SQL Server on Azure VMs](virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) enables you to use full versions of SQL Server in the cloud without having to manage any on-premises hardware. SQL Server VMs simplify licensing costs when you pay as you go. You have both SQL Server and OS access with some automated manageability features for SQL Server VMs, such as the [SQL IaaS Agent extension](virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md).|
+| Server entity | Virtual machine or VM | Azure VMs run in many geographic regions around the world. They also offer various machine sizes. The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. |
+| Image | Windows VMs or Linux VMs | You can choose to deploy SQL Server VMs with [Windows-based images](virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) or [Linux-based images](virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md). Image selection specifies both the OS version and SQL Server edition for your SQL Server VM. |
+| Pricing | | Pricing for SQL Server on Azure VMs is based on SQL Server licensing, operating system (OS), and virtual machine cost. You can [reduce costs](virtual-machines/windows/pricing-guidance.md#reduce-costs) by optimizing your VM size and shutting down your VM when possible. |
+| | SQL Server licensing cost | Choose the appropriate [free](virtual-machines/windows/pricing-guidance.md#free-licensed-sql-server-editions) or [paid](virtual-machines/windows/pricing-guidance.md#paid-sql-server-editions) SQL Server edition for your usage and requirements. For paid editions, you may [pay per usage](virtual-machines/windows/pricing-guidance.md#pay-per-usage) (also known as pay as you go) or use [Azure Hybrid Benefit](virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change.md). |
+| | OS and virtual machine cost | OS and virtual machine cost is based upon factors including your choice of image, VM size, and storage configuration. |
+| VM configuration | | You need to configure settings including security, storage, and high availability/disaster recovery for your SQL Server VM. The easiest way to configure a SQL Server VM is to use one of our Marketplace images, but you can also use this [quick checklist](virtual-machines/windows/performance-guidelines-best-practices-checklist.md) for a series of best practices and guidelines to navigate these choices. |
+| | VM size | [VM size](virtual-machines/windows/performance-guidelines-best-practices-vm-size.md) determines processing power, memory, and storage capacity. You can [collect a performance baseline](virtual-machines/windows/performance-guidelines-best-practices-collect-baseline.md) and/or use the [SKU recommendation](/sql/dma/dma-sku-recommend-sql-db) tool to help select the best VM size for your workload. |
+| | Storage configuration | Your storage configuration options are determined by your selection of VM size and selection of storage settings including disk type, caching settings, and disk striping. Learn how to choose a VM size with [enough storage scalability](virtual-machines/windows/performance-guidelines-best-practices-storage.md) for your workload and a mixture of disks (usually in a storage pool) that meet the capacity and performance requirements of your business. |
+| | Security considerations | You can enable Microsoft Defender for SQL, integrate Azure Key Vault, control access, and secure connections to your SQL Server VM. Learn [security guidelines](virtual-machines/windows/security-considerations-best-practices.md) to establish secure access to SQL Server VMs. |
+| SQL IaaS Agent extension | | The [SQL IaaS Agent extension](virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) (SqlIaasExtension) runs on SQL Server VMs to automate management and administration tasks. There's no extra cost associated with the extension. |
+| | Automated patching | [Automated Patching](virtual-machines/windows/automated-patching.md) establishes a maintenance window for a SQL Server VM when security updates will be automatically applied by the SQL IaaS Agent extension. Note that there may be other mechanisms for applying Automatic Updates. If you configure automated patching using the SQL IaaS Agent extension, you should ensure that there are no other conflicting update schedules. |
+| | Automated backup | [Automated Backup v2](virtual-machines/windows/automated-backup.md) automatically configures Managed Backup to Microsoft Azure for all existing and new databases on a SQL Server VM running SQL Server 2016 or later Standard, Enterprise, or Developer editions. |
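As a quick illustration of image selection, the following hedged Azure CLI sketch lists the available SQL Server Marketplace images and creates a VM from one of them. The image URN (SQL Server 2019 Developer edition on Windows Server 2019), VM size, and credentials shown are examples only.

```azurecli-interactive
# List SQL Server images available in the Marketplace (using --all can take a few minutes).
az vm image list --publisher "MicrosoftSQLServer" --all --output table

# Create a VM from an example SQL Server 2019 Developer image URN.
az vm create --resource-group "myResourceGroup" --name "mySqlVm" --image "MicrosoftSQLServer:sql2019-ws2019:sqldev:latest" --size "Standard_D4s_v3" --admin-username "azureuser" --admin-password "<strong-password>"
```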
azure-vmware Concepts Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-security-recommendations.md
Title: Concepts - Security recommendations for Azure VMware Solution
-Description: Learn about tips and best practices to help protect Azure VMware Solution deployments from vulnerabilities and malicious actors.
+description: Learn about tips and best practices to help protect Azure VMware Solution deployments from vulnerabilities and malicious actors.
Last updated 01/10/2022
See the following information for recommendations to secure your HCX deployment.
| **Recommendation** | **Comments** | | :-- | :-- |
-| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
+| Stay current with HCX service updates | HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
Use the Azure CLI [az webpubsub hub create](/cli/azure/webpubsub/hub#az_webpubsu
> Replace &lt;domain-name&gt; with the name ngrok printed. ```azurecli-interactive
-az webpubsub hub create -n "<your-unique-resource-name>" -g "myResourceGroup" --hub-name "chat" --event-handler url-template="https://<domain-name>.ngrok.io/eventHandler" user-event-pattern="*" system-event="connected"
+az webpubsub hub create -n "<your-unique-resource-name>" -g "myResourceGroup" --hub-name "SampleChatHub" --event-handler url-template="https://<domain-name>.ngrok.io/eventHandler" user-event-pattern="*" system-event="connected"
``` After the update is completed, open the home page http://localhost:8080/index.html, input your user name, and you'll see the connected message printed in the server console.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
Title: Manage Backups with Azure role-based access control
description: Use Azure role-based access control to manage access to backup management operations in Recovery Services vault. Previously updated : 01/12/2022 Last updated : 01/27/2022+++ # Use Azure role-based access control to manage Azure Backup recovery points
The following table captures the Backup management actions and corresponding min
| Modify backup policy of Azure VM backup | Backup Contributor | Recovery Services vault | | Delete backup policy of Azure VM backup | Backup Contributor | Recovery Services vault | | Stop backup (with retain data or delete data) on VM backup | Backup Contributor | Recovery Services vault |
+| | Virtual Machine Contributor | Source VM that was backed up | Alternatively, instead of a built-in role, you can consider a custom role that has the following permission: Microsoft.Compute/virtualMachines/write |
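If you take the custom role approach mentioned above, a minimal sketch of such a role definition with the Azure CLI might look like the following. The role name, description, and assignable scope are placeholders for illustration.

```azurecli-interactive
# Example custom role that grants only the virtual machine write permission needed to stop VM backup.
az role definition create --role-definition '{
    "Name": "VM Backup Write (example)",
    "IsCustom": true,
    "Description": "Grants write access to virtual machines.",
    "Actions": [ "Microsoft.Compute/virtualMachines/write" ],
    "NotActions": [],
    "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```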
### Minimum role requirements for the Azure File share backup
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 01/14/2022 Last updated : 01/27/2022
To create a backup policy:
You can enable auto-protection to automatically back up all existing and future databases to a standalone SQL Server instance or to an Always On availability group.
-* There's no limit on the number of databases you can select for auto-protection at a time. Discovery typically runs every eight hours. However, you can discover and protect new databases immediately if you manually run a discovery by selecting the **Rediscover DBs** option.
+* There's no limit on the number of databases you can select for auto-protection at a time. Discovery typically runs every eight hours. The auto-protection of a newly discovered database will be triggered within 32 hours. However, you can discover and protect new databases immediately if you manually run a discovery by selecting the **Rediscover DBs** option.
+* If the auto-protection operation on the newly discovered database fails, it'll be retried three times. If all three retries fail, the database won't be protected.
* You can't selectively protect or exclude databases from protection in an instance at the time you enable auto-protection. * If your instance already includes some protected databases, they'll remain protected under their respective policies even after you turn on auto-protection. All unprotected databases added later will have only a single policy that you define at the time of enabling auto-protection, listed under **Configure Backup**. However, you can change the policy associated with an auto-protected database later.
+* If the **Configure Protection** operation for the newly discovered database fails, it won't raise an alert. However, a failed backup job could be found on the **Backup jobs** page.
To enable auto-protection:
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 01/04/2022 Last updated : 01/27/2022
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
|Database size supported (beyond this, performance issues may come up) | 6 TB* | |Number of files supported in a database | 1000 | |Number of full backups supported per day | One scheduled backup. <br><br> Three on-demand backups. <br><br> We recommend not to trigger more than three backups per day. However, to allow user retries in case of failed attempts, hard limit for on-demand backups is set to nine attempts. |
+| Log shipping | When you enable [log shipping](/sql/database-engine/log-shipping/about-log-shipping-sql-server?view=sql-server-ver15&preserve-view=true) on the SQL Server database that you're backing up, we recommend that you disable log backups in the backup policy. This is because log shipping (which automatically sends transaction logs from the primary to the secondary database) will interfere with the log backups enabled through Azure Backup. <br><br> Therefore, if you enable log shipping, ensure that your policy only has full and/or differential backups enabled. |
_*The database size limit depends on the data transfer rate that we support and the backup time limit configuration. It's not a hard limit. [Learn more](#backup-throughput-performance) on backup throughput performance._
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-vm-copy-paste.md
Title: 'Copy and paste to and from a Windows virtual machine: Azure Bastion'
-description: Learn how copy and paste to and from an Windows VM using Bastion.
+description: Learn how to copy and paste to and from a Windows VM using Bastion.
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/connect-native-client-windows.md
-# Connect to a VM using Bastion and the native client on your Windows computer (Preview)
+# Connect to a VM using Bastion and the native client on your workstation (Preview)
-Azure Bastion now offers support for connecting to target VMs in Azure using a native RDP or SSH client on your Windows workstation. This feature lets you connect to your target VMs via Bastion using Azure CLI and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). This article helps you configure Bastion with the required settings, and then connect to a VM in the VNet. For more information, see the [What is Azure Bastion?](bastion-overview.md).
+Azure Bastion now offers support for connecting to target VMs in Azure using a native RDP or SSH client on your local workstation. This feature lets you connect to your target VMs via Bastion using Azure CLI and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). This article helps you configure Bastion with the required settings, and then connect to a VM in the VNet. For more information, see the [What is Azure Bastion?](bastion-overview.md).
> [!NOTE] > This configuration requires the Standard SKU for Azure Bastion.
Azure Bastion now offers support for connecting to target VMs in Azure using a n
Currently, this feature has the following limitations:
-* Native client support is not yet available for use from your local Linux workstation. If you are connecting to your target VM from a Linux workstation, use the Azure portal experience.
- * Signing in using an SSH private key stored in Azure Key Vault is not supported with this feature. Download your private key to a file on your local machine before signing in to your Linux VM using an SSH key pair. ## <a name="prereq"></a>Prerequisites
Before you begin, verify that you have met the following criteria:
* [Configure your Windows VM to be Azure AD-joined](../active-directory/devices/concept-azure-ad-join.md). * [Configure your Windows VM to be hybrid Azure AD-joined](../active-directory/devices/concept-azure-ad-join-hybrid.md).
-## Configure Bastion
+## <a name="configure"></a>Configure Bastion
Follow the instructions that pertain to your environment.
-### To modify an existing bastion host
+### <a name="modify-host"></a>To modify an existing bastion host
If you have already configured Bastion for your VNet, modify the following settings:
If you have already configured Bastion for your VNet, modify the following setti
:::image type="content" source="./media/connect-native-client-windows/update-host.png" alt-text="Settings for updating an existing host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/update-host-expand.png":::
-### To configure a new bastion host
+### <a name="configure-new"></a>To configure a new bastion host
If you don't already have a bastion host configured, see [Create a bastion host](tutorial-create-host-portal.md#createhost). When configuring the bastion host, specify the following settings:
If you don't already have a bastion host configured, see [Create a bastion host]
:::image type="content" source="./media/connect-native-client-windows/new-host.png" alt-text="Settings for a new bastion host with Native Client Support box selected." lightbox="./media/connect-native-client-windows/new-host-expand.png":::
-## Verify roles and ports
+## <a name="verify"></a>Verify roles and ports
Verify that the following roles and ports are configured in order to connect.
-### Required roles
+### <a name="roles"></a>Required roles
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine.
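For reference, a role assignment like the ones above can be granted with the Azure CLI; in this hedged example, the assignee and the VM resource ID are placeholders.

```azurecli-interactive
# Grant the Reader role on the target VM to the user who will connect through Bastion.
az role assignment create --assignee "user@contoso.com" --role "Reader" --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
```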
To connect to a Windows VM using native client support, you must have the follow
* Inbound port: RDP (3389) *or* * Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
-## <a name="connect"></a>Connect to a VM
+## <a name="connect"></a>Connect to a VM from a Windows local workstation
-This section helps you connect to your virtual machine. Use the steps that correspond to the type of VM you want to connect to.
+This section helps you connect to your virtual machine from a Windows local workstation. Use the steps that correspond to the type of VM you want to connect to.
1. Sign in to your Azure account and select your subscription containing your Bastion resource.
This section helps you connect to your virtual machine. Use the steps that corre
az account set --subscription "<subscription ID>" ```
-### Connect to a Linux VM
+### <a name="connect-linux"></a>Connect to a Linux VM
1. Sign in to your target Linux VM using one of the following options.
This section helps you connect to your virtual machine. Use the steps that corre
> If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command. >
- * If you signing in to an Azure AD login-enabled VM, use the following command. To learn more about how to use Azure AD to sign in to your Azure Linux VMs, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+ * If you are signing in to an Azure AD login-enabled VM, use the following command. To learn more about how to use Azure AD to sign in to your Azure Linux VMs, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
```azurecli-interactive az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "AAD"
This section helps you connect to your virtual machine. Use the steps that corre
```azurecli-interactive az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "password" --username "<Username>" ```
+
+ > [!NOTE]
+ > VM sessions using the **az network bastion ssh** command do not support file transfer. To use file transfer with SSH over Bastion, please see the section on the **az network bastion tunnel** command further below.
+ >
-### Connect to a Windows VM
+### <a name="connect-windows"></a>Connect to a Windows VM
1. Sign in to your target Windows VM using one of the following options.
This section helps you connect to your virtual machine. Use the steps that corre
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" ```
-1. Once you sign in to your target VM, the native client on your workstation will open up with your VM session; MSTSC for RDP sessions, and SSH CLI extension for SSH sessions.
+1. Once you sign in to your target VM, the native client on your workstation will open up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+
+## <a name="connect-tunnel"></a>Connect to a VM using the *az network bastion tunnel* command
+
+This section helps you connect to your virtual machine using the *az network bastion tunnel* command, which allows you to:
+* Use native clients on *non*-Windows local workstations (for example, a Linux PC)
+* Use a native client of your choice
+* Set up concurrent VM sessions with Bastion
+* Access file transfer for SSH sessions
+
+1. Sign in to your Azure account and select your subscription containing your Bastion resource.
+
+ ```azurecli-interactive
+ az login
+ az account list
+ az account set --subscription "<subscription ID>"
+ ```
+
+2. Open the tunnel to your target VM using the following command:
+
+ ```azurecli-interactive
+ az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
+ ```
+3. Connect and log in to your target VM using SSH or RDP, the native client of your choice, and the local machine port you specified in Step 2.
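For example, once the tunnel from Step 2 is running, you could open an SSH session through it with your native client as shown below; the username and local port are placeholders. On a Windows workstation, an RDP client such as mstsc could similarly be pointed at 127.0.0.1:&lt;LocalMachinePort&gt;.

```
# Connect to the target VM through the local end of the Bastion tunnel.
ssh <username>@127.0.0.1 -p <LocalMachinePort>
```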
## Next steps
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/disk-encryption.md
Batch will apply one of these disk encryption technologies on compute nodes, bas
You won't be able to specify which encryption method will be applied to the nodes in your pool. Instead, you provide the target disks you want to encrypt on their nodes, and Batch can choose the appropriate encryption method, ensuring the specified disks are encrypted on the compute node. > [!IMPORTANT]
-> If you are creating your pool with a [custom image](batch-sig-images.md), you can enable disk encryption only if using Windows VMs.
+> If you are creating your pool with a Linux [custom image](batch-sig-images.md), you can enable disk encryption only if your pool is using an [Encryption At Host Supported VM size](../virtual-machines/disk-encryption.md#supported-vm-sizes).
+> Encryption At Host is not currently supported on User Subscription Pools until the feature becomes [publicly available in Azure](../virtual-machines/disks-enable-host-based-encryption-portal.md#prerequisites).
## Azure portal
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 1/19/2022 Last updated : 1/28/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
## January 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 22-01 | [5009557] | Latest Cumulative Update(LCU) | 6.39 | Jan 11, 2022 |
+| Rel 22-01 | [5010791] | Latest Cumulative Update(LCU) | 6.39 | Jan 18, 2022 |
| Rel 22-01 | [5006671] | IE Cumulative Updates | 2.118, 3.105, 4.98 | Oct 12, 2021 |
-| Rel 22-01 | [5009555] | Latest Cumulative Update(LCU) | 7.7 | Jan 11, 2022 |
-| Rel 22-01 | [5009546] | Latest Cumulative Update(LCU) | 5.63 | Jan 11, 2022 |
+| Rel 22-01 | [5010796] | Latest Cumulative Update(LCU) | 7.7 | Jan 17, 2022 |
+| Rel 22-01 | [5010790] | Latest Cumulative Update(LCU) | 5.63 | Jan 17, 2022 |
| Rel 22-01 | [5008867] | .NET Framework 3.5 Security and Quality Rollup | 2.118 | Jan 11, 2022 | | Rel 22-01 | [5008860] | .NET Framework 4.5.2 Security and Quality Rollup | 2.118 | Jan 11, 2022 | | Rel 22-01 | [5008868] | .NET Framework 3.5 Security and Quality Rollup | 4.98 | Jan 11, 2022 |
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 22-01 | [5008865] | .NET Framework 3.5 Security and Quality Rollup | 3.105 | Jan 11, 2022 | | Rel 22-01 | [5008869] | . NET Framework 4.5.2 Security and Quality Rollup | 3.105 | Jan 11, 2022 | | Rel 22-01 | [5008873] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.39 | Jan 11, 2022 |
+| Rel 22-01 | [5008882] | .NET Framework 4.8 Security and Quality Rollup | 7.7 | Jan 11, 2022 |
| Rel 22-01 | [5009610] | Monthly Rollup | 2.118 | Jan 11, 2022 | | Rel 22-01 | [5009586] | Monthly Rollup | 3.105 | Jan 11, 2022 | | Rel 22-01 | [5009624] | Monthly Rollup | 4.98 | Jan 11, 2022 | | Rel 22-01 | [5001401] | Servicing Stack update | 3.105 | Apr 13, 2021 | | Rel 22-01 | [5001403] | Servicing Stack update | 4.98 | Apr 13, 2021 |
-| Rel 22-01OOB | [4578013] | Standalone Security Update | 4.98 | Aug 19, 2020 |
+| Rel 22-01 | [4578013] | Standalone Security Update | 4.98 | Aug 19, 2020 |
| Rel 22-01 | [5005698] | Servicing Stack update | 5.63 | Sep 14, 2021 | | Rel 22-01 | [5006749] | Servicing Stack update | 2.118 | July 13, 2021 |
-| Rel 22-01 | 5008287 | Servicing Stack update | 6.39 | Aug 10, 2021 |
| Rel 22-01 | [4494175] | Microcode | 5.63 | Sep 1, 2020 | | Rel 22-01 | [4494174] | Microcode | 6.39 | Sep 1, 2020 |
-[5009557]: https://support.microsoft.com/kb/5009557
+
+[5010791]: https://support.microsoft.com/kb/5010791
[5006671]: https://support.microsoft.com/kb/5006671
-[5009555]: https://support.microsoft.com/kb/5009555
-[5009546]: https://support.microsoft.com/kb/5009546
+[5010796]: https://support.microsoft.com/kb/5010796
+[5010790]: https://support.microsoft.com/kb/5010790
[5008867]: https://support.microsoft.com/kb/5008867 [5008860]: https://support.microsoft.com/kb/5008860 [5008868]: https://support.microsoft.com/kb/5008868
The following tables show the Microsoft Security Response Center (MSRC) updates
[5008865]: https://support.microsoft.com/kb/5008865 [5008869]: https://support.microsoft.com/kb/5008869 [5008873]: https://support.microsoft.com/kb/5008873
+[5008882]: https://support.microsoft.com/kb/5008882
[5009610]: https://support.microsoft.com/kb/5009610 [5009586]: https://support.microsoft.com/kb/5009586 [5009624]: https://support.microsoft.com/kb/5009624
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494174]: https://support.microsoft.com/kb/4494174 + ## December 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
-# Get facial pose events
+# Get facial pose events for lip-sync
> [!NOTE] > At this time, viseme events are available only for English (US) [neural voices](language-support.md#text-to-speech).
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/quickstart-translator.md
params = {
'from': 'en', 'to': ['de', 'it'] }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'api-version': '3.0', 'to': ['de', 'it'] }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
constructed_url = endpoint + path
params = { 'api-version': '3.0' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'to': 'th', 'toScript': 'latn' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'fromScript': 'thai', 'toScript': 'latn' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'to': 'es', 'includeSentenceLength': True }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
constructed_url = endpoint + path
params = { 'api-version': '3.0' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'from': 'en', 'to': 'es' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
params = {
'from': 'en', 'to': 'es' }
-constructed_url = endpoint + path
headers = { 'Ocp-Apim-Subscription-Key': subscription_key,
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
Previously updated : 01/07/2022 Last updated : 01/26/2022 ms.devlang: csharp, python
Simply select a model and click on deploy model in the Deploy model page.
:::image type="content" source="../media/deploy-model.png" alt-text="A screenshot showing the model deployment page in Language Studio." lightbox="../media/deploy-model.png":::
+> [!TIP]
+> If you're using the REST API, see the [quickstart](../quickstart.md?pivots=rest-api#deploy-your-model) and REST API [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob) for examples and more information.
+ **Orchestration workflow project deployments** When you're deploying an orchestration workflow project, a small window will show up for you to confirm your deployment and configure parameters for connected services.
If you're connecting one or more LUIS applications, specify the deployment name,
* The *slot* deployment type requires you to pick between a production and staging slot. * The *version* deployment type requires you to specify the version you have published.
-No configurations are required for custom question answering and CLU connections, or unlinked intents.
+No configurations are required for custom question answering and conversational language understanding connections, or unlinked intents.
LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
You can get the full URL for your endpoint by going to the **Deploy model** page
:::image type="content" source="../media/prediction-url.png" alt-text="Screenshot showing the prediction request and URL" lightbox="../media/prediction-url.png":::
+Add your key to the `Ocp-Apim-Subscription-Key` header value, and replace the query and language parameters.
+
+> [!TIP]
+> As you construct your requests, see the [quickstart](../quickstart.md?pivots=rest-api#query-model) and REST API [reference documentation](https://aka.ms/clu-apis) for more information.
+ ### Use the client libraries (Azure SDK)
+You can also use the client libraries provided by the Azure SDK to send requests to your model.
+ > [!NOTE] > The client library for conversational language understanding is only available for: > * .NET
In a conversations project, you'll get predictions for both your intents and ent
## API response for an orchestration Workflow Project
-An orchestration workflow project returns with the response of the top scoring intent, and the response of the service it is connected to.
+Orchestration workflow projects return the response of the top scoring intent, along with the response of the service they're connected to.
- Within the intent, the *targetKind* parameter lets you determine the type of response that was returned by the orchestrator's top intent (conversation, LUIS, or QnA Maker). - You will get the response of the connected service in the *result* parameter.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
Previously updated : 11/02/2021 Last updated : 01/27/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/how-to/analytics.md
Custom question answering uses Azure diagnostic logging to store the telemetry d
// All QnA Traffic AzureDiagnostics | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="QnAMaker GenerateAnswer" // This OperationName is valid for custom question answering enabled resources
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
| extend answer_ = tostring(parse_json(properties_s).answer) | extend question_ = tostring(parse_json(properties_s).question) | extend score_ = tostring(parse_json(properties_s).score)
let startDate = todatetime('2019-01-01');
let endDate = todatetime('2020-12-31'); AzureDiagnostics | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="QnAMaker GenerateAnswer" // This OperationName is valid for custom question answering enabled resources
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
| where TimeGenerated <= endDate and TimeGenerated >=startDate | extend kbId_ = tostring(parse_json(properties_s).kbId) | extend userId_ = tostring(parse_json(properties_s).userId)
AzureDiagnostics
// All unanswered questions AzureDiagnostics | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
-| where OperationName=="QnAMaker GenerateAnswer" // This OperationName is valid for custom question answering enabled resources
+| where OperationName=="CustomQuestionAnswering QueryKnowledgebases" // This OperationName is valid for custom question answering enabled resources
| extend answer_ = tostring(parse_json(properties_s).answer) | extend question_ = tostring(parse_json(properties_s).question) | extend score_ = tostring(parse_json(properties_s).score)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/text-summarization/overview.md
Previously updated : 11/02/2021 Last updated : 01/26/2022 # What is text summarization (preview) in Azure Cognitive Service for Language?
-Text summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. This extractive summarization feature can produce a summary by extracting sentences that collectively represent the most important or relevant information within a document. It condenses articles, papers, or documents to key sentences.
+Text summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+
+This documentation contains the following article types:
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
+## Text summarization feature
+Text summarization uses extractive text summarization to produce a summary of a document. It extracts sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. For example, it can condense articles, papers, or documents to key sentences.
-## Responsible AI
+As an example, consider the following paragraph of text:
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for language detection](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+*"We're delighted to announce that Cognitive Service for Language service now supports extractive summarization! In general, there are two approaches for automatic text summarization: extractive and abstractive. This feature provides extractive summarization. Text summarization is a feature that produces a text summary by extracting sentences that collectively represent the most important or relevant information within the original content. This feature is designed to shorten content that could be considered too long to read. Extractive summarization condenses articles, papers, or documents to key sentences."*
+
+The text summarization feature would simplify the text into the following key sentences:
++
+## Key features
+
+Text summarization supports the following features:
+
+* **Extracted sentences**: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+* **Rank score**: The rank score indicates how relevant a sentence is to a document's main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+* **Maximum sentences**: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, text summarization will return the three highest-scored sentences.
+
+## Get started with text summarization
+
+To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use text summarization:
-## Next steps
+|Development option |Description | Links |
+||||
+| Language Studio | A web-based platform that enables you to try text summarization without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use the Language Studio](../language-studio.md) |
+| REST API or Client library (Azure SDK) | Integrate text summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use text summarization](quickstart.md) |
-There are two ways to get started using the text summarization feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
+## Input requirements and service limits
+
+* Text summarization takes raw unstructured text for analysis. See the [data and service limits](how-to/call-api.md#data-limits) in the how-to guide for more information.
+* Text summarization works with a variety of written languages. See [language support](language-support.md) for more information.
+
+## Reference documentation and code samples
+
+As you use text summarization in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
+
+## Responsible AI
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for text summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
The following are known issues in the Communication Services Call Automation API
Up to 100 users can join a group call using the JS web calling SDK.
+## Android API emulators
+When using Android API emulators, some crashes are expected.
communication-services Classification Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/classification-concepts.md
-# Job classification concepts
+# Job classification
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-Azure Communication Services Job Router uses a process called **classification** when a Job is submitted. This article describes the different ways a Job can be classified and the effect this process has on it.
+When you submit a job to Job Router, you can either specify the queue, priority, and worker selectors manually or you can specify a classification policy to drive these values.
-## Job classification overview
-
-Job Router uses two primary methods for classifying a Job; static or dynamic. If the calling application has knowledge about the Queue ID, Priority, or Worker Selectors, the Job can be submitted without a Classification Policy; known as **static classification**. If you prefer to let Job Router decide the Queue ID, a Classification Policy can be used to modify the Job's properties; known as **dynamic classification**.
-
-When you submit a Job using the Job Router SDK, the process of classification will result in an event being sent to your Azure Communication Services Event Grid subscription. The events generated as part of the classification lifecycle give insights into what actions the Job Router is taking. For example, a successful classification will produce a **RouterJobClassified** and a failure will produce a **RouterJobClassificationFailed**.
+If you choose to use a classification policy, you will receive a [JobClassified Event][job_classified_event] or a [JobClassificationFailed Event][job_classify_failed_event] with the result. Once the job has been successfully classified, it will be automatically queued. If the classification process fails, you'll need to intervene to fix it.
The process of classifying a Job involves optionally setting the following properties: -- Queue ID - Priority - Worker Selectors
+- Queue ID
-## Static classification
-
-Submitting a Job with a pre-defined Queue ID, Priority, and Worker selectors allows you to get started quickly. Job Router will not modify these properties after submitting the Job unless you update it by specifying a Classification Policy prior to assignment to a Worker. You can update the Classification Policy property of a Job after submission, which will trigger the dynamic classification process.
-
-> [!NOTE]
-> You have the option of overriding the result of dynamic classification by using the Job Router SDK to update the Job properties manually. For example, you could specify a static Queue ID initially, then update the Job with a Classification Policy ID to be dynamically classified, then override the Queue ID.
-
-## Dynamic classification
-
-Specifying a classification policy when you submit a Job will allow Job Router to dynamically assign the Queue ID, Priority, and potentially modify the Worker selectors. This type of classification is beneficial since the calling application does not need to have knowledge of any Job properties including the Queue ID at runtime.
+## Prioritization rule
-### Queue selectors
+The priority of a Job can be resolved during classification using one of many rule engines.
-A Classification Policy can reference a `QueueSelector`, which is used by the classification process to determine which Queue ID will be chosen for a particular Job. The following `QueueSelector` types exist in Job Router and are applicable options to the Queue selection process during classification:
+See the [Rule concepts](router-rule-concepts.md) page for more information.
-**QueueLabelSelector -** When you create a Job Router Queue you can specify labels to help the Queue selection process during Job classification. This type of selector uses a collection of `LabelSelectorAttachment` types to offer the most flexibility in selecting the Queue during the classification process. Use this selector to allow the Job classification process to select the Queue ID based on its labels. For more information See the section [below](#using-labels-and-selectors-in-classification).
+## Worker selectors
-**QueueIdSelector -** This selector will enable the use of one of many rule engines to determine the Queue ID of the Job based on the result of the rule. Read the [RouterRule concepts](router-rule-concepts.md) page for more information.
+Each job carries a collection of worker selectors that are evaluated against the worker labels. These are conditions a worker must satisfy to be a match.
+You can use a classification policy to attach these conditions to a job by specifying one or more selector attachments.
-### Worker selectors
+For more information, see the section [below](#using-label-selector-attachments).
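For example, a classification policy can attach a worker selector to every job it classifies; a minimal C# sketch based on the static attachment example in the job-classification how-to (the policy ID and label values are placeholders):

```csharp
// Attach a worker selector requiring English-speaking workers to every job classified by this policy.
await client.SetClassificationPolicyAsync(
    id: "policy-1",
    workerSelectors: new List<LabelSelectorAttachment>
    {
        new StaticLabelSelector(
            new LabelSelector("English", LabelOperator.Equal, true)
        )
    }
);
```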
-A Worker selector in a Classification Policy contains a collection of `LabelSelectorAttachment` types, which is used by the classification process to attach Worker selectors to a Job based on its labels. For more information See the section [below](#using-labels-and-selectors-in-classification).
+## Queue selectors
-### Prioritization rule
+You can also specify a collection of label selector attachments to select the Queue based on its labels.
-The priority of a Job can be resolved during classification using one of many rule engines; similar to how the `QueueIdSelector` works. Read the [RouterRule concepts](router-rule-concepts.md) page for more information.
+For more information, see the section [below](#using-label-selector-attachments).
-## Using labels and selectors in classification
+## Using label selector attachments
-Job Router uses the key/value pair "labels" of a Job, Worker, and Queue to make various decisions about routing. When using a `LabelSelectorAttachment` on a `QueueSelector`, it acts like a filter. When used within the context of `WorkerSelectors`, it attaches selectors to the initial set that was created with the job. The following `LabelSelectorAttachment` types can be used:
+The following label selector attachments are available:
**Static label selector -** Always attaches the given `LabelSelector`.
-**Conditional label selector -** Will evaluate a condition defined by a rule. If it resolves to `true`, then the specified collection of selectors will be applied.
+**Conditional label selector -** Will evaluate a condition defined by a [rule](router-rule-concepts.md). If it resolves to `true`, then the specified collection of selectors will be applied.
**Passthrough label selector -** Uses a key and `LabelOperator` to check for the existence of the key. This selector can be used in the `QueueLabelSelector` to match a Queue based on the set of labels. When used with the `WorkerSelectors`, the Job's key/value pairs are attached to the `WorkerSelectors` of the Job.

**Rule label selector -** Sources a collection of selectors from one of many rule engines. Read the [RouterRule concepts](router-rule-concepts.md) page for more information.
-**Weighted allocation label selector -** A collection of `WeightedAllocation` objects that each specify a percentage based weighting and a collection of selector to apply based on the weighting allocation. For example, you may want 30% of the Jobs to go to "Contoso" and 70% of Jobs to go to "Fabrikam".
+**Weighted allocation label selector -** Enables you to specify a percentage-based weighting and a collection of selectors to apply based on the weighting allocation. For example, you may want 30% of the Jobs to go to "Vendor 1" and 70% of Jobs to go to "Vendor 2".
## Reclassifying a job

Once a Job has been classified, it can be reclassified in the following ways:

1. You can update the Job labels, which will cause the Job Router to evaluate the new labels with the previous Classification Policy.
2. You can update the Classification Policy ID of a Job, which will cause Job Router to process the existing Job against the new policy.
-3. An Exception Policy **trigger** can take the **action** of requesting a Job be reclassified
+3. An Exception Policy **trigger** can take the **action** of requesting a Job be reclassified.
> [!NOTE]
> The Job Router SDK includes an `UpdateJobLabels` method, which simply updates the labels without causing the Job Router to execute the reclassification process.
+<!-- LINKS -->
+[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md
+[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
+[job_classify_failed_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassificationfailed
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/concepts.md
-# Job Router concepts
+# Job Router key concepts
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-Azure Communication Services Job Router solves the problem of matching some abstract supply with some abstract demand on a system. Integrated with Azure Event Grid, Job Router delivers near real-time notifications to you, enabling you to build reactive applications to control the behavior of your Job Router instance.
+Azure Communication Services Job Router solves the problem of matching supply with demand.
-## Job Router overview
+A real-world example of this may be call center agents (supply) being matched to incoming support calls (demand).
-The Job Router SDKs can be used to build various business scenarios where you have the need to match a unit of work to a particular resource. For example, the work could be defined as a series of phone calls with many potential contact center agents, or a web chat request with a live agent handling multiple concurrent sessions with other people. The need to route some abstract unit of work to an available resource requires you to define the work, known as a [Job](#job), a [Queue](#queue), the [Worker](#worker), and a set of [Policies](#policies), which define the behavioral aspects of how these components interact with each other.
+## Job
-## Job Router architecture
+A Job represents a unit of work (demand), which needs to be routed to an available Worker (supply).
-Azure Communication Services Job Router uses events to notify your applications about actions within the service. The following diagrams illustrate a simplified flow common to Job Router; submitting a Job, registering a Worker, handling the Job Offer.
+A real-world example of this may be an incoming call or chat in the context of a call center.
### Job submission flow
-1. The Contoso application submits a Job to the Job Router in the Azure Communication Services instance.
-2. The Job is classified and an event is raised called **RouterJobClassified** which includes all the information about the Job and how the classification process may have modified its properties.
+1. Your application submits a Job via the Job Router SDK.
+2. The Job is classified and a [JobClassified Event][job_classified_event] is sent via EventGrid, which includes all the information about the Job and how the classification process may have modified its properties.
:::image type="content" source="../media/router/acs-router-job-submission.png" alt-text="Diagram showing Communication Services' Job Router submitting a job.":::
+## Worker
+
+A Worker represents the supply available to handle a Job. Each worker registers with one or more queues to receive jobs.
+
+A real-world example of this may be an agent working in a call center.
+ ### Worker registration flow
-1. When a Worker is ready to accept a Job, they register with the Job Router via Contoso's Application.
-2. Job Router then sends back a **RouterWorkerRegistered**
+1. When your Worker is ready to take on work, you can register the worker via the Job Router SDK.
+2. Job Router then sends a [WorkerRegistered Event][worker_registered_event]
:::image type="content" source="../media/router/acs-router-worker-registration.png" alt-text="Diagram showing Communication Services' Job Router worker registration.":::
-### Matching and accepting a job flow
+## Queue
+
+A Queue represents an ordered list of jobs waiting to be served by a worker. Workers will register with a queue to receive work from it.
+
+A real-world example of this may be a call queue in a call center.
+
+## Channel
+
+A Channel represents a grouping of jobs by some type. When a worker registers to receive work, they must also specify which channels they can handle and how much of each they can handle concurrently. Channels are just a string discriminator and aren't explicitly created.
-1. When Job Router finds a matching Worker for a Job, it offers the work by sending a **RouterWorkerOfferIssued** which the Contoso Application would receive and send a signal to the connected user using a platform such as the Azure SignalR Service.
-2. The Worker accepts the Offer.
-3. Job Router sends an **RouterWorkerOfferAccepted** signifying to the Contoso Application the Worker is assigned to the Job.
+A real-world example of this may be `voice calls` or `chats` in a call center.
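For illustration, a minimal C# sketch of a worker registration with channel configurations, taken from the matching-concepts article later in these docs (IDs and label values are placeholders):

```csharp
// With totalCapacity 2, this worker can take one "voice" job (cost 2) or two "chat" jobs (cost 1 each) at a time.
var worker = await client.RegisterWorkerAsync(
    id: "worker-1",
    queueIds: new[] { "queue-1" },
    totalCapacity: 2,
    labels: new LabelCollection() { ["Skill"] = 11 },
    channelConfigurations: new List<ChannelConfiguration>
    {
        new ChannelConfiguration(channelId: "voice", capacityCostPerJob: 2),
        new ChannelConfiguration(channelId: "chat", capacityCostPerJob: 1)
    }
);
```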
+
+## Offer
+
+An Offer is extended by JobRouter to a worker to handle a particular job when it determines a match. When this happens, you'll be notified via [EventGrid][subscribe_events]. You can either accept or decline the offer using the JobRouter SDK, or it will expire according to the time to live configured on the Distribution Policy.
+
+A real-world example of this may be the ringing of an agent in a call center.
+
+### Offer flow
+
+1. When Job Router finds a matching Worker for a Job, it offers the work by sending an [OfferIssued Event][offer_issued_event] via EventGrid.
+2. The Offer is accepted via the Job Router API.
+3. Job Router sends an [OfferAccepted Event][offer_accepted_event] signifying to your application that the Worker is assigned to the Job.
:::image type="content" source="../media/router/acs-router-accept-offer.png" alt-text="Diagram showing Communication Services' Job Router accept offer.":::
-## Real-time notifications
+## Distribution Policy
-Azure Communication Services relies on Event Grid's messaging platform to send notifications about what actions Job Router is taking on the workload you send. Job Router sends messages in the form of events whenever an important action happens such as Job lifecycle events including job creation, completion, offer acceptance, and many more.
+A Distribution Policy represents a configuration set that controls how jobs in a queue are distributed to workers registered with that queue.
+This configuration includes:
-## Job
+- How long an Offer is valid before it expires.
+- The distribution mode, which defines the order in which workers are picked when multiple workers are available.
+- How many concurrent offers there can be for a given job (a sketch of creating a policy follows this list).
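A minimal C# sketch of creating a distribution policy, mirroring the quickstart later in these docs (where `routerClient` is a `RouterClient`); the ID, name, and TTL values are placeholders:

```csharp
// Offers generated from this policy expire after 30 seconds and go to the longest-idle worker.
var distributionPolicy = await routerClient.SetDistributionPolicyAsync(
    id: "distribution-policy-1",
    name: "My Distribution Policy",
    offerTTL: TimeSpan.FromSeconds(30),
    mode: new LongestIdleMode()
);
```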
-A Job represents the unit of work, which needs to be routed to an available Worker. Jobs are defined using the Azure Communication Services Job Router SDKs or by submitting an authenticated request to the REST API. Jobs often contain a reference to some unique identifier you may have such as a call ID or a ticket number, along with the characteristics of the work being performed.
+### Distribution modes
-## Queue
+The three distribution modes are:
-When a Job is created it is assigned to a Queue, either statically at the time of submission, or dynamically through the application of a classification policy. Jobs are grouped together by their assigned Queue and can take on different characteristics depending on how you intend on distributing the workload. Queues require a **Distribution Policy** to determine how jobs are offered to eligible workers.
+- **Round Robin**: Workers are ordered by `Id`, and the worker after the one that received the previous offer is picked.
+- **Longest Idle**: The worker that has not been working on a job for the longest time is picked.
+- **Best Worker**: The workers that are best able to handle the job are picked first. The logic to determine this can optionally be customized by specifying an expression or Azure Function that compares two workers and determines which one to pick.
-Queues in the Job Router can also contain Exception Policies that determine the behavior of Jobs when certain conditions arise. For example, you may want a Job to be moved to a different Queue, the priority increased, or both based on a timer or some other condition.
+## Labels
-## Worker
+You can attach labels to workers, jobs, and queues. These are key-value pairs that can be of the `string`, `number`, or `boolean` data types.
-A Worker represents the supply available to handle a Job for a particular Queue. Each Worker registered with the Job Router comes with a set of **Labels**, their associated **Queues**, **channel configurations**, and a **total capacity score**. The Job Router uses these factors to determine when and how to route Jobs to a worker in real time.
+A real-world example of this may be the skill level of a particular worker, or their team or geographic location.
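For illustration, a minimal C# sketch of a label collection showing each supported data type; the label names and values are placeholders:

```csharp
// Labels are arbitrary key-value pairs; the values below show the string, number, and boolean types.
var labels = new LabelCollection()
{
    ["Vendor"] = "Acme",   // string
    ["Skill"] = 11,        // number
    ["English"] = true     // boolean
};
```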
-Azure Communication Services Job Router maintains and uses the status of a Worker using simple **Active**, **Inactive**, or **Draining** states to determine when available Jobs can be matched to a worker. Together with the status, the channel configuration, and the total capacity score, Job Router calculates viable Workers and issues Offers related to the Job.
+## Label selectors
-## Policies
+Label selectors can be attached to a job in order to target a subset of workers serving the queue.
-Azure Communication Services Job Router applies flexible Policies to attach dynamic behavior to various aspects of the system. Depending on the policy, a Job's Labels can be consumed & evaluated to alter a Job's priority, which Queue it should be in, and much more. Certain Policies in the Job Router offer inline function processing using PowerFx, or for more complex scenarios, a callback to an Azure Function.
+A real-world example of this may be a requirement that the agent handling an incoming call have a minimum level of knowledge of a particular product.
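A minimal C# sketch of attaching a worker selector when creating a job, following the pattern in the matching-concepts article later in these docs; the `ProductKnowledge` label is hypothetical:

```csharp
// Only workers whose "ProductKnowledge" label is at least 5 are a match for this job.
var job = await client.CreateJobAsync(
    channelId: "chat",
    queueId: "queue-1",
    workerSelectors: new List<LabelSelector>
    {
        new LabelSelector(
            key: "ProductKnowledge",
            @operator: LabelOperator.GreaterThanEqual,
            value: 5)
    }
);
```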
-**Classification policy -** A classification policy helps Job Router define the Queue, the Priority, and can alter the Worker Selectors when the sender is unable or unaware of these parameters at the time of submission. For more information about classification, see the [classification concepts](classification-concepts.md) page.
+## Classification policy
-**Distribution policy -** When the Job Router receives a new Job, the Distribution Policy is used to locate a suitable Worker and manage the Job Offers. Workers are selected using different **modes**, and based on the policy, Job Router can notify one or more Workers concurrently.
+A classification policy can be used to dynamically select a queue, determine job priority, and attach worker label selectors to a job by using a rules engine.
-**Exception policy -** An exception policy controls the behavior of a Job based on a trigger and executes a desired action. The exception policy is attached to a Queue so it can control the behavior of Jobs in the Queue.
+## Exception policy
+
+An exception policy controls the behavior of a Job based on a trigger and executes a desired action. The exception policy is attached to a Queue so it can control the behavior of Jobs in the Queue.
## Next steps

- [Router Rule concepts](router-rule-concepts.md)
- [Classification concepts](classification-concepts.md)
-- [Distribution concepts](distribution-concepts.md)
+- [How jobs are matched to workers](matching-concepts.md)
- [Quickstart guide](../../quickstarts/router/get-started-router.md)
- [Manage queues](../../how-tos/router-sdk/manage-queue.md)
- [Classifying a Job](../../how-tos/router-sdk/job-classification.md)
- [Escalate a Job](../../how-tos/router-sdk/escalate-job.md)
- [Subscribe to events](../../how-tos/router-sdk/subscribe-events.md)
+<!-- LINKS -->
+[azure_sub]: https://azure.microsoft.com/free/dotnet/
+[cla]: https://cla.microsoft.com
+[nuget]: https://www.nuget.org/
+[netstandars2mappings]:https://github.com/dotnet/standard/blob/master/docs/versions.md
+[useraccesstokens]:https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-csharp
+[communication_resource_docs]: https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp
+[communication_resource_create_portal]: https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp
+[communication_resource_create_power_shell]: https://docs.microsoft.com/powershell/module/az.communication/new-azcommunicationservice
+[communication_resource_create_net]: https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-net
+
+[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md
+[worker_registered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered
+[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
+[offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued
+[offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted
communication-services Distribution Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/distribution-concepts.md
- Title: Job distribution concepts for Azure Communication Services-
-description: Learn about the Azure Communication Services Job Router distribution concepts.
----- Previously updated : 10/14/2021----
-# Job distribution concepts
--
-Azure Communication Services Job Router uses a flexible distribution process, which involves the use of a policy and a Job offer lifecycle to assign Workers. This article describes the different ways a Job can be distributed, what the Job offer lifecycle is, and the effect this process has on Workers.
-
-## Job distribution overview
-
-Deciding how to distribute Jobs to Workers is a key feature of Job Router and the SDK offers a similarly flexible and extensible model for you to customize your environment. As described in the [classification concepts](classification-concepts.md) guide, once a Job has been classified, Job Router will look for a suitable Worker based on the characteristics of the Job and the Distribution Policy. Alternatively, if Workers are busy, Job Router will look for a suitable Job when a Worker becomes available. Worker suitability is decided across three characteristics; [an available channel](#channel-configurations), their [abilities,](#worker-abilities) and [status](#worker-status). Once a suitable Worker has been found, a check is performed to make sure they have an open channel the Job can be assigned to.
-
-These two approaches are key concepts in how Job Router initiates the discovery of Jobs or Workers.
-
-### Finding workers for a job
-
-Once a Job has completed the [classification process](classification-concepts.md), Job Router will apply the Distribution Policy configured on the Queue to select one or more workers who meet the worker selectors on the job and generate offers for those workers to take on the job.
-
-### Finding a job for a worker
-
-There are several scenarios, which will trigger Job Router to find a job for a worker:
--- When a Worker registers with Job Router-- When a Job is closed and the channel is released-- When a Job offer is declined or revoked-
-The distribution process is the same as finding Workers for a Job. When a worker is found, an [offer](#job-offer-overview) is generated.
-
-## Worker overview
-
-Workers **register** with Job Router using the SDK and supply the following basic information:
--- A worker ID and name-- Queue IDs-- Total capacity (number)-- A list of **channel configurations**-- A set of labels -
-Job Router will always hold a reference to any registered Worker even if they are manually or automatically **deregistered**.
-
-### Channel configurations
-
-Each Job requires a channel ID property representing a pre-configured Job Router channel or a custom channel. A channel configuration consists of a `channelId` string and a `capacityCostPerJob` number. Together they represent an abstract mode of communication and the cost of that mode. For example, most people can only be on one phone call at a time, thus a `Voice` channel may have a high cost of `100`. Alternatively, certain workloads such as chat can have a higher concurrency which means they have a lower cost. You can think of channel configurations as open slots in which a Job can be assigned or attached to. The following example illustrates this point:
-
-```csharp
-await client.RegisterWorkerAsync(
- id: "EdmontonWorker",
- queueIds: new[] { "XBOX_Queue", "XBOX_Escalation_Queue" },
- totalCapacity: 100,
- labels: new LabelCollection
- {
- { "Location", "Edmonton" },
- { "XBOX_Hardware", 7 },
- },
- channelConfigurations: new List<ChannelConfiguration>
- {
- new (
- channelId: ManagedChannels.AcsVoiceChannel,
- capacityCostPerJob: 100
- ),
- new (
- channelId: ManagedChannels.AcsChatChannel,
- capacityCostPerJob: 33
- )
- }
-);
-```
-
-The above worker is registered with two channel configurations each with unique costs per channel. The effective result is that the `EdmontonWorker` can handle three concurrent `ManagedChannels.AcsChatChannel` Jobs or one `ManagedChannels.AcsVoiceChannel` Job.
-
-Job Router includes the following pre-configured channel IDs for you to use:
--- ManagedChannels.AcsChatChannel-- ManagedChannels.AcsVoiceChannel-- ManagedChannels.AcsSMSChannel-
-New abstract channels can be created using the Job Router SDK as follows:
-
-```csharp
-await client.SetChannelAsync(
- id: "MakePizza",
- name: "Make a pizza"
-);
-
-await client.SetChannelAsync(
- id: "MakeDonairs",
- name: "Make a donair"
-);
-
-await client.SetChannelAsync(
- id: "MakeBurgers",
- name: "Make a burger"
-);
-```
-
-You can then use the channel when registering the Worker to represent their ability to take on a Job matching that channel ID as follows:
-
-```csharp
-await client.RegisterWorkerAsync(
- id: "PizzaCook",
- queueIds: new[] { "PizzaOrders", "DonairOrders", "BurgerOrders" },
- totalCapacity: 100,
- labels: new LabelCollection
- {
- { "Location", "New Jersey" },
- { "Language", "English" },
- { "PizzaMaker", 7 },
- { "DonairMaker", 10},
- { "BurgerMaker", 5}
- },
- channelConfigurations: new List<ChannelConfiguration>
- {
- new (
- channelId: MakePizza,
- capacityCostPerJob: 50
- ),
- new (
- channelId: MakeDonair,
- capacityCostPerJob: 33
- ),
- new (
- channelId: MakeBurger,
- capacityCostPerJob: 25
- )
- }
-);
-```
-
-The above example illustrates three abstract channels each with their own cost per Job. As such, the following Job concurrency examples are possible for the `PizzaCook` Worker:
-
-| MakePizza | MakeDonair | MakeBurger | Score |
-|--|--|--|--|
-| 2 | | | 100 |
-| | 3 | | 99 |
-| 1 | 1 | | 83 |
-| | 2 | 1 | 91 |
-| | | 4 | 100 |
-| | 1 | 2 | 83 |
-
-### Worker abilities
-
-Aside from the available channels a Worker may have, the distribution process uses the labels collection of the registered Worker to determine their suitability for a Job. In the pizza cook example above, the Worker has a label collection consisting of:
-
-```csharp
-new LabelCollection
- {
- { "Location", "New Jersey" },
- { "Language", "English" },
- { "PizzaMaker", 7 },
- { "DonairMaker", 10},
- { "BurgerMaker", 5}
- }
-```
-
-When a Job is submitted, the **worker selectors** are used to define the requirements for that particular unit of work. If a Job requires an English-speaking person who is good at making donairs, the SDK call would be as follows:
-
-```csharp
-await client.CreateJobAsync(
- channelId: "MakeDonair",
- channelReference: "ReceiptNumber_555123",
- queueId: "DonairOrders",
- priority: 1,
- workerSelectors: new List<LabelSelector>
- {
- new (
- key: "DonairMaker",
- @operator: LabelOperator.GreaterThanEqual,
- value: 8),
- new (
- key: "English",
- @operator: LabelOperator.GreaterThan,
- value: 5)
- });
-```
-
-### Worker status
-
-Since Job Router can handle concurrent Jobs for a Worker depending on their Channel Configurations, the concept of availability is represented by three states:
-
-**Active -** A Worker is registered with the Job Router and is willing to accept a Job
-
-**Draining -** A Worker has deregistered with the Job Router, however they are currently assigned one or more active Jobs
-
-**Inactive -** A Worker has deregistered with the Job Router and they have no active Jobs
-
-## Job offer overview
-
-When the distribution process locates a suitable Worker who has an open channel and has the correct status, a Job offer is generated and an event is sent. The Distribution Policy contains the following configurable properties for the offer:
-
-**OfferTTL -** The time-to-live for each offer generated
-
-**Mode -** The **distribution modes** which contain both `minConcurrentOffers` and `maxConcurrentOffers` properties. Set two integers for these two variables to control the concurrent numbers of active workers that job offer will be distributed. For example:
-
-```csharp
- "mode": {
- "kind": "longest-idle",
- "minConcurrentOffers": 1,
- "maxConcurrentOffers": 5,
- "bypassSelectors": false
- }
-}
-```
-
-In the above example, minConcurrentOffers and maxConcurrentOffers will distribute at least one offer and up to a maximum of five offers to active Workers who match the requirements of the Job.
-
-> [!Important]
-> When a Job offer is generated for a Worker it consumes one of the channel configurations matching the channel ID of the Job. The consumption of this channel means the Worker will not receive another offer unless additional capacity for that channel is available on the Worker. If the Worker declines the offer or the offer expires, the channel is released.
-
-### Job offer lifecycle
-
-The following Job offer lifecycle events can be observed through your Event Grid subscription:
--- RouterWorkerOfferIssued-- RouterWorkerOfferAccepted-- RouterWorkerOfferDeclined-- RouterWorkerOfferExpired-- RouterWorkerOfferRevoked-
-> [!NOTE]
-> An offer can be accepted or declined by a Worker by using the SDK while all other events are internally generated.
-
-## Distribution modes
-
-Job Router includes the following distribution modes:
-
-**LongestIdleMode -** Generates Offer for the longest idle Worker in a Queue
-
-**RoundRobinMode -** Given a collection of Workers, pick the next Worker after the last one that was picked ordered by ID.
-
-**BestWorkerMode -** Use the Job Router's [RuleEngine](router-rule-concepts.md) to choose a Worker based on their labels
-
-## Distribution summary
-
-Depending on several factors such as a Worker's status, channel configuration/capacity, the distribution policy's mode, and offer concurrency can influence the way Job offers are generated. It is suggested to start with a simple implementation and add complexity as your requirements dictate.
communication-services Matching Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/matching-concepts.md
+
+ Title: Job Matching
+
+description: Learn about the Azure Communication Services Job Router distribution concepts.
+++++ Last updated : 01/26/2022++
+zone_pivot_groups: acs-js-csharp
++
+# How jobs are matched to workers
++
+This document describes the registration of workers, the submission of jobs, and how they're matched to each other.
+
+## Worker Registration
+
+Before a worker can receive offers to service a job, it must be registered.
+In order to register, we need to specify which queues the worker will listen on, which channels it can handle, and a set of labels.
+
+In the following example we register a worker to
+
+- Listen on `queue-1` and `queue-2`
+- Be able to handle both the voice and chat channels. In this case, the worker could take a single `voice` job or two `chat` jobs at the same time. This is configured by specifying the total capacity of the worker and assigning a cost per job for each channel.
+- Have a set of labels that describe things about the worker that could help determine if it's a match for a particular job.
++
+```csharp
+var worker = await client.RegisterWorkerAsync(
+ id: "worker-1",
+ queueIds: new[] { "queue-1", "queue-2" },
+ totalCapacity: 2,
+ channelConfigurations: new List<ChannelConfiguration>
+ {
+ new ChannelConfiguration(channelId: "voice", capacityCostPerJob: 2),
+ new ChannelConfiguration(channelId: "chat", capacityCostPerJob: 1)
+ },
+ labels: new LabelCollection()
+ {
+ ["Skill"] = 11,
+ ["English"] = true,
+ ["French"] = false,
+ ["Vendor"] = "Acme"
+ }
+);
+```
+++
+```typescript
+let worker = await client.registerWorker({
+ id: "worker-1",
+ queueAssignments: [
+ { queueId: "queue-1" },
+ { queueId: "queue-2" }
+ ],
+ totalCapacity: 2,
+ channelConfigurations: [
+ { channelId: "voice", capacityCostPerJob: 2 },
+ { channelId: "chat", capacityCostPerJob: 1 }
+ ],
+ labels: {
+ Skill: 11,
+ English: true,
+ French: false,
+ Vendor: "Acme"
+ }
+});
+```
++
+## Job Submission
+
+In the following example, we'll submit a job that
+
+- Goes directly to `queue-1`.
+- For the `chat` channel.
+- With a label selector that specifies that any worker servicing this job must have a label of `English` set to `true`.
+- With a label selector that specifies that any worker servicing this job must have a label of `Skill` greater than `10` and this condition will expire after one minute.
+- With a label of `name` set to `John`.
++
+```csharp
+var job = await client.CreateJobAsync(
+ channelId: "chat",
+ queueId: "queue-1",
+ workerSelectors: new List<LabelSelector>
+ {
+ new LabelSelector(
+ key: "English",
+ @operator: LabelOperator.Equal,
+ value: true),
+ new LabelSelector(
+ key: "Skill",
+ @operator: LabelOperator.GreaterThan,
+ value: 10,
+ ttl: TimeSpan.FromMinutes(1)),
+ },
+ labels: new LabelCollection()
+ {
+ ["name"] = "John"
+ });
+```
+++
+```typescript
+let job = await client.createJob({
+ channelId: "chat",
+ queueId: "queue-1",
+ workerSelectors: [
+ { key: "English", operator: "equal", value: true },
+    { key: "Skill", operator: "greaterThan", value: 10, ttl: "00:01:00" },
+ ],
+ labels: {
+ name: "John"
+ },
+});
+```
++
+Job Router will now try to match this job to an available worker listening on `queue-1` for the `chat` channel, with `English` set to `true` and `Skill` greater than `10`.
+Once a match is made, an offer is created. The distribution policy that is attached to the queue will control how many active offers there can be for a job and how long each offer is valid. [You'll receive][subscribe_events] an [OfferIssued Event][offer_issued_event] which would look like this:
+
+```json
+{
+ "workerId": "worker-1",
+ "jobId": "7f1df17b-570b-4ae5-9cf5-fe6ff64cc712",
+ "channelId": "chat",
+ "queueId": "queue-1",
+ "offerId": "525fec06-ab81-4e60-b780-f364ed96ade1",
+ "offerTimeUtc": "2021-06-23T02:43:30.3847144Z",
+ "expiryTimeUtc": "2021-06-23T02:44:30.3847674Z",
+ "jobPriority": 1,
+ "jobLabels": {
+ "name": "John"
+ }
+}
+```
+
+The [OfferIssued Event][offer_issued_event] includes details about the job, worker, how long the offer is valid and the `offerId` which you'll need to accept or decline the job.
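To act on the offer, you call back into the Job Router SDK. Below is a minimal C# sketch; the accept and decline method names are assumptions based on the preview SDK, so check the SDK reference for the exact names and signatures:

```csharp
// Accept the offer so the job is assigned to worker-1 (worker ID, then offer ID; method name assumed).
await client.AcceptJobOfferAsync("worker-1", "525fec06-ab81-4e60-b780-f364ed96ade1");

// Or decline it, which releases the worker's capacity for that channel (method name assumed).
// await client.DeclineJobOfferAsync("worker-1", "525fec06-ab81-4e60-b780-f364ed96ade1");
```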
+
+<!-- LINKS -->
+[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md
+[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
+[offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued
+
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/router/router-rule-concepts.md
Title: Router rules engine concepts for Azure Communication Services
+ Title: Job Router rule engines
description: Learn about the Azure Communication Services Job Router rules engine concepts.
Last updated 10/14/2021
+zone_pivot_groups: acs-js-csharp
-# Job Router rules engine concepts
+# Job Router rule engines
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-Azure Communication Services Job Router uses an extensible rules engine to process data and make decisions about your Jobs and Workers. This document covers what the rule engine does and why you may want to apply it in your implementation.
+Job Router can use one or more rule engines to process data and make decisions about your Jobs and Workers. This document covers what the rule engines do and why you may want to apply them in your implementation.
## Rules engine overview Controlling the behavior of your implementation can often include complex decision making. Job Router provides a flexible way to invoke behavior programmatically using various rule engines. Job Router's rule engines generally take a set of **labels** defined on objects such as a Job, a Queue, or a Worker as an input, apply the rule and produce an output.
-> [!NOTE]
-> Although the rule engine typically uses labels as input, it can also set values such as a Queue ID without the use of evaluating labels.
+Depending on where you apply rules in Job Router, the result can vary. For example, a Classification Policy can choose a Queue ID based on the labels defined on the input of a Job. In another example, a Distribution Policy can find the best Worker using a custom scoring rule.
-Depending on where you apply rules in Job Router, the result can vary. For example, a Classification Policy can choose a Queue ID based on the labels defined on the input of a Job. In another example, a Distribution Policy can find the best Worker using a custom scoring rule defined by the `RouterRule`.
+## Rule engine types
-### Example: Use a static rule in a classification policy to set the queue ID
+The following rule engine types exist in Job Router to provide flexibility in how your Jobs are processed.
-In this example a `StaticRule`, which is a type of `RouterRule` can be used to set the Queue ID of all Jobs, which reference the Classification Policy ID `XBOX_QUEUE_POLICY`.
+**Static rule -** Used to specify a static value such as selecting a specific Queue ID.
+
+**Expression rule -** Uses the [PowerFx](https://powerapps.microsoft.com/en-us/blog/what-is-microsoft-power-fx/) language to define your rule as an inline expression.
+
+**Azure Function rule -** Allows the Job Router to pass the input labels as a payload to an Azure Function and respond back with an output value.
+
+### Example: Use a static rule to set the priority of a job
+
+In this example, a `StaticRule`, which is a subtype of `RouterRule`, is used to set the priority of all Jobs that use this classification policy.
+ ```csharp await client.SetClassificationPolicyAsync(
- id: "XBOX_QUEUE_POLICY",
- queueSelector: new QueueIdSelector(new StaticRule("XBOX"))
-)
+ id: "my-policy-id",
+ prioritizationRule: new StaticRule(5)
+);
+```
+++
+```typescript
+await client.upsertClassificationPolicy({
+ id: "my-policy-id",
+ prioritizationRule: {
+ kind: "static-rule",
+ value: 5
+ }
+});
```
-## RouterRule types
-The following `RouterRule` types exist in Job Router to provide flexibility in how your Jobs are processed.
-**Static rule -** This rule can be used specify a static value such as selecting a specific Queue ID.
+### Example: Use an expression rule to set the priority of a job
-**Expression rule -** An expression rule uses the [PowerFx](https://powerapps.microsoft.com/en-us/blog/what-is-microsoft-power-fx/) language to define your rule as an inline expression.
+In this example, an `ExpressionRule`, which is a subtype of `RouterRule`, is used to set the priority of all Jobs that use this classification policy.
-**Azure Function rule -** Specifying a URI and an `AzureFunctionRuleCredential`, this rule allows the Job Router to pass the input labels as a payload and respond back with an output value. This rule type can be used when your requirements are complex.
+
+```csharp
+await client.SetClassificationPolicyAsync(
+ id: "my-policy-id",
+    prioritizationRule: new ExpressionRule("If(job.Urgent = true, 10, 5)")
+);
+```
+++
+```typescript
+await client.upsertClassificationPolicy({
+ id: "my-policy-id",
+ prioritizationRule: {
+ kind: "expression-rule",
+ expression: "If(job.Urgent = true, 10, 5)"
+ }
+});
+```
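### Example: Use an Azure Function rule to set the priority of a job

The following C# snippet is only a sketch: it assumes an `AzureFunctionRule` constructor that accepts the function's URI (other overloads may take an `AzureFunctionRuleCredential`), which may not match the released SDK exactly; check the SDK reference for the actual shape. The policy ID and URI are placeholders.

```csharp
// Delegate the priority decision to an Azure Function that receives the job labels as its payload.
await client.SetClassificationPolicyAsync(
    id: "my-policy-id",
    prioritizationRule: new AzureFunctionRule("https://my-function-app.azurewebsites.net/api/determine-priority")   // placeholder URI; constructor shape assumed
);
```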
-> [!NOTE]
-> Although the **Direct Map rule** is part of the Job Router SDK, it is only supported in the `NearestQueueLabelSelector` at this time.
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/closed-captions.md
Here are main scenarios where Closed Captions are useful:
- **Accessibility**. Scenarios when audio can't be heard, either because of a noisy environment, such as an airport, or because of an environment that must be kept quiet, such as a hospital. - **Inclusivity**. Closed Captioning was developed to aid hearing-impaired people, but it could be useful for a language proficiency as well.
-![closed captions](../media/call-closed-caption.png)
+![closed captions work flow](../media/call-closed-caption.png)
## When to use Closed Captions
communication-services Job Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/how-tos/router-sdk/job-classification.md
Last updated 10/14/2021
+zone_pivot_groups: acs-js-csharp
#Customer intent: As a developer, I want Job Router to classify my Job for me.
Learn to use a classification policy in Job Router to dynamically resolve the qu
- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). - Optional: Complete the quickstart to [get started with Job Router](../../quickstarts/router/get-started-router.md)
-## Static classification
+## Create a classification policy
-When creating a Job with the SDK, specify the queue, priority, and worker selectors only; this method is known as **static classification**. The following example would place a Job in the `XBOX_DEFAULT_QUEUE` with a priority of `1` and require workers to have a skill of `XBOX_Hardware` greater than or equal to `5`.
+The following example will leverage [PowerFx Expressions](https://powerapps.microsoft.com/en-us/blog/what-is-microsoft-power-fx/) to select both the queue and priority. The expression will attempt to match the Job label called `Region` equal to `NA` resulting in the Job being put in the `XBOX_NA_QUEUE` if found, otherwise the `XBOX_DEFAULT_QUEUE`. If the `XBOX_DEFAULT_QUEUE` was also not found, then the job will automatically be sent to the fallback queue `DEFAULT_QUEUE` as defined by `fallbackQueueId`. Additionally, the priority will be `10` if a label called `Hardware_VIP` was matched, otherwise it will be `1`.
-> [!NOTE]
-> A Job can be [reclassified after submission](#reclassify-a-job-after-submission) even if it was initially created without a classification policy. In this case, Job Router will evaluate the policy's behavior against the **labels** and make the necessary adjustments to the queue, priority, and worker selectors.
+
+```csharp
+var policy = await client.SetClassificationPolicyAsync(
+ id: "XBOX_NA_QUEUE_Priority_1_10",
+ name: "Select XBOX Queue and set priority to 1 or 10",
+ queueSelector: new QueueIdSelector(
+ new ExpressionRule("If(job.Region = \"NA\", \"XBOX_NA_QUEUE\", \"XBOX_DEFAULT_QUEUE\")")
+ ),
+ prioritizationRule: new ExpressionRule("If(job.Hardware_VIP = true, 10, 1)"),
+ fallbackQueueId: "DEFAULT_QUEUE"
+);
+```
+++
+```typescript
+await client.upsertClassificationPolicy({
+ id: "XBOX_NA_QUEUE_Priority_1_10",
+ fallbackQueueId: "DEFAULT_QUEUE",
+ queueSelector: {
+ kind: "queue-id",
+ rule: {
+ kind: "expression-rule",
+ expression: "If(job.Region = \"NA\", \"XBOX_NA_QUEUE\", \"XBOX_DEFAULT_QUEUE\")"
+ }
+ },
+ prioritizationRule: {
+ kind: "expression-rule",
+ expression: "If(job.Hardware_VIP = true, 10, 1)"
+  }
+});
+```
++
+## Submit the job
+
+The following example will cause the classification policy to evaluate the Job labels. The outcome will place the Job in the queue called `XBOX_NA_QUEUE` and set the priority to `1`.
+ ```csharp var job = await client.CreateJobAsync(
- channelId: ManagedChannels.AcsVoiceChannel,
- channelReference: "12345",
- queueId: queue.Value.Id,
- priority: 1,
- workerSelectors: new List<LabelSelector>
+ channelId: "voice",
+ classificationPolicyId: "XBOX_NA_QUEUE_Priority_1_10",
+ labels: new LabelCollection()
{
- new (
- key: "Location",
- @operator: LabelOperator.Equal,
- value: "Edmonton")
- });
+ ["Region"] = "NA",
+ ["Caller_Id"] = "tel:7805551212",
+        ["Caller_NPA_NXX"] = "780555",
+        ["XBOX_Hardware"] = 7
+ }
+);
// returns a new GUID such as: 4ad7f4b9-a0ff-458d-b3ec-9f84be26012b ```
-## Dynamic classification
++
+```typescript
+await client.createJob({
+ channelId: "voice",
+    classificationPolicyId: "XBOX_NA_QUEUE_Priority_1_10",
+ labels: {
+ Region: "NA",
+ Caller_Id: "tel:7805551212",
+ Caller_NPA_NXX: "780555",
+ XBOX_Hardware: 7
+ },
+});
+```
++
+## Attaching Worker Selectors
+
+You can use the classification policy to attach additional worker selectors to a job.
-As described above, an easy way of submitting a Job is to specify the Priority, Queue, and Worker Selectors during submission. When doing so, the sender needs to have knowledge about these characteristics. To avoid the sender having explicit knowledge about the inner workings of the Job Router's behavior, the sender can specify a **classification policy** along with a generic **labels** collection to invoke the dynamic behavior.
+### Static Attachments
-### Create a classification policy
+In this example, we are using a static attachment, which will always attach the specified label selector to a job.
-The following classification policy will use the low-code [PowerFx](https://powerapps.microsoft.com/en-us/blog/what-is-microsoft-power-fx/) language to select both the queue and priority. The expression will attempt to match the Job label called `Region` equal to `NA` resulting in the Job being put in the `XBOX_NA_QUEUE` if found, otherwise the `XBOX_DEFAULT_QUEUE`. If the `XBOX_DEFAULT_QUEUE` was also not found, then the job will automatically be sent to the fallback queue `DEFAULT_QUEUE` as defined by `fallbackQueueId`. Additionally, the priority will be `10` if a label called `Hardware_VIP` was matched, otherwise it will be `1`.
```csharp
-var policy = await client.SetClassificationPolicyAsync(
- id: "XBOX_NA_QUEUE_Priority_1_10",
- name: "Select XBOX Queue and set priority to 1 or 10",
- queueSelector: new QueueIdSelector(
- new ExpressionRule("If(job.Region = \"NA\", \"XBOX_NA_QUEUE\", \"XBOX_DEFAULT_QUEUE\")")
- ),
+await client.SetClassificationPolicyAsync(
+ id: "policy-1",
workerSelectors: new List<LabelSelectorAttachment> { new StaticLabelSelector(
- new LabelSelector(
- key: "Language",
- @operator: LabelOperator.Equal,
- value: "English")
+ new LabelSelector("Foo", LabelOperator.Equal, "Bar")
)
- },
- prioritizationRule: new ExpressionRule("If(job.Hardware_VIP = true, 10, 1)"),
- fallbackQueueId: "DEFAULT_QUEUE"
+ }
); ```
-### Submit the job
-The following example will cause the classification policy to evaluate the Job labels. The outcome will place the Job in the queue called `XBOX_NA_QUEUE` and set the priority to `1`.
+
+```typescript
+await client.upsertClassificationPolicy(
+ id: "policy-1",
+ workerSelectors: [
+ {
+ kind: "static",
+ labelSelector: { key: "foo", operator: "equal", value: "bar" }
+ }
+ ]
+);
+```
++
+### Conditional Attachments
+
+In this example, we are using a conditional attachment, which will evaluate a condition against the job labels to determine if the said label selectors should be attached to the job.
+ ```csharp
-var dynamicJob = await client.CreateJobAsync(
- channelId: ManagedChannels.AcsVoiceChannel,
- channelReference: "my_custom_reference_number",
- classificationPolicyId: "XBOX_NA_QUEUE_Priority_1_10",
- labels: new LabelCollection()
+await client.SetClassificationPolicyAsync(
+ id: "policy-1",
+ workerSelectors: new List<LabelSelectorAttachment>
{
- { "Region", "NA" },
- { "Caller_Id", "tel:7805551212" },
- { "Caller_NPA_NXX", "780555" },
- { "XBOX_Hardware", 7 }
+ new ConditionalLabelSelector(
+ condition: new ExpressionRule("job.Urgent = true"),
+ labelSelectors: new List<LabelSelector>
+ {
+ new LabelSelector("Foo", LabelOperator.Equal, "Bar")
+ })
} );
+```
-// returns a new GUID such as: 4ad7f4b9-a0ff-458d-b3ec-9f84be26012b
++
+```typescript
+await client.upsertClassificationPolicy(
+ id: "policy-1",
+ workerSelectors: [
+ {
+ kind: "conditional",
+ condition: {
+ kind: "expression-rule",
+ expression: "job.Urgent = true"
+ },
+ labelSelectors: [
+ { key: "Foo", operator: "equal", value: "Bar" }
+ ]
+ }
+ ]
+);
+```
++
+### Weighted Allocation Attachments
+
+In this example, we are using a weighted allocation attachment, which will divide up jobs according to the weightings specified and attach different selectors accordingly. Here, we are saying that 30% of jobs should go to workers with the label `Vendor` set to `A` and 70% should go to workers with the label `Vendor` set to `B`.
++
+```csharp
+await client.SetClassificationPolicyAsync(
+ id: "policy-1",
+ workerSelectors: new List<LabelSelectorAttachment>
+ {
+ new WeightedAllocationLabelSelector(new WeightedAllocation[]
+ {
+ new WeightedAllocation(
+ weight: 0.3,
+ labelSelectors: new List<LabelSelector>
+ {
+ new LabelSelector("Vendor", LabelOperator.Equal, "A")
+ }),
+ new WeightedAllocation(
+ weight: 0.7,
+ labelSelectors: new List<LabelSelector>
+ {
+ new LabelSelector("Vendor", LabelOperator.Equal, "B")
+ })
+ })
+ }
+);
``` ++
+```typescript
+
+await client.upsertClassificationPolicy(
+ id: "policy-1",
+ workerSelectors: [
+ {
+ kind: "weighted-allocation",
+ allocations: [
+ {
+ weight: 0.3,
+ labelSelectors: [{ key: "Vendor", operator: "equal", value: "A" }]
+ },
+ {
+ weight: 0.7,
+ labelSelectors: [{ key: "Vendor", operator: "equal", value: "B" }]
+ }
+ ]
+ }
+ ]
+);
+```
## Reclassify a job after submission

Once the Job Router has received and classified a Job using a policy, you have the option of reclassifying it using the SDK. The following example illustrates one way to increase the priority of the Job to `10`, simply by specifying the **Job ID**, calling the `ReclassifyJobAsync` method, and including the `Hardware_VIP` label.

```csharp
var reclassifiedJob = await client.ReclassifyJobAsync(
    jobId: "4ad7f4b9-a0ff-458d-b3ec-9f84be26012b",
    classificationPolicyId: null,
    labelsToUpdate: new LabelCollection()
    {
- { "Hardware_VIP", true }
+ ["Hardware_VIP"] = true
    }
);
```
+```typescript
+await client.reclassifyJob("4ad7f4b9-a0ff-458d-b3ec-9f84be26012b", {
+ classificationPolicyId: null,
+ labelsToUpdate: {
+ Hardware_VIP: true
+ }
+});
+```
+
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/how-tos/router-sdk/subscribe-events.md
Copy and paste the following json payload in a text file named `test.json`.
"Microsoft.Communication.RouterJobClosed", "Microsoft.Communication.RouterJobCancelled", "Microsoft.Communication.RouterJobExceptionTriggered",
- "Microsoft.Communication.RouterJobExceptionCleared",
"Microsoft.Communication.RouterWorkerOfferIssued", "Microsoft.Communication.RouterWorkerOfferAccepted", "Microsoft.Communication.RouterWorkerOfferDeclined",
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/rooms/get-started-rooms.md
zone_pivot_groups: acs-csharp-java
This quickstart will help you get started with Azure Communication Services Rooms. A `room` is a server-managed communications space for a known, fixed set of participants to collaborate for a pre-determined duration. The [rooms conceptual documentation](../../concepts/rooms/room-concept.md) covers more details and potential use cases for `rooms`.

::: zone pivot="programming-language-csharp"
::: zone-end

::: zone pivot="programming-language-java"
::: zone-end

## Object model
communication-services Get Started Router https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/router/get-started-router.md
Add the following `using` directives to the top of **Program.cs** to include the
```csharp using Azure.Communication.JobRouter;
+using Azure.Communication.JobRouter.Models;
``` Update `Main` function signature to be `async` and return a `Task`.
var client = new RouterClient(connectionString);
Job Router uses a distribution policy to decide how Workers will be notified of available Jobs and the time to live for the notifications, known as **Offers**. Create the policy by specifying the **ID**, a **name**, an **offerTTL**, and a distribution **mode**. ```csharp
-var distributionPolicy = await client.SetDistributionPolicyAsync(
- id: "Longest_Idle_45s_Min1Max10",
- name: "Longest Idle matching with a 45s offer expiration; min 1, max 10 offers",
- offerTTL: TimeSpan.FromSeconds(45),
- mode: new LongestIdleMode(
- minConcurrentOffers: 1,
- maxConcurrentOffers: 10)
+var distributionPolicy = await routerClient.SetDistributionPolicyAsync(
+ id: "distribution-policy-1",
+ name: "My Distribution Policy",
+ offerTTL: TimeSpan.FromSeconds(30),
+ mode: new LongestIdleMode()
); ``` ## Create a queue
-Jobs are organized into a logical Queue. Create the Queue by specifying an **ID**, **name**, and provide the **Distribution Policy** object's ID you created above.
+Create the Queue by specifying an **ID**, **name**, and provide the **Distribution Policy** object's ID you created above.
```csharp
-var queue = await client.SetQueueAsync(
- id: "XBOX_Queue",
- name: "XBOX Queue",
+var queue = await routerClient.SetQueueAsync(
+ id: "queue-1",
+ name: "My Queue",
distributionPolicyId: distributionPolicy.Value.Id ); ``` ## Submit a job
-The quickest way to get started is to specify the ID of the Queue, the priority, and worker requirements when submitting a Job. In the example below, a Job will be submitted directly to the **XBOX Queue** where the workers in that queue require a `Location` label matching the name `Edmonton`.
+
+Now, we can submit a job directly to that queue, with a worker selector that requires the worker to have the label `Some-Skill` greater than 10.
```csharp
-var job = await client.CreateJobAsync(
- channelId: ManagedChannels.AcsChatChannel,
- channelReference: "12345",
+var job = await routerClient.CreateJobAsync(
+ channelId: "my-channel",
queueId: queue.Value.Id, priority: 1,
- workerSelector: new List<LabelSelector>
+ workerSelectors: new List<LabelSelector>
{
- new (
- key: "Location",
- @operator: LabelOperator.Equal,
- value: "Edmonton")
+ new LabelSelector(
+ key: "Some-Skill",
+ @operator: LabelOperator.GreaterThan,
+ value: 10)
}); ``` ## Register a worker
-Register a Worker by referencing the Queue ID created previously along with a **capacity** value, **labels**, and **channel configuration** to ensure the `EdmontonWorker` is assigned to the `XBOX_Queue'.
+
+Now, we register a worker to receive work from that queue, with a label of `Some-Skill` equal to 11 and capacity on `my-channel`.
```csharp
-var edmontonWorker = await client.RegisterWorkerAsync(
- id: "EdmontonWorker",
- queueIds: new []{ queue.Value.Id },
- totalCapacity: 100,
+var worker = await routerClient.RegisterWorkerAsync(
+ id: "worker-1",
+ queueIds: new[] { queue.Value.Id },
+ totalCapacity: 1,
labels: new LabelCollection() {
- {"Location", "Edmonton"}
+ ["Some-Skill"] = 11
}, channelConfigurations: new List<ChannelConfiguration> {
- new (
- channelId: ManagedChannels.AcsVoiceChannel,
- capacityCostPerJob: 100)
+ new ChannelConfiguration("my-channel", 1)
} ); ```
-## Query the worker to observe the job offer
-Use the Job Router client connection to query the Worker and observe the ID of the Job against the ID
+### Offer
-```csharp
- // wait 500ms for the Job Router to offer the Job to the Worker
- Task.Delay(500).Wait();
+We should get a [RouterWorkerOfferIssued][offer_issued_event] from our [EventGrid subscription][subscribe_events].
+However, we could also wait a few seconds and then query the worker directly against the JobRouter API to see if an offer was issued to it.
- var result = await client.GetWorkerAsync(edmontonWorker.Value.Id);
-
- Console.WriteLine(
- $"Job ID: {job.Value.Id} offered to {edmontonWorker.Value.Id} " +
- $"should match Job ID attached to worker: {result.}");
+```csharp
+await Task.Delay(TimeSpan.FromSeconds(2));
+var result = await routerClient.GetWorkerAsync(worker.Value.Id);
+foreach (var offer in result.Value.Offers)
+{
+ Console.WriteLine($"Worker {worker.Value.Id} has an active offer for job {offer.JobId}");
+}
``` Run the application using `dotnet run` and observe the results.
Run the application using `dotnet run` and observe the results.
```console dotnet run
-Job 6b83c5ad-5a92-4aa8-b986-3989c791be91 offered to EdmontonWorker should match Job ID from offer attached to worker: 6b83c5ad-5a92-4aa8-b986-3989c791be91
+
+Worker worker-1 has an active offer for job 6b83c5ad-5a92-4aa8-b986-3989c791be91
> [!NOTE]
> Running the application more than once will cause a new Job to be placed in the queue each time. This can cause the Worker to be offered a Job other than the one created when you run the above code. Since this can skew your results, consider removing Jobs in the queue each time. Refer to the SDK documentation for managing a Queue or a Job.
+<!-- LINKS -->
+[subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md
+[worker_registered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered
+[job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
+[offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued
+[offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/background-processing.md
Next, install the Azure Container Apps extension to the CLI.
```azurecli az extension add \
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
``` # [PowerShell](#tab/powershell) ```azurecli az extension add `
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
```
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr-azure-resource-manager.md
Next, install the Azure Container Apps extension to the CLI.
```azurecli az extension add \
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
``` # [PowerShell](#tab/powershell) ```azurecli az extension add `
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
```
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/microservices-dapr.md
Next, install the Azure Container Apps extension to the CLI.
```azurecli az extension add \
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
``` # [PowerShell](#tab/powershell) ```azurecli az extension add `
- --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl
+ --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.2-py2.py3-none-any.whl
```
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/scale-app.md
The following example shows how to create a memory scaling rule.
- In this example, the container app scales when memory usage exceeds 50%. - At a minimum, a single replica remains in memory for apps that scale based on memory utilization.
+## Azure Pipelines
+
+Azure Pipelines scaling allows your container app to scale in or out depending on the number of jobs in the Azure DevOps agent pool. With Azure Pipelines, your app can scale to zero, but you need [at least one agent registered in the pool to schedule additional agents](https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/). For more information regarding this scaler, see [KEDA Azure Pipelines scaler](https://keda.sh/docs/2.4/scalers/azure-pipelines/).
+
+The following example shows how to create an Azure Pipelines scaling rule.
+
+```json
+{
+ ...
+ "resources": {
+ ...
+ "properties": {
+ ...
+ "template": {
+ ...
+ "scale": {
+ "minReplicas": "0",
+ "maxReplicas": "10",
+ "rules": [{
+ "name": "azdo-agent-scaler",
+ "custom": {
+ "type": "azure-pipelines",
+ "metadata": {
+ "poolID": "<pool id>",
+ "targetPipelinesQueueLength": "1"
+ },
+ "auth": [
+ {
+ "secretRef": "<secret reference pat>",
+ "triggerParameter": "personalAccessToken"
+ },
+ {
+ "secretRef": "<secret reference Azure DevOps url>",
+ "triggerParameter": "organizationURL"
+ }
+ ]
+ }
+ }]
+ }
+ }
+ }
+ }
+}
+```
+
+In this example, the container app scales when at least one job is waiting in the pool queue.
## Considerations
container-instances Container Instances Using Azure Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-using-azure-container-registry.md
Title: Deploy container image from Azure Container Registry
-description: Learn how to deploy containers in Azure Container Instances by pulling container images from an Azure container registry.
+ Title: Deploy container image from Azure Container Registry using a service principal
+description: Learn how to deploy containers in Azure Container Instances by pulling container images from an Azure container registry using a service principal.
Last updated 07/02/2020
-# Deploy to Azure Container Instances from Azure Container Registry
+# Deploy to Azure Container Instances from Azure Container Registry using a service principal
-[Azure Container Registry](../container-registry/container-registry-intro.md) is an Azure-based, managed container registry service used to store private Docker container images. This article describes how to pull container images stored in an Azure container registry when deploying to Azure Container Instances. A recommended way to configure registry access is to create an Azure Active Directory service principal and password, and store the login credentials in an Azure key vault.
+[Azure Container Registry](../container-registry/container-registry-intro.md) is an Azure-based, managed container registry service used to store private Docker container images. This article describes how to pull container images stored in an Azure container registry when deploying to Azure Container Instances. One way to configure registry access is to create an Azure Active Directory service principal and password, and store the login credentials in an Azure key vault.
## Prerequisites
## Limitations
-* You can't authenticate to Azure Container Registry to pull images during container group deployment by using a [managed identity](container-instances-managed-identity.md) configured in the same container group.
* You can't pull images from [Azure Container Registry](../container-registry/container-registry-vnet.md) deployed into an Azure Virtual Network at this time. ## Configure registry authentication
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/using-azure-container-registry-mi.md
+
+ Title: Deploy container image from Azure Container Registry using a managed identity
+description: Learn how to deploy containers in Azure Container Instances by pulling container images from an Azure container registry using a managed identity.
++ Last updated : 11/11/2021+++
+# Deploy to Azure Container Instances from Azure Container Registry using a managed identity
+
+[Azure Container Registry][acr-overview] (ACR) is an Azure-based, managed container registry service used to store private Docker container images. This article describes how to pull container images stored in an Azure container registry when deploying to container groups with Azure Container Instances. One way to configure registry access is to create an Azure Active Directory managed identity.
+
+## Prerequisites
+
+**Azure container registry**: You need a premium SKU Azure container registry with at least one image. If you need to create a registry, see [Create a container registry using the Azure CLI][acr-get-started]. Be sure to take note of the registry's `id` and `loginServer` values, as shown in the sketch below.
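+
+As a hedged sketch (the registry name `myacr` and resource group `myResourceGroup` are placeholders), you could capture both values with `az acr show`:
+
+```azurecli-interactive
+# Store the registry's resource ID and login server in variables (hypothetical names)
+registryID=$(az acr show --resource-group myResourceGroup --name myacr --query id --output tsv)
+loginServer=$(az acr show --resource-group myResourceGroup --name myacr --query loginServer --output tsv)
+```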
+
+**Azure CLI**: The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally, or use the [Azure Cloud Shell][cloud-shell-bash].
+
+## Limitations
+
+> [!IMPORTANT]
+> Managed identity-authenticated container image pulls from ACR are not supported in Canada Central, South India, and West Central US at this time.
+
+* Virtual network-injected container groups don't support managed identity-authenticated image pulls with ACR.
+
+* Windows containers don't support managed identity-authenticated image pulls with ACR.
+
+* Container groups don't support pulling images from an Azure Container Registry using [private DNS zones][private-dns-zones].
+
+## Configure registry authentication
+
+Your container registry must have Trusted Services enabled. To find instructions on how to enable trusted services, see [Allow trusted services to securely access a network-restricted container registry][allow-access-trusted-services].
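+
+As a minimal sketch (assuming your registry is named `myacr` and that the `--allow-trusted-services` parameter of `az acr update` is available in your CLI version), you could toggle the setting from the CLI; see the linked article for the full guidance:
+
+```azurecli-interactive
+# Allow trusted Azure services to bypass the registry's network rules (hypothetical registry name)
+az acr update --resource-group myResourceGroup --name myacr --allow-trusted-services true
+```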
+
+## Create an identity
+
+Create an identity in your subscription using the [az identity create][az-identity-create] command. You can use the same resource group you used previously to create the container registry, or a different one.
+
+```azurecli-interactive
+az identity create --resource-group myResourceGroup --name myACRId
+```
+
+To configure the identity in the following steps, use the [az identity show][az-identity-show] command to obtain and store the identity's resource ID and service principal ID in variables.
+
+```azurecli-interactive
+# Get resource ID of the user-assigned identity
+userID=$(az identity show --resource-group myResourceGroup --name myACRId --query id --output tsv)
+# Get service principal ID of the user-assigned identity
+spID=$(az identity show --resource-group myResourceGroup --name myACRId --query principalId --output tsv)
+```
+
+You'll need the identity's resource ID when you deploy the container group. To show the value:
+
+```bash
+echo $userID
+```
+
+The resource ID is of the form:
+
+```bash
+/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId
+```
+
+You'll also need the service principal ID to grant the managed identity access to your container registry. To show the value:
+
+```bash
+echo $spID
+```
+
+The service principal ID is of the form:
+
+```bash
+xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx
+```
+
+## Grant the identity a role assignment
+
+In order for your identity to access your container registry, you must grant it a role assignment. Use the following command to grant the `acrpull` role to the identity you've just created, making sure to provide your registry's resource ID and the service principal ID obtained earlier:
+
+```azurecli-interactive
+az role assignment create --assignee $spID --scope <registry-id> --role acrpull
+```
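+
+Optionally, you can confirm the assignment afterwards. This is a hedged sketch that assumes you stored the registry's resource ID in a `$registryID` variable (for example, with `az acr show` as in the prerequisites):
+
+```azurecli-interactive
+# List role definitions assigned to the identity at the registry scope; expect AcrPull
+az role assignment list --assignee $spID --scope $registryID --query "[].roleDefinitionName" --output tsv
+```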
+
+## Deploy using an Azure Resource Manager (ARM) template
+
+Start by copying the following JSON into a new file named `azuredeploy.json`. In Azure Cloud Shell, you can use Visual Studio Code to create the file in your working directory:
+
+```bash
+code azuredeploy.json
+```
+
+You can specify the properties of your Azure container registry in an ARM template by including the `imageRegistryCredentials` property in the container group definition. For example, you can reference the user-assigned managed identity in the registry credentials:
+
+> [!NOTE]
+> This is not a comprehensive ARM template, but rather an example of what the `resources` section of a complete template would look like.
+
+```JSON
+{
+ "type": "Microsoft.ContainerInstance/containerGroups",
+ "apiVersion": "2021-09-01",
+ "name": "myContainerGroup",
+ "location": "norwayeast",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId": {}
+ }
+ },
+ "properties": {
+ "containers": [
+ {
+ "name": "mycontainer",
+ "properties": {
+ "image": "myacr.azurecr.io/hello-world:latest",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGB": 1
+ }
+ }
+ }
+ }
+ ],
+ "imageRegistryCredentials": [
+ {
+ "server":"myacr.azurecr.io",
+ "identity":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId"
+ }
+ ],
+ "ipAddress": {
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "type": "public"
+ },
+ "osType": "Linux"
+ }
+ }
+```
+
+### Deploy the template
+
+Deploy your Resource Manager template with the following command:
+
+```azurecli-interactive
+az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json
+```
+
+## Deploy using the Azure CLI
+
+To deploy a container group using managed identity to authenticate image pulls via the Azure CLI, use the following command, making sure that your `<dns-label>` is globally unique:
+
+```azurecli-interactive
+az container create --name my-containergroup --resource-group myResourceGroup --image <loginServer>/hello-world:v1 --acr-identity $userID --assign-identity $userID --ports 80 --dns-name-label <dns-label>
+```
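+
+As an optional check (a sketch, not part of the original walkthrough), you can query the new container group's provisioning state and public FQDN:
+
+```azurecli-interactive
+# Show the provisioning state and public FQDN of the container group
+az container show --name my-containergroup --resource-group myResourceGroup --query "{state:provisioningState, fqdn:ipAddress.fqdn}" --output table
+```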
+
+## Clean up resources
+
+To remove all resources from your Azure subscription, delete the resource group:
+
+```azurecli-interactive
+az group delete --name myResourceGroup
+```
+
+## Next steps
+
+* [Learn how to deploy to Azure Container Instances from Azure Container Registry using a service principal][use-service-principal]
+
+<!-- Links Internal -->
+
+[use-service-principal]: ./container-instances-using-azure-container-registry.md
+[az-identity-show]: /cli/azure/identity#az_identity_show
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[acr-overview]: ../container-registry/container-registry-intro.md
+[acr-get-started]: ../container-registry/container-registry-get-started-azure-cli.md
+[private-dns-zones]: ../dns/private-dns-privatednszone.md
+[allow-access-trusted-services]: ../container-registry/allow-access-trusted-services.md
+
+<!-- Links External -->
+[cloud-shell-bash]: https://shell.azure.com/bash
container-registry Allow Access Trusted Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/allow-access-trusted-services.md
Title: Access network-restricted registry using trusted Azure service description: Enable a trusted Azure service instance to securely access a network-restricted container registry to pull or push images Previously updated : 05/19/2021++ Last updated : 01/26/2022
-# Allow trusted services to securely access a network-restricted container registry (preview)
+# Allow trusted services to securely access a network-restricted container registry
Azure Container Registry can allow select trusted Azure services to access a registry that's configured with network access rules. When trusted services are allowed, a trusted service instance can securely bypass the registry's network rules and perform operations such as pull or push images. This article explains how to enable and use trusted services with a network-restricted Azure container registry. Use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.18 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-Allowing registry access by trusted Azure services is a **preview** feature.
- ## Limitations
-* For registry access scenarios that need a managed identity, only a system-assigned identity may be used. User-assigned managed identities aren't currently supported.
+* Certain registry access scenarios with trusted services require a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Except where noted that a user-assigned managed identity is supported, only a system-assigned identity may be used.
* Allowing trusted services doesn't apply to a container registry configured with a [service endpoint](container-registry-vnet.md). The feature only affects registries that are restricted with a [private endpoint](container-registry-private-link.md) or that have [public IP access rules](container-registry-access-selected-networks.md) applied. ## About trusted services
Where indicated, access by the trusted service requires additional configuration
|Trusted service |Supported usage scenarios | Configure managed identity with RBAC role ||||
+| Azure Container Instances | [Authenticate with Azure Container Registry from Azure Container Instances](container-registry-auth-aci.md) | Yes, either system-assigned or user-assigned identity |
| Microsoft Defender for Cloud | Vulnerability scanning by [Microsoft Defender for container registries](scan-images-defender.md) | No | |ACR Tasks | [Access the parent registry or a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes | |Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-container.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes | |Azure Container Registry | [Import images](container-registry-import-images.md) to or from a network-restricted Azure container registry | No | > [!NOTE]
-> Curently, enabling the allow trusted services setting doesn't apply to certain other managed Azure services including App Service and Azure Container Instances.
+> Currently, enabling the allow trusted services setting doesn't apply to App Service.
## Allow trusted services - CLI
To disable or re-enable the setting in the portal:
Here's a typical workflow to enable an instance of a trusted service to access a network-restricted container registry. This workflow is needed when a service instance's managed identity is used to bypass the registry's network rules.
-1. Enable a system-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in an instance of one of the [trusted services](#trusted-services) for Azure Container Registry.
+1. Enable a managed identity in an instance of one of the [trusted services](#trusted-services) for Azure Container Registry.
1. Assign the identity an [Azure role](container-registry-roles.md) to your registry. For example, assign the ACRPull role to pull container images. 1. In the network-restricted registry, configure the setting to allow access by trusted services. 1. Use the identity's credentials to authenticate with the network-restricted registry.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
Title: What is Azure Cosmos DB Analytical Store?
+ Title: What is Azure Cosmos DB analytical store?
description: Learn about Azure Cosmos DB transactional (row-based) and analytical(column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store
Using Azure Synapse Link, you can now build no-ETL HTAP solutions by directly li
## Features of analytical store
-When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the change feed or ETL to sync the data.
+When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the Change Feed or ETL to sync the data.
## Column store for analytical workloads on operational data
The following image shows transactional row store vs. analytical column store in
## Decoupled performance for analytical workloads
-There is no impact on the performance of your transactional workloads due to analytical queries, as the analytical store is separate from the transactional store. Analytical store does not need separate request units (RUs) to be allocated.
+There's no impact on the performance of your transactional workloads due to analytical queries, as the analytical store is separate from the transactional store. Analytical store doesn't need separate request units (RUs) to be allocated.
## Auto-Sync
At the end of each execution of the automatic sync process, your transactional d
## Scalability & elasticity
-By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it is 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
+By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
## <a id="analytical-schema"></a>Automatically handle schema updates
The following constraints are applicable on the operational data in Azure Cosmos
* Sample scenarios: * If your document's first level has 2000 properties, only the first 1000 will be represented.
- * If your documents have 5 levels with 200 properties in each one, all properties will be represented.
- * If your documents have 10 levels with 400 properties in each one, only the 2 first levels will be fully represented in analytical store. Half of the third level will also be represented.
+ * If your documents have five levels with 200 properties in each one, all properties will be represented.
+ * If your documents have ten levels with 400 properties in each one, only the two first levels will be fully represented in analytical store. Half of the third level will also be represented.
-* The hypothetical document below contains 4 properties and 3 levels.
+* The hypothetical document below contains four properties and three levels.
* The levels are `root`, `myArray`, and the nested structure within the `myArray`.
- * The properties are `id`, `myArray`, `myArray.nested1` and `myArray.nested2`.
- * The analytical store representation will have 2 columns, `id` and `myArray`. You can use Spark or T-SQL functions to also expose the nested structures as columns.
+ * The properties are `id`, `myArray`, `myArray.nested1`, and `myArray.nested2`.
+ * The analytical store representation will have two columns, `id`, and `myArray`. You can use Spark or T-SQL functions to also expose the nested structures as columns.
```json
The following constraints are applicable on the operational data in Azure Cosmos
} ```
-* While JSON documents (and Cosmos DB collections/containers) are case sensitive from the uniqueness perspective, analytical store is not.
+* While JSON documents (and Cosmos DB collections/containers) are case-sensitive from the uniqueness perspective, analytical store isn't.
- * **In the same document:** Properties names in the same level should be unique when compared case insensitively. For example, the following JSON document has "Name" and "name" in the same level. While it's a valid JSON document, it doesn't satisfy the uniqueness constraint and hence will not be fully represented in the analytical store. In this example, "Name" and "name" are the same when compared in a case insensitive manner. Only `"Name": "fred"` will be represented in analytical store, because it is the first occurrence. And `"name": "john"` won't be represented at all.
+ * **In the same document:** Property names at the same level should be unique when compared case-insensitively. For example, the following JSON document has "Name" and "name" in the same level. While it's a valid JSON document, it doesn't satisfy the uniqueness constraint and hence won't be fully represented in the analytical store. In this example, "Name" and "name" are the same when compared in a case-insensitive manner. Only `"Name": "fred"` will be represented in analytical store, because it's the first occurrence. And `"name": "john"` won't be represented at all.
```json
The following constraints are applicable on the operational data in Azure Cosmos
* Documents with more properties than the initial schema will generate new columns in analytical store. * Columns can't be removed. * The deletion of all documents in a collection doesn't reset the analytical store schema.
- * There is not schema versioning. The last version inferred from transactional store is what you will see in analytical store.
+ * There's no schema versioning. The last version inferred from transactional store is what you'll see in analytical store.
* Currently Azure Synapse Spark can't read properties that contain some special characters in their names, listed below. Azure Synapse SQL serverless isn't affected.
- * : (Colon)
- * ` (Grave accent)
- * , (Comma)
- * ; (Semicolon)
- * {}
- * ()
- * \n
- * \t
- * = (Equal sign)
- * " (Quotation mark)
+ * `:`
+ * `` ` ``
+ * `,`
+ * `;`
+ * `{}`
+ * `()`
+ * `\n`
+ * `\t`
+ * `=`
+ * `"`
> [!NOTE] > White spaces are also listed in the Spark error message returned when you reach this limitation. But we have added a special treatment for white spaces, please check out more details in the items below. * If you have properties names using the characters listed above, the alternatives are: * Change your data model in advance to avoid these characters.
- * Since we currently we don't support schema reset, you can change your application to add a redundant property with a similar name, avoiding these characters.
+ * Since currently we don't support schema reset, you can change your application to add a redundant property with a similar name, avoiding these characters.
* Use Change Feed to create a materialized view of your container without these characters in properties names. * Use the `dropColumn` Spark option to ignore the affected columns and load all other columns into a DataFrame. The syntax is:
df = spark.read\
.load() ```
-* The following BSON datatypes are not supported and won't be represented in analytical store:
+* The following BSON datatypes aren't supported and won't be represented in analytical store:
* Decimal128 * Regular Expression * DB Pointer * JavaScript * Symbol
- * MinKey / MaxKey
+ * MinKey/MaxKey
-* When using DateTime strings that follows the ISO 8601 UTC standard, expect the following behavior:
+* When using DateTime strings that follow the ISO 8601 UTC standard, expect the following behavior:
* Spark pools in Azure Synapse will represent these columns as `string`. * SQL serverless pools in Azure Synapse will represent these columns as `varchar(8000)`.
There are two types of schema representation in the analytical store. These type
#### Full fidelity schema for SQL API accounts
-It is possible to use full fidelity Schema for SQL (Core) API accounts, instead of the default option, by setting the schema type when enabling Synapse Link on a Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
+It's possible to use full fidelity schema for SQL (Core) API accounts, instead of the default option, by setting the schema type when enabling Synapse Link on a Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
* This option is only valid for accounts that **don't** have Synapse Link already enabled. * It isn't possible to reset the schema representation type, from well-defined to full fidelity or vice-versa. * Currently Azure Cosmos DB API for MongoDB accounts aren't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
- * Currently this change can't be made through the Azure portal. All database accounts that have Synapse LinK enabled by the Azure portal will have the default schema representation type, well defined schema.
+ * Currently this change can't be made through the Azure portal. All database accounts that have Synapse Link enabled by the Azure portal will have the default schema representation type, well-defined schema.
The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell.
The well-defined schema representation creates a simple tabular representation o
* The first document defines the base schema and each property must always have the same type across all documents. The only exceptions are: * From null to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store. * From `float` to `integer`. All documents will be represented in analytical store.
- * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it is possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.
+ * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where **num** initial value was an integer and the second one was a float.
```SQL SELECT CAST (num as float) as num
FROM OPENROWSET(PROVIDER = 'CosmosDB',
WITH (num varchar(100)) AS [IntToFloat] ```
- * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the documents below: the first one defined the analytical store base schema. The second document, where `id` is `"2"`, **doesn't** have a well-defined schema since property `"code"` is a string and the first document has `"code"` as a number. In this case, the analytical store registers the data type of `"code"` as `integer` for lifetime of the container. The second document will still be included in analytical store, but its `"code"` property will not.
+ * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the documents below: the first one defined the analytical store base schema. The second document, where `id` is `"2"`, **doesn't** have a well-defined schema since property `"code"` is a string and the first document has `"code"` as a number. In this case, the analytical store registers the data type of `"code"` as `integer` for lifetime of the container. The second document will still be included in analytical store, but its `"code"` property won't.
* `{"id": "1", "code":123}` * `{"id": "2", "code": "123"}` > [!NOTE]
- > The condition above doesn't apply for null properties. For example, `{"a":123} and {"a":null}` is still well defined.
+ > The condition above doesn't apply for null properties. For example, `{"a":123} and {"a":null}` is still well-defined.
> [!NOTE] > The condition above doesn't change if you update `"code"` of document `"1"` to a string in your transactional store. In analytical store, `"code"` will be kept as `integer` since currently we don't support schema reset.
-* Array types must contain a single repeated type. For example, `{"a": ["str",12]}` is not a well-defined schema because the array contains a mix of integer and string types.
+* Array types must contain a single repeated type. For example, `{"a": ["str",12]}` isn't a well-defined schema because the array contains a mix of integer and string types.
> [!NOTE]
-> If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items will not be included in the analytical store.
+> If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items won't be included in the analytical store.
-* Expect different behavior in regard to different types in well defined schema:
+* Expect different behavior in regard to different types in well-defined schema:
* Spark pools in Azure Synapse will represent these values as `undefined`. * SQL serverless pools in Azure Synapse will represent these values as `NULL`.
The leaf property `streetNo` within the nested object `address` will be represen
**Data type to suffix map**
-Here is a map of all the property data types and their suffix representations in the analytical store:
+Here's a map of all the property data types and their suffix representations in the analytical store:
|Original data type |Suffix |Example | ||||
Here is a map of all the property data types and their suffix representations in
Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios, thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. Because analytical store is optimized for storage cost compared to the transactional store, you can retain much longer horizons of operational data for historical analysis.
-After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the 'Transactional Store Time to Live (Transactional TTL)' property to have records automatically deleted from the transactional store after a certain time period. Similarly, the 'Analytical Store Time To Live (Analytical TTL)' allows you to manage the lifecycle of data retained in the analytical store independent from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
+After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the transactional store Time-to-Live (TTTL) property to have records automatically deleted from the transactional store after a certain time period. Similarly, the analytical store Time-to-Live (ATTL) allows you to manage the lifecycle of data retained in the analytical store independently from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
+
+## Backup
+
+Currently analytical store doesn't support backup and restore, and your backup policy can't rely on it. For more information, check the limitations section of [this](synapse-link.md#limitations) document. While continuous backup mode isn't supported in database accounts with Synapse Link enabled, periodic backup mode is.
+
+With periodic backup mode and existing containers, you can:
+
+ ### Fully rebuild analytical store when TTTL >= ATTL
+
+ The original container is restored without analytical store. But you can enable it, and it will be rebuilt with all the data that exists in the container.
+
+ ### Partially rebuild analytical store when TTTL < ATTL
+
+The data that was only in analytical store isn't restored, but it will be kept available for queries as long as you keep the original container. Analytical store is only deleted when you delete the container. Your analytical queries in Azure Synapse Analytics can read data from both the original and restored containers' analytical stores. Example:
+
+ * Container `OnlineOrders` has TTTL set to one month and ATTL set for one year.
+ * When you restore it to `OnlineOrdersNew` and turn on analytical store to rebuild it, there will be only one month of data in both transactional and analytical store.
+ * Original container `OnlineOrders` isn't deleted and its analytical store is still available.
+ * New data is only ingested into `OnlineOrdersNew`.
+ * Analytical queries will do a UNION ALL from analytical stores while the original data is still relevant.
+
+If you want to delete the original container but don't want to lose its analytical store data, you can persist the analytical store of the original container in another Azure data service. Synapse Analytics has the capability to perform joins between data stored in different locations. An example: A Synapse Analytics query joins analytical store data with external tables located in Azure Blob Storage, Azure Data Lake Store, etc.
+
+It's important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data, and export them to any Azure data service, at no RU cost, we can't guarantee that such a snapshot can be used to feed data back into the transactional store. This process isn't supported.
-> [!NOTE]
->Currently analytical store doesn't support backup and restore. Your backup policy can't be planned relying on analytical store. For more information, check the limitations section of [this](synapse-link.md#limitations) document. It is important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data, at no RUs costs, we cannot guarantee the use of this snapshot to backfeed the transactional store. This process is not supported.
## Global Distribution
If you have a globally distributed Azure Cosmos DB account, after you enable ana
## Partitioning
-Analytical store partitioning is completely independent of partitioning in the transactional store. By default, data in analytical store is not partitioned. If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. To learn more, see the [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md) articles.
+Analytical store partitioning is completely independent of partitioning in the transactional store. By default, data in analytical store isn't partitioned. If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. To learn more, see the [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md) articles.
## Security
By decoupling the analytical storage system from the analytical compute system,
## <a id="analytical-store-pricing"></a> Pricing
-Analytical store follows a consumption-based pricing model where you are charged for:
+Analytical store follows a consumption-based pricing model where you're charged for:
-* Storage: the volume of the data retained in the analytical store every month including historical data as defined by Analytical TTL.
+* Storage: the volume of the data retained in the analytical store every month including historical data as defined by analytical TTL.
* Analytical write operations: the fully managed synchronization of operational data updates to the analytical store from the transactional store (auto-sync) * Analytical read operations: the read operations performed against the analytical store from Azure Synapse Analytics Spark pool and serverless SQL pool run times.
-Analytical store pricing is separate from the transaction store pricing model. There is no concept of provisioned RUs in the analytical store. See [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for full details on the pricing model for analytical store.
+Analytical store pricing is separate from the transaction store pricing model. There's no concept of provisioned RUs in the analytical store. See [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for full details on the pricing model for analytical store.
Data in the analytics store can only be accessed through Azure Synapse Link, which is done in the Azure Synapse Analytics runtimes: Azure Synapse Apache Spark pools and Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/) for full details on the pricing model to access data in analytical store.
Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](h
In order to get a high-level cost estimate to enable analytical store on an Azure Cosmos DB container, from the analytical store perspective, you can use the [Azure Cosmos DB Capacity planner](https://cosmos.azure.com/capacitycalculator/) and get an estimate of your analytical storage and write operations costs. Analytical read operations costs depend on the analytics workload characteristics, but as a high-level estimate, a scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, and results in a cost of $0.065. > [!NOTE]
-> Analytical store read operations estimates are not included in the Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1TB of data in analytical store, applying filters reduces the volume of data scanned and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a more finer estimate of analytical read operations. This estimate does not include the cost of Azure Synapse Analytics.
+> Analytical store read operations estimates aren't included in the Cosmos DB cost calculator since they are a function of your analytical workload. While the above estimate is for scanning 1 TB of data in analytical store, applying filters reduces the volume of data scanned, and this determines the exact number of analytical read operations given the consumption pricing model. A proof-of-concept around the analytical workload would provide a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics.
## <a id="analytical-ttl"></a> Analytical Time-to-Live (TTL)
If analytical store is enabled, inserts, updates, deletes to operational data ar
Analytical TTL on a container is set using the `AnalyticalStoreTimeToLiveInSeconds` property:
-* If the value is set to "0", missing (or set to null): the analytical store is disabled and no data is replicated from transactional store to analytical store
+* If the value is set to `0` or set to `null`: the analytical store is disabled and no data is replicated from transactional store to analytical store
-* If present and the value is set to "-1": the analytical store retains all historical data, irrespective of the retention of the data in the transactional store. This setting indicates that the analytical store has infinite retention of your operational data
+* If the value is set to `-1`: the analytical store retains all historical data, irrespective of the retention of the data in the transactional store. This setting indicates that the analytical store has infinite retention of your operational data
-* If present and the value is set to some positive number "n": items will expire from the analytical store "n" seconds after their last modified time in the transactional store. This setting can be leveraged if you want to retain your operational data for a limited period of time in the analytical store, irrespective of the retention of the data in the transactional store
+* If the value is set to a positive integer `n`: items will expire from the analytical store `n` seconds after their last modified time in the transactional store. This setting can be leveraged if you want to retain your operational data for a limited period of time in the analytical store, irrespective of the retention of the data in the transactional store
Some points to consider:
Some points to consider:
* While transactional TTL can be set at the container or item level, analytical TTL can only be set at the container level currently. * You can achieve longer retention of your operational data in the analytical store by setting analytical TTL >= transactional TTL at the container level. * The analytical store can be made to mirror the transactional store by setting analytical TTL = transactional TTL.
-* If you have analytical TTL bigger than transactional TTL, at some point in time you will have data that only exists in analytical store. This data is read only.
+* If you have analytical TTL bigger than transactional TTL, at some point in time you'll have data that only exists in analytical store. This data is read only.
How to enable analytical store on a container:
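For illustration only, here's a hedged Azure CLI sketch of enabling analytical store on a new SQL API container, assuming the `--analytical-storage-ttl` parameter of `az cosmosdb sql container create` (the account, database, and container names are placeholders):

```azurecli
# Create a container with analytical store enabled and infinite analytical retention (-1)
az cosmosdb sql container create \
    --resource-group myResourceGroup \
    --account-name myCosmosAccount \
    --database-name myDatabase \
    --name myContainer \
    --partition-key-path "/id" \
    --analytical-storage-ttl -1
```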
To learn more, see the following docs:
* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md)
-* Checkout the learn module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
+* Check out the learn module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/modeling-data.md
While schema-free databases, like Azure Cosmos DB, make it super easy to store a
How is data going to be stored? How is your application going to retrieve and query data? Is your application read-heavy, or write-heavy?
-After reading this article, you will be able to answer the following questions:
+After reading this article, you'll be able to answer the following questions:
* What is data modeling and why should I care? * How is modeling data in Azure Cosmos DB different to a relational database?
For comparison, let's first see how we might model data in a relational database
:::image type="content" source="./media/sql-api-modeling-data/relational-data-model.png" alt-text="Relational database model" border="false":::
-When working with relational databases, the strategy is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person can have multiple contact detail records, as well as multiple address records. Contact details can be further broken down by further extracting common fields like a type. The same applies to address, each record can be of type *Home* or *Business*.
+When working with relational databases, the strategy is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records, as well as multiple address records. Contact details can be further broken down by further extracting common fields like a type. The same applies to address, each record can be of type *Home* or *Business*.
The guiding premise when normalizing data is to **avoid storing redundant data** on each record and rather refer to data. In this example, to read a person, with all their contact details and addresses, you need to use JOINS to effectively compose back (or denormalize) your data at run time.
Now let's take a look at how we would model the same data as a self-contained en
} ```
-Using the approach above we have **denormalized** the person record, by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document.
+Using the approach above we've **denormalized** the person record, by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document.
In addition, because we're not confined to a fixed schema we have the flexibility to do things like having contact details of different shapes entirely. Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating a person record, with their contact details and addresses, is also a **single write operation** against a single item.
In general, use embedded data models when:
* There are **contained** relationships between entities. * There are **one-to-few** relationships between entities.
-* There is embedded data that **changes infrequently**.
-* There is embedded data that will not grow **without bound**.
-* There is embedded data that is **queried frequently together**.
+* There's embedded data that **changes infrequently**.
+* There's embedded data that will not grow **without bound**.
+* There's embedded data that is **queried frequently together**.
> [!NOTE] > Typically denormalized data models provide better **read** performance.
Take this JSON snippet.
} ```
-This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there is no (practical) limit to the number of comments any single post can have. This may become a problem as the size of the item could grow infinitely large.
+This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there's no (practical) limit to the number of comments any single post can have. This may become a problem as the size of the item could grow infinitely large.
As the size of the item grows the ability to transmit the data over the wire as well as reading and updating the item, at scale, will be impacted.
Comment items:
This model has the three most recent comments embedded in the post container, which is an array with a fixed set of attributes. The other comments are grouped in to batches of 100 comments and stored as separate items. The size of the batch was chosen as 100 because our fictitious application allows the user to load 100 comments at a time.
-Another case where embedding data is not a good idea is when the embedded data is used often across items and will change frequently.
+Another case where embedding data isn't a good idea is when the embedded data is used often across items and will change frequently.
Take this JSON snippet.
Take this JSON snippet.
} ```
-This could represent a person's stock portfolio. We have chosen to embed the stock information into each portfolio document. In an environment where related data is changing frequently, like a stock trading application, embedding data that changes frequently is going to mean that you are constantly updating each portfolio document every time a stock is traded.
+This could represent a person's stock portfolio. We have chosen to embed the stock information into each portfolio document. In an environment where related data is changing frequently, like a stock trading application, embedding data that changes frequently is going to mean that you're constantly updating each portfolio document every time a stock is traded.
Stock *zaza* may be traded many hundreds of times in a single day and thousands of users could have *zaza* on their portfolio. With a data model like the above we would have to update many thousands of portfolio documents many times every day leading to a system that won't scale well. ## Referencing data
-Embedding data works nicely for many cases but there are scenarios when denormalizing your data will cause more problems than it is worth. So what do we do now?
+Embedding data works nicely for many cases but there are scenarios when denormalizing your data will cause more problems than it's worth. So what do we do now?
-Relational databases are not the only place where you can create relationships between entities. In a document database, you can have information in one document that relates to data in other documents. We do not recommend building systems that would be better suited to a relational database in Azure Cosmos DB, or any other document database, but simple relationships are fine and can be useful.
+Relational databases aren't the only place where you can create relationships between entities. In a document database, you may have information in one document that relates to data in other documents. We don't recommend building systems that would be better suited to a relational database in Azure Cosmos DB, or any other document database, but simple relationships are fine and may be useful.
In the JSON below we chose to use the example of a stock portfolio from earlier but this time we refer to the stock item on the portfolio instead of embedding it. This way, when the stock item changes frequently throughout the day the only document that needs to be updated is the single stock document.
An immediate downside to this approach though is if your application is required
### What about foreign keys?
-Because there is currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and will not be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or through the use of server-side triggers or stored procedures on Azure Cosmos DB.
+Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or through the use of server-side triggers or stored procedures on Azure Cosmos DB.
### When to reference
Joining documents:
This would work. However, loading either an author with their books, or loading a book with its author, would always require at least two additional queries against the database. One query to the joining document and then another query to fetch the actual document being joined.
-If all this join table is doing is gluing together two pieces of data, then why not drop it completely?
-Consider the following.
+If this join is only gluing together two pieces of data, then why not drop it completely?
+Consider the following example.
```json Author documents:
Book documents:
{"id": "b4", "name": "Deep Dive into Azure Cosmos DB", "authors": ["a2"]} ```
-Now, if I had an author, I immediately know which books they have written, and conversely if I had a book document loaded I would know the IDs of the author(s). This saves that intermediary query against the join table reducing the number of server round trips your application has to make.
+Now, if I had an author, I immediately know which books they've written, and conversely if I had a book document loaded I would know the IDs of the author(s). This saves that intermediary query against the join table reducing the number of server round trips your application has to make.
## Hybrid data models
-We've now looked embedding (or denormalizing) and referencing (or normalizing) data, each have their upsides and each have compromises as we have seen.
+We've now looked at embedding (or denormalizing) and referencing (or normalizing) data; each has its upsides and each has compromises, as we've seen.
It doesn't always have to be either-or; don't be scared to mix things up a little.
Book documents:
Here we've (mostly) followed the embedded model, where data from other entities are embedded in the top-level document, but other data is referenced.
-If you look at the book document, we can see a few interesting fields when we look at the array of authors. There is an `id` field that is the field we use to refer back to an author document, standard practice in a normalized model, but then we also have `name` and `thumbnailUrl`. We could have stuck with `id` and left the application to get any additional information it needed from the respective author document using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
+If you look at the book document, we can see a few interesting fields when we look at the array of authors. There's an `id` field that is the field we use to refer back to an author document, standard practice in a normalized model, but then we also have `name` and `thumbnailUrl`. We could have stuck with `id` and left the application to get any additional information it needed from the respective author document using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
Sure, if the author's name changed or they wanted to update their photo, we'd have to go and update every book they ever published, but for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision. The example also contains **pre-calculated aggregate** values to save expensive processing on a read operation: some of the data embedded in the author document is calculated at run-time. Every time a new book is published, a book document is created **and** the `countOfBooks` field is set to a calculated value based on the number of book documents that exist for that author. This optimization is a good fit for read-heavy systems, where we can afford to do computations on writes in order to optimize reads.
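For illustration, the hybrid documents described above might look something like the following sketch; `name`, `thumbnailUrl`, and `countOfBooks` come from the description, while the remaining values are made up.

A book document, with a denormalized author summary:

```json
{
    "id": "b1",
    "name": "Azure Cosmos DB 101",
    "authors": [
        {"id": "a1", "name": "Thomas Andersen", "thumbnailUrl": "https://example.com/authors/a1.png"}
    ]
}
```

An author document, with a pre-calculated aggregate maintained on every write:

```json
{
    "id": "a1",
    "name": "Thomas Andersen",
    "thumbnailUrl": "https://example.com/authors/a1.png",
    "countOfBooks": 3
}
```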
-The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores cannot do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures, that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
+The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures, that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
## Distinguishing between different document types
Review documents:
} ```
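When books and their reviews share a container, a simple way to tell the document types apart (a sketch, not necessarily the article's exact fields) is a `type` discriminator property on every document, which queries can then filter on:

```json
{"id": "b1", "type": "book", "name": "Azure Cosmos DB 101"}
{"id": "r1", "type": "review", "bookId": "b1", "stars": 5, "comment": "Great read"}
```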
-## Next steps
+## Data modeling for Azure Synapse Link and Azure Cosmos DB analytical store
+
+[Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+
+This integration happens through [Azure Cosmos DB analytical store](../analytical-store-introduction.md), a columnar representation of your transactional data that enables large-scale analytics without any impact on your transactional workloads. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data or impacting the performance of your transactional workloads. When you create a container with analytical store enabled, or when you enable analytical store on an existing container, all transactional inserts, updates, and deletes are synchronized with analytical store in near real time; no Change Feed or ETL jobs are required.
+
+With Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store at no request unit (RU) cost. Azure Synapse Analytics currently supports Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
+
+### Analytical store automatic schema inference
+
+While the Azure Cosmos DB transactional store holds row-oriented, semi-structured data, the analytical store has a columnar, structured format. The conversion is performed automatically for you, using the schema inference rules described [here](../analytical-store-introduction.md). There are limits in the conversion process: a maximum number of nested levels, a maximum number of properties, unsupported data types, and more.
+
+> [!NOTE]
+> In the context of analytical store, we consider the following structures as properties:
+> * JSON "elements" or "string-value pairs separated by a `:` ".
+> * JSON objects, delimited by `{` and `}`.
+> * JSON arrays, delimited by `[` and `]`.
+
+You can minimize the impact of the schema inference conversions, and maximize your analytical capabilities, by using the following techniques.
+
+### Normalization
+
+Normalization becomes meaningful since with Azure Synapse Link you can join between your containers, using T-SQL or Spark SQL. The expected benefits of normalization are:
+ * Smaller data footprint in both transactional and analytical store.
+ * Smaller transactions.
+ * Fewer properties per document.
+ * Data structures with fewer nested levels.
+
+Note that these last two factors, fewer properties and fewer levels, help the performance of your analytical queries and also decrease the chance that parts of your data won't be represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
+
+Another important factor for normalization is that SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. In other words, both analytical store and Synapse SQL serverless pools have a limit of 1000 properties.
+
+So what should you do, given that denormalization is an important data modeling technique for Azure Cosmos DB? The answer is to find the right balance between your transactional and analytical workloads.
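As a hedged sketch of that balance (the entity and property names are made up), the same order data can stay embedded in one document, which is convenient for transactional point reads but adds nesting levels and properties, or be split into flatter documents that Azure Synapse Link can join at analysis time with T-SQL or Spark SQL.

Embedded, with deeper nesting:

```json
{
    "id": "o1",
    "customer": {"id": "c1", "name": "Jane Doe", "address": {"city": "Seattle", "zip": "98101"}},
    "items": [{"sku": "sku-123", "quantity": 2, "price": 29.95}]
}
```

Normalized into flatter documents:

```json
{"id": "o1", "type": "order", "customerId": "c1", "total": 59.90}
{"id": "c1", "type": "customer", "name": "Jane Doe", "city": "Seattle", "zip": "98101"}
{"id": "i1", "type": "orderItem", "orderId": "o1", "sku": "sku-123", "quantity": 2, "price": 29.95}
```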
+
+### Partition key
+
+Your Azure Cosmos DB partition key (PK) isn't used in analytical store. And now you can use [analytical store custom partitioning](https://devblogs.microsoft.com/cosmosdb/custom-partitioning-azure-synapse-link/) to create copies of analytical store partitioned by any PK that you want. Because of this isolation, you can choose a PK for your transactional data with a focus on data ingestion and point reads, while cross-partition queries can be done with Azure Synapse Link. Let's see an example:
+
+In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume, and with that you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems since those are cross-partition queries. Those queries can hurt your transactional performance, since they use part of your throughput in RUs to run. But with Azure Synapse Link, you can run these analytical queries at no RU cost. The analytical store columnar format is optimized for analytical queries, and Azure Synapse Link leverages this characteristic to allow great performance with Azure Synapse Analytics runtimes.
+
+### Data types and property names
+
+The automatic schema inference rules article lists the supported data types. While an unsupported data type blocks representation in analytical store, supported data types may be processed differently by the Azure Synapse runtimes. One example: when using DateTime strings that follow the ISO 8601 UTC standard, Spark pools in Azure Synapse will represent these columns as string, and SQL serverless pools in Azure Synapse will represent them as varchar(8000).
+
+Another challenge is that not all characters are accepted by Azure Synapse Spark. While white spaces are accepted, characters like colon, grave accent, and comma aren't. Let's say that your document has a property named **"First Name, Last Name"**. This property will be represented in analytical store, and Synapse SQL serverless pool can read it without a problem. But because it exists in analytical store, Azure Synapse Spark can't read **any** data from that analytical store, including all the other properties. At the end of the day, you can't use Azure Synapse Spark when even one property uses an unsupported character in its name.
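For example (illustrative documents), a property name containing a comma blocks Azure Synapse Spark from reading the analytical store of that container:

```json
{"id": "1", "First Name, Last Name": "Thomas Andersen"}
```

Renaming the property so that it contains only letters and spaces keeps the document readable by both Azure Synapse Spark and serverless SQL pools:

```json
{"id": "1", "First Name Last Name": "Thomas Andersen"}
```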
+
+### Data flattening
+
+All properties in the root level of your Azure Cosmos DB data will be represented in analytical store as a column, and everything that sits in deeper levels of your document data model will be represented as JSON, also in nested structures. Nested structures demand extra processing from Azure Synapse runtimes to flatten the data into structured format, which may be a challenge in big data scenarios.
++
+The document below will have only two columns in analytical store, `id` and `contactDetails`. All other data, `email` and `phone`, will require extra processing through SQL functions to be individually read.
+
+```json
+
+{
+ "id": "1",
+ "contactDetails": [
+ {"email": "thomas@andersen.com"},
+ {"phone": "+1 555 555-5555", "extension": 5555}
+ ]
+}
+```
+
+The document below, a flattened version of the one above, will have four columns in analytical store: `id`, `email`, `phone`, and `extension`. All data is directly accessible as columns.
+
+```json
+
+{
+ "id": "1",
+ "email": "thomas@andersen.com",
+ "phone": "+1 555 555-5555",
+ "extension": 5555
+}
+```
+
+### Data tiering
+
+Azure Synapse Link allows you to reduce costs from the following perspectives:
+
+ * Fewer queries running in your transactional database.
+ * A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partition splits.
+ * Data tiering, since analytical TTL (attl) is independent from transactional TTL (tttl). You can keep your transactional data in transactional store for a few days, weeks, or months, and keep the data in analytical store for years or forever (see the sketch after this list). The analytical store columnar format brings natural data compression, from 50% up to 90%, and its cost per GB is ~10% of the actual transactional store price. Please check the [analytical store overview](../analytical-store-introduction.md) to read about the current backup limitations.
+ * No ETL jobs running in your environment, meaning that you don't need to provision RUs for them.
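As a sketch of those independent retention settings (the property names below are the ones exposed on the container resource definition; the values are illustrative, and how you set them depends on the management surface you use), transactional data could expire after 90 days while analytical data is retained indefinitely:

```json
{
    "defaultTtl": 7776000,
    "analyticalStorageTtl": -1
}
```

Here `defaultTtl` (the tttl) is 90 days expressed in seconds, and an `analyticalStorageTtl` (the attl) of -1 keeps the analytical copy forever.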
+
+### Controlled redundancy
+
+Controlled redundancy is a great alternative for situations when a data model already exists and can't be changed, but that existing data model doesn't fit well into analytical store because of automatic schema inference rules such as the limit on nested levels or the maximum number of properties. If this is your case, you can leverage [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for a Synapse Link-friendly data model. Let's see an example:
+
+#### Scenario
+
+Container `CustomersOrdersAndItems` is used to store online orders, including customer and item details: billing address, delivery address, delivery method, delivery status, item prices, and so on. Only the first 1000 properties are represented, and key information isn't included in analytical store, blocking Azure Synapse Link usage. The container has PBs of records, and it's not possible to change the application and remodel the data.
+
+Another perspective of the problem is the sheer data volume. Billions of rows are constantly used by the Analytics Department, which prevents them from using tttl for old data deletion. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase RU provisioning, which impacts costs. Transactional and analytical workloads compete for the same resources at the same time.
+
+What to do?
+
+#### Solution with Change Feed
+
+* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed, they normalize and flatten the data (a sketch of the resulting documents follows this list). Unnecessary information is removed from the data model, and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
+* These new containers have analytical store enabled and now the Analytics Department is using Synapse Analytics to read the data, reducing the RUs usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
+* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for another RU usage reduction, since there's a minimum of 10 RUs per GB in Azure Cosmos DB. Less data, fewer RUs.
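As a purely illustrative sketch (all container shapes, names, and values here are hypothetical), the flattened documents written to the new containers might look like this:

```json
{"id": "c1", "name": "Jane Doe", "billingCity": "Seattle", "deliveryCity": "Seattle"}
{"id": "o1", "customerId": "c1", "orderDate": "2022-01-15", "deliveryStatus": "Shipped", "total": 59.90}
{"id": "i1", "orderId": "o1", "sku": "sku-123", "quantity": 2, "price": 29.95}
```

Each document stays flat and well under the property limits, and the three containers can be joined again at analysis time through Azure Synapse Link.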
++
+## Takeaways
The biggest takeaway from this article is that data modeling in a schema-free world is as important as ever.
-Just as there is no single way to represent a piece of data on a screen, there is no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can leverage the flexibility of a schema-free database to embrace that change and evolve your data model easily.
+Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can leverage the flexibility of a schema-free database to embrace that change and evolve your data model easily.
+
+## Next steps
* To learn more about Azure Cosmos DB, refer to the service's [documentation](https://azure.microsoft.com/documentation/services/cosmos-db/) page.
Data Modeling and Partitioning - a Real-World Example](how-to-model-partition-ex
* See the learn module on how to [Model and partition your data in Azure Cosmos DB.](/learn/modules/model-partition-data-azure-cosmos-db/)
+* Configure and use [Azure Synapse Link for Azure Cosmos DB](../configure-synapse-link.md).
+ * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 09/15/2021 Last updated : 01/28/2022
Azure portal supports the following type of billing accounts:
- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/). -- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. You can have a maximum of 2000 subscriptions in an Enterprise Agreement.
+- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. You can have a maximum of 2000 subscriptions in an Enterprise Agreement. You can also have an unlimited number of enrollment accounts, effectively allowing an unlimited number of subscriptions.
- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise doesn't have a limit on the number of subscriptions. For more information, see [Get started with your billing account for Microsoft Customer Agreement](../understand/mca-overview.md).
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-management-hub.md
To override the generated Resource Manager template parameters when publishing f
### Triggers
-Triggers determine when a pipeline run should be kicked off. Currently triggers can be on a wall clock schedule, operate on a periodic interval, or depend on an event. For more information, learn about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution). In the management hub, you can create, edit, delete, or view the current state of a trigger.
+Triggers determine when a pipeline run should be kicked off. Currently triggers can be on a wall clock schedule, operate on a periodic interval, or depend on an event. For more information, learn about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). In the management hub, you can create, edit, delete, or view the current state of a trigger.
:::image type="content" source="media/author-management-hub/management-hub-triggers.png" alt-text="Screenshot that shows where to create, edit, delete, or view the current state of a trigger.":::
data-factory Built In Preinstalled Components Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
Previously updated : 05/14/2020 Last updated : 01/28/2022 # Built-in and preinstalled components on Azure-SSIS Integration Runtime
data-factory Compare Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compare-versions.md
The following table compares the features of Data Factory with the features of D
| Feature | Version 1 | Current version | | - | | |
-| Datasets | A named view of data that references the data that you want to use in your activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Azure Blob storage from which the activity should read the data.<br/><br/>**Availability** defines the processing window slicing model for the dataset (for example, hourly, daily, and so on). | Datasets are the same in the current version. However, you do not need to define **availability** schedules for datasets. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. For more information, see [Triggers](concepts-pipeline-execution-triggers.md#trigger-execution) and [Datasets](concepts-datasets-linked-services.md). |
+| Datasets | A named view of data that references the data that you want to use in your activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Azure Blob storage from which the activity should read the data.<br/><br/>**Availability** defines the processing window slicing model for the dataset (for example, hourly, daily, and so on). | Datasets are the same in the current version. However, you do not need to define **availability** schedules for datasets. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. For more information, see [Triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json) and [Datasets](concepts-datasets-linked-services.md). |
| Linked services | Linked services are much like connection strings, which define the connection information that's necessary for Data Factory to connect to external resources. | Linked services are the same as in Data Factory V1, but with a new **connectVia** property to utilize the Integration Runtime compute environment of the current version of Data Factory. For more information, see [Integration runtime in Azure Data Factory](concepts-integration-runtime.md) and [Linked service properties for Azure Blob storage](connector-azure-blob-storage.md#linked-service-properties). | | Pipelines | A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. You use startTime, endTime, and isPaused to schedule and run pipelines. | Pipelines are groups of activities that are performed on data. However, the scheduling of activities in the pipeline has been separated into new trigger resources. You can think of pipelines in the current version of Data Factory more as "workflow units" that you schedule separately via triggers. <br/><br/>Pipelines do not have "windows" of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused are no longer present in the current version of Data Factory. For more information, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md) and [Pipelines and activities](concepts-pipelines-activities.md). | | Activities | Activities define actions to perform on your data within a pipeline. Data movement (copy activity) and data transformation activities (such as Hive, Pig, and MapReduce) are supported. | In the current version of Data Factory, activities still are defined actions within a pipeline. The current version of Data Factory introduces new [control flow activities](concepts-pipelines-activities.md#control-flow-activities). You use these activities in a control flow (looping and branching). Data movement and data transformation activities that were supported in V1 are supported in the current version. You can define transformation activities without using datasets in the current version. |
in the current version, you can also monitor data factories by using [Azure Moni
## Next steps
-Learn how to create a data factory by following step-by-step instructions in the following quickstarts: [PowerShell](quickstart-create-data-factory-powershell.md), [.NET](quickstart-create-data-factory-dot-net.md), [Python](quickstart-create-data-factory-python.md), [REST API](quickstart-create-data-factory-rest-api.md).
+Learn how to create a data factory by following step-by-step instructions in the following quickstarts: [PowerShell](quickstart-create-data-factory-powershell.md), [.NET](quickstart-create-data-factory-dot-net.md), [Python](quickstart-create-data-factory-python.md), [REST API](quickstart-create-data-factory-rest-api.md).
data-factory Concepts Data Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-redundancy.md
Previously updated : 11/05/2020 Last updated : 01/27/2022 # **Azure Data Factory data redundancy**
-Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime and triggers) and monitoring data (pipeline, trigger, and activity runs).
+Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).
-In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you will be able to access your Azure Data Factory in the failover region.
+In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
-Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft will not be able to recover your Azure Data Factory data.
+Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.
> [!NOTE] > Microsoft-managed failover does not apply to self-hosted integration runtime (SHIR) since this infrastructure is typically customer-managed. If the SHIR is set up on Azure VM, then the recommendation is to leverage [Azure site recovery](../site-recovery/site-recovery-overview.md) for handling the [Azure VM failover](../site-recovery/azure-to-azure-architecture.md) to another region.
Due to data residency requirements in Brazil South, and Southeast Asia, Azure Da
## **Using source control in Azure Data Factory**
-To ensure that you are able to track and audit the changes made to your Azure data factory metadata, you should consider setting up source control for your Azure Data Factory. It will also enable you to access your metadata JSON files for pipelines, datasets, linked services, and trigger. Azure Data Factory enables you to work with different Git repository (Azure DevOps and GitHub).
+To ensure you can track and audit the changes made to your metadata, you should consider setting up source control for your Azure Data Factory. It will also enable you to access your metadata JSON files for pipelines, datasets, linked services, and triggers. Azure Data Factory enables you to work with different Git repository types (Azure DevOps and GitHub).
Learn how to set up [source control in Azure Data Factory](./source-control.md).
Azure Data Factory enables you to move data among data stores located on-premise
## See also - [Azure Regional Pairs](../availability-zones/cross-region-replication-azure.md)-- [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/)
+- [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/)
data-factory Concepts Datasets Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-datasets-linked-services.md
Previously updated : 09/09/2021 Last updated : 01/28/2022 # Datasets in Azure Data Factory and Azure Synapse Analytics
Last updated 09/09/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes what datasets are, how they are defined in JSON format, and how they are used in Azure Data Factory and Synapse pipelines.
+This article describes what datasets are, how they're defined in JSON format, and how they're used in Azure Data Factory and Synapse pipelines.
-If you are new to Data Factory, see [Introduction to Azure Data Factory](introduction.md) for an overview. For more information about Azure Synapse see [What is Azure Synapse](../synapse-analytics/overview-what-is.md)
+If you're new to Data Factory, see [Introduction to Azure Data Factory](introduction.md) for an overview. For more information about Azure Synapse, see [What is Azure Synapse](../synapse-analytics/overview-what-is.md).
## Overview
-A data factory or Synapse workspace can have one or more pipelines. A **pipeline** is a logical grouping of **activities** that together perform a task. The activities in a pipeline define actions to perform on your data. Now, a **dataset** is a named view of data that simply points or references the data you want to use in your **activities** as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Blob storage from which the activity should read the data.
+An Azure Data Factory or Synapse workspace can have one or more pipelines. A **pipeline** is a logical grouping of **activities** that together perform a task. The activities in a pipeline define actions to perform on your data. Now, a **dataset** is a named view of data that simply points or references the data you want to use in your **activities** as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Blob Storage from which the activity should read the data.
Before you create a dataset, you must create a [**linked service**](concepts-linked-services.md) to link your data store to the service. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way; the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
-Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
+Here's a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
The following diagram shows the relationships among pipeline, activity, dataset, and linked :::image type="content" source="media/concepts-datasets-linked-services/relationship-between-data-factory-entities.png" alt-text="Relationship between pipeline, activity, dataset, linked services":::
+## Create a dataset with UI
+
+# [Azure Data Factory](#tab/data-factory)
+
+To create a dataset with the Azure Data Factory Studio, select the Author tab (with the pencil icon), and then the plus sign icon, to choose **Dataset**.
++
+You'll see the new dataset window to choose any of the connectors available in Azure Data Factory, to set up an existing or new linked service.
++
+Next you'll be prompted to choose the dataset format.
++
+Finally, you can choose an existing linked service of the type you selected for the dataset, or create a new one if one isn't already defined.
++
+Once you create the dataset, you can use it within any pipelines in the Azure Data Factory.
+
+# [Synapse Analytics](#tab/synapse-analytics)
+
+To create a dataset with the Synapse Studio, select the Data tab, and then the plus sign icon, to choose **Integration dataset**.
++
+You'll see the new integration dataset window to choose any of the connectors available in Azure Synapse, to set up an existing or new linked service.
++
+Next you'll be prompted to choose the dataset format.
++
+Finally, you can choose an existing linked service of the type you selected for the dataset, or create a new one if one isn't already defined.
++
+Once you create the dataset, you can use it within any pipelines within the Synapse workspace.
++ ## Dataset JSON A dataset is defined in the following JSON format:
In Data Flow, datasets are used in source and sink transformations. The datasets
## Dataset type
-The service supports many different types of datasets, depending on the data stores you use. You can find the list of supported data stores from [Connector overview](connector-overview.md) article. Click a data store to learn how to create a linked service and a dataset for it.
+The service supports many different types of datasets, depending on the data stores you use. You can find the list of supported data stores in the [Connector overview](connector-overview.md) article. Select a data store to learn how to create a linked service and a dataset for it.
For example, for a Delimited Text dataset, the dataset type is set to **DelimitedText** as shown in the following JSON sample:
You can create datasets by using one of these tools or SDKs: [.NET API](quicksta
Here are some differences between datasets in Data Factory current version (and Azure Synapse), and the legacy Data Factory version 1: -- The external property is not supported in the current version. It's replaced by a [trigger](concepts-pipeline-execution-triggers.md).-- The policy and availability properties are not supported in the current version. The start time for a pipeline depends on [triggers](concepts-pipeline-execution-triggers.md).-- Scoped datasets (datasets defined in a pipeline) are not supported in the current version.
+- The external property isn't supported in the current version. It's replaced by a [trigger](concepts-pipeline-execution-triggers.md).
+- The policy and availability properties aren't supported in the current version. The start time for a pipeline depends on [triggers](concepts-pipeline-execution-triggers.md).
+- Scoped datasets (datasets defined in a pipeline) aren't supported in the current version.
## Next steps See the following tutorial for step-by-step instructions for creating pipelines and datasets by using one of these tools or SDKs.
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
The following diagram shows the relationships among pipeline, activity, dataset,
:::image type="content" source="media/concepts-datasets-linked-services/relationship-between-data-factory-entities.png" alt-text="Relationship between pipeline, activity, dataset, linked services":::
+## Linked service with UI
+
+# [Azure Data Factory](#tab/data-factory)
+
+To create a new linked service in Azure Data Factory Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
++
+After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
++
+# [Synapse Analytics](#tab/synapse-analytics)
+
+To create a new linked service in Synapse Studio, select the **Manage** tab and then **linked services**, where you can see any existing linked services you defined. Select **New** to create a new linked service.
++
+After selecting New to create a new linked service you will be able to choose any of the supported connectors and configure its details accordingly. Thereafter you can use the linked service in any pipelines you create.
+++++ ## Linked service JSON A linked service is defined in JSON format as follows:
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
Previously updated : 09/09/2021 Last updated : 01/27/2022
A _pipeline run_ in Azure Data Factory and Azure Synapse defines an instance of
Pipeline runs are typically instantiated by passing arguments to parameters that you define in the pipeline. You can execute a pipeline either manually or by using a _trigger_. This article provides details about both ways of executing a pipeline.
-## Manual execution (on-demand)
+## Create triggers with UI
+
+To manually trigger a pipeline or configure a new scheduled, tumbling window, storage event, or custom event trigger, select Add trigger at the top of the pipeline editor.
++
+If you choose to manually trigger the pipeline, it will execute immediately. Otherwise, if you choose New/Edit, you will be prompted with the add triggers window to either choose an existing trigger to edit or create a new trigger.
++
+You will see the trigger configuration window, allowing you to choose the trigger type.
++
+Read more about [scheduled](#schedule-trigger-with-json), [tumbling window](#tumbling-window-trigger), [storage event](#event-based-trigger), and [custom event](#event-based-trigger) triggers below.
++
+## Manual execution (on-demand) with JSON
The manual execution of a pipeline is also referred to as _on-demand_ execution.
For a complete sample, see [Quickstart: Create a data factory by using the .NET
> [!NOTE] > You can use the .NET SDK to invoke pipelines from Azure Functions, from your web services, and so on.
-## Trigger execution
+## Trigger execution with JSON
Triggers are another way that you can execute a pipeline run. Triggers represent a unit of processing that determines when a pipeline execution needs to be kicked off. Currently, the service supports three types of triggers:
Triggers are another way that you can execute a pipeline run. Triggers represent
- Event-based trigger: A trigger that responds to an event.
-Pipelines and triggers have a many-to-many relationship (except for the tumbling window trigger).Multiple triggers can kick off a single pipeline, or a single trigger can kick off multiple pipelines. In the following trigger definition, the **pipelines** property refers to a list of pipelines that are triggered by the particular trigger. The property definition includes values for the pipeline parameters.
+Pipelines and triggers have a many-to-many relationship (except for the tumbling window trigger). Multiple triggers can kick off a single pipeline, or a single trigger can kick off multiple pipelines. In the following trigger definition, the **pipelines** property refers to a list of pipelines that are triggered by the particular trigger. The property definition includes values for the pipeline parameters.
### Basic trigger definition ```json
Pipelines and triggers have a many-to-many relationship (except for the tumbling
} ```
-## Schedule trigger
+## Schedule trigger with JSON
A schedule trigger runs pipelines on a wall-clock schedule. This trigger supports periodic and advanced calendar options. For example, the trigger supports intervals like "weekly" or "Monday at 5:00 PM and Thursday at 9:00 PM." The schedule trigger is flexible because it's agnostic to the dataset pattern, and the trigger doesn't discern between time-series and non-time-series data. For more information about schedule triggers, and for examples, see [Create a trigger that runs a pipeline on a schedule](how-to-create-schedule-trigger.md).
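As a minimal sketch of a schedule trigger definition (the pipeline name and the specific schedule here are made up; see the linked article for the full schema and more examples), the following trigger would run a pipeline every Monday at 5:00 PM UTC:

```json
{
    "name": "WeeklyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2022-02-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": {
                    "weekDays": ["Monday"],
                    "hours": [17],
                    "minutes": [0]
                }
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                },
                "parameters": {}
            }
        ]
    }
}
```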
The following table provides a comparison of the tumbling window trigger and sch
## Event-based trigger
-An event-based trigger runs pipelines in response to an event. There are two flavors of event based triggers.
+An event-based trigger runs pipelines in response to an event. There are two flavors of event-based triggers.
* _Storage event trigger_ runs a pipeline against events happening in a Storage account, such as the arrival of a file, or the deletion of a file in Azure Blob Storage account.
-* _Custom event trigger_ processes and handles [custom topics](../event-grid/custom-topics.md) in Event Grid
+* _Custom event trigger_ processes and handles [custom topics](../event-grid/custom-topics.md) in Event Grid
For more information about event-based triggers, see [Storage Event Trigger](how-to-create-event-trigger.md) and [Custom Event Trigger](how-to-create-custom-event-trigger.md).
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the pipeline JSON.
## Schedule trigger scope
-These system variables can be referenced anywhere in the trigger JSON for triggers of type [ScheduleTrigger](concepts-pipeline-execution-triggers.md#schedule-trigger).
+These system variables can be referenced anywhere in the trigger JSON for triggers of type [ScheduleTrigger](concepts-pipeline-execution-triggers.md#schedule-trigger-with-json).
| Variable Name | Description | | | |
These system variables can be referenced anywhere in the trigger JSON for trigge
| Variable Name | Description | | |
-| @triggerBody().event.eventType | Type of events that triggered the Custom Event Trigger run. Event type is customer defined field and take on any values of string type. |
+| @triggerBody().event.eventType | Type of events that triggered the Custom Event Trigger run. Event type is a customer-defined field and can take any value of string type. |
| @triggerBody().event.subject | Subject of the custom event that caused the trigger to fire. | | @triggerBody().event.data._keyName_ | Data field in custom event is a free-form JSON blob, which customers can use to send messages and data. Please use data._keyName_ to reference each field. For example, @triggerBody().event.data.callback returns the value for the _callback_ field stored under _data_. | | @trigger().startTime | Time at which the trigger fired to invoke the pipeline run. |
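For context, here's a hedged sketch of a custom event as it might be published to the Event Grid custom topic (all values are illustrative). With this payload, `@triggerBody().event.eventType` resolves to `OrderCreated` and `@triggerBody().event.data.callback` resolves to the `callback` value under `data`:

```json
{
    "id": "9f4d6c3e-0000-0000-0000-000000000000",
    "subject": "orders/new",
    "eventType": "OrderCreated",
    "eventTime": "2022-01-28T10:00:00Z",
    "data": {
        "callback": "https://example.com/callback",
        "orderId": "12345"
    },
    "dataVersion": "1.0"
}
```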
data-factory Copy Activity Data Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-data-consistency.md
Previously updated : 09/09/2021 Last updated : 01/27/2022 # Data consistency verification in copy activity
When you move data from source to destination store, the copy activity provides
## Supported data stores and scenarios -- Data consistency verification is supported by all the connectors except FTP, sFTP, and HTTP.
+- Data consistency verification is supported by all the connectors except FTP, SFTP, HTTP, Snowflake, Office 365, and Azure Databricks Delta Lake.
- Data consistency verification is not supported in staging copy scenario. - When copying binary files, data consistency verification is only available when 'PreserveHierarchy' behavior is set in copy activity. - When copying multiple binary files in single copy activity with data consistency verification enabled, you have an option to either abort the copy activity or continue to copy the rest by enabling fault tolerance setting to skip inconsistent files.
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-features.md
- Previously updated : 09/29/2021 Last updated : 01/27/2022
Last updated 09/29/2021
This article outlines the copy activity performance optimization features that you can leverage in Azure Data Factory and Synapse pipelines.
+## Configuring performance features with UI
+
+When you select a Copy activity on the pipeline editor canvas and choose the Settings tab in the activity configuration area below the canvas, you will see options to configure all of the performance features detailed below.
++ ## Data Integration Units A Data Integration Unit is a measure that represents the power (a combination of CPU, memory, and network resource allocation) of a single unit within the service. Data Integration Unit only applies to [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime), but not [self-hosted integration runtime](concepts-integration-runtime.md#self-hosted-integration-runtime).
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
Previously updated : 10/22/2021 Last updated : 01/27/2022
There are two types of supported managed identities:
Managed identity provides the below benefits: -- [Store credential in Azure Key Vault](store-credentials-in-key-vault.md), in which case managed identity is used for Azure Key Vault authentication.
+- [Store credential in Azure Key Vault](store-credentials-in-key-vault.md), in which case managed identity is used for Azure Key Vault authentication.
- Access data stores or computes using managed identity authentication, including Azure Blob storage, Azure Data Explorer, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, REST, Databricks activity, Web activity, and more. Check the connector and activity articles for details.-- Managed identity is also used to encrypt/decrypt data and metadata using the customer managed key stored in Azure Key Vault, providing double encryption.
+- Managed identity is also used to encrypt/decrypt data and metadata using the customer-managed key stored in Azure Key Vault, providing double encryption.
## System-assigned managed identity
You can retrieve the managed identity from Azure portal or programmatically. The
#### Retrieve system-assigned managed identity using Azure portal
+# [Azure Data Factory](#tab/data-factory)
+You can find the managed identity information from Azure portal -> your data factory or Synapse workspace -> Properties.
++
+# [Synapse Analytics](#tab/synapse-analytics)
+ You can find the managed identity information from Azure portal -> your data factory or Synapse workspace -> Properties. +++ - Managed Identity Object ID - Managed Identity Tenant (only applicable for Azure Data Factory)
Call below API in the request:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}?api-version=2018-06-01 ```
-**Response**: You will get response like shown in below example. The "identity" section is populated accordingly.
+**Response**: You'll get a response like the one shown in the following example. The "identity" section is populated accordingly.
```json {
data-factory Data Migration Guidance Hdfs Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-hdfs-azure-storage.md
Previously updated : 8/30/2019 Last updated : 01/27/2022 # Use Azure Data Factory to migrate data from an on-premises Hadoop cluster to Azure Storage
Here's the estimated price based on our assumptions:
## Next steps -- [Copy files from multiple containers by using Azure Data Factory](solution-template-copy-files-multiple-containers.md)
+- [Copy files from multiple containers by using Azure Data Factory](solution-template-copy-files-multiple-containers.md)
data-factory Data Migration Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-overview.md
Previously updated : 7/30/2019 Last updated : 01/27/2022 # Use Azure Data Factory to migrate data from your data lake or data warehouse to Azure
data-factory Data Migration Guidance S3 Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-migration-guidance-s3-azure-storage.md
Previously updated : 8/04/2019 Last updated : 01/27/2022 # Use Azure Data Factory to migrate data from Amazon S3 to Azure Storage
Migrate data over private link:
- In this architecture, data migration is done over a private peering link between AWS Direct Connect and Azure Express Route such that data never traverses over public Internet. It requires use of AWS VPC and Azure Virtual network. - You need to install ADF self-hosted integration runtime on a Windows VM within your Azure virtual network to achieve this architecture. You can manually scale up your self-hosted IR VMs or scale out to multiple VMs (up to 4 nodes) to fully utilize your network and storage IOPS/bandwidth. -- If it is acceptable to transfer data over HTTPS but you want to lock down network access to source S3 to a specific IP range, you can adopt a variation of this architecture by removing AWS VPC and replacing private link with HTTPS. You will want to keep Azure Virtual and self-hosted IR on Azure VM so you can have a static publicly routable IP for filtering purpose. - Both initial snapshot data migration and delta data migration can be achieved using this architecture. ## Implementation best practices
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
Previously updated : 01/15/2018 Last updated : 01/27/2022
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-custom-event-trigger.md
As of today custom event trigger supports a __subset__ of [advanced filtering op
* StringIn * StringNotIn
-Click **+New** to add new filter conditions.
+Select **+New** to add new filter conditions.
-Additionally, custom event triggers obey the [same limitations as event grid](../event-grid/event-filtering.md#limitations), including:
+Additionally, custom event triggers obey the [same limitations as Event Grid](../event-grid/event-filtering.md#limitations), including:
* 5 advanced filters and 25 filter values across all the filters per custom event trigger * 512 characters per string value
The following table provides an overview of the schema elements that are related
| JSON element | Description | Type | Allowed values | Required | ||-||||
-| `scope` | The Azure Resource Manager resource ID of the event grid topic. | String | Azure Resource Manager ID | Yes |
+| `scope` | The Azure Resource Manager resource ID of the Event Grid topic. | String | Azure Resource Manager ID | Yes |
| `events` | The type of events that cause this trigger to fire. | Array of strings | | Yes, at least one value is expected. | | `subjectBeginsWith` | The `subject` field must begin with the provided pattern for the trigger to fire. For example, _factories_ only fire the trigger for event subjects that start with *factories*. | String | | No | | `subjectEndsWith` | The `subject` field must end with the provided pattern for the trigger to fire. | String | | No |
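As a hedged sketch of how these schema elements come together in a custom event trigger definition (the topic, event type, subject patterns, and pipeline name are all made up):

```json
{
    "name": "MyCustomEventTrigger",
    "properties": {
        "type": "CustomEventsTrigger",
        "typeProperties": {
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>",
            "events": ["OrderCreated"],
            "subjectBeginsWith": "orders",
            "subjectEndsWith": ".json"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```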
Azure Data Factory uses Azure role-based access control (RBAC) to prohibit unaut
To successfully create or update a custom event trigger, you need to sign in to Data Factory with an Azure account that has appropriate access. Otherwise, the operation will fail with an _Access Denied_ error.
-Data Factory doesn't require special permission to your Event Grid. You also do *not* need to assign special Azure RBAC permission to the Data Factory service principal for the operation.
+Data Factory doesn't require special permission to your Event Grid. You also do *not* need to assign special Azure RBAC role permission to the Data Factory service principal for the operation.
Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission on `/subscriptions/####/resourceGroups/####/providers/Microsoft.EventGrid/topics/someTopics`. ## Next steps
-* Get detailed information about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution).
+* Get detailed information about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
* Learn how to [reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md).
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
There are three noticeable call outs in the workflow related to Event triggering
## Next steps
-* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
+* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-schedule-trigger.md
This section shows you how to use Azure CLI to create, start, and monitor a sche
### Sample Code
-1. In your working direactory, create a JSON file named **MyTrigger.json** with the trigger's properties. For this example use the following content:
+1. In your working directory, create a JSON file named **MyTrigger.json** with the trigger's properties. For this example use the following content:
> [!IMPORTANT] > Before you save the JSON file, set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time.
The following table provides a high-level overview of the major schema elements
|: |: | | **startTime** | A Date-Time value. For simple schedules, the value of the **startTime** property applies to the first occurrence. For complex schedules, the trigger starts no sooner than the specified **startTime** value. <br> For UTC time zone, format is `'yyyy-MM-ddTHH:mm:ssZ'`, for other time zone, format is `'yyyy-MM-ddTHH:mm:ss'`. | | **endTime** | The end date and time for the trigger. The trigger doesn't execute after the specified end date and time. The value for the property can't be in the past. This property is optional. <br> For UTC time zone, format is `'yyyy-MM-ddTHH:mm:ssZ'`, for other time zone, format is `'yyyy-MM-ddTHH:mm:ss'`. |
-| **timeZone** | The time zone the trigger is created in. This setting impact **startTime**, **endTime**, and **schedule**. See [list of supported time zone](#time-zone-option) |
+| **timeZone** | The time zone the trigger is created in. This setting affects **startTime**, **endTime**, and **schedule**. See the [list of supported time zones](#time-zone-option). |
| **recurrence** | A recurrence object that specifies the recurrence rules for the trigger. The recurrence object supports the **frequency**, **interval**, **endTime**, **count**, and **schedule** elements. When a recurrence object is defined, the **frequency** element is required. The other elements of the recurrence object are optional. | | **frequency** | The unit of frequency at which the trigger recurs. The supported values include "minute," "hour," "day," "week," and "month." | | **interval** | A positive integer that denotes the interval for the **frequency** value, which determines how often the trigger runs. For example, if the **interval** is 3 and the **frequency** is "week," the trigger recurs every 3 weeks. |
The examples assume that the **interval** value is 1, and that the **frequency**
## Next steps -- For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).-- Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
+- For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
+- Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-tumbling-window-trigger.md
To monitor trigger runs and pipeline runs in the Azure portal, see [Monitor pipe
## Next steps
-* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
+* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
* [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md). * Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
In this document, we will primarily focus on learning fundamental concepts with
## Azure Data Factory UI and parameters
-If you are new to Azure Data Factory parameter usage in ADF user interface, please review [Data Factory UI for linked services with parameters](./parameterize-linked-services.md#ui-experience) and [Data Factory UI for metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md#data-factory-ui) for visual explanation.
+If you are new to Azure Data Factory parameter usage in the ADF user interface, review [Data Factory UI for linked services with parameters](./parameterize-linked-services.md#ui-experience) and [Data Factory UI for metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md#data-factory-ui) for a visual explanation.
## Parameter and expression concepts
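As a quick, hedged illustration of the concepts covered in this section, an expression can appear in a JSON value either on its own or through string interpolation; the parameter name `myNumber` below is only an assumed example:

```json
{
    "standaloneExpression": "@pipeline().parameters.myNumber",
    "stringInterpolation": "Answer is: @{pipeline().parameters.myNumber}"
}
```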
data-factory How To Fixed Width https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-fixed-width.md
Previously updated : 8/18/2019 Last updated : 01/27/2022
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Previously updated : 08/05/2020 Last updated : 01/28/2022 # How to run Self-Hosted Integration Runtime in Windows container
Currently we don't support below features when running Self-Hosted Integration R
### Next steps - Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).-- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
+- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-sqldb-to-cosmosdb.md
Previously updated : 04/29/2020 Last updated : 01/27/2022 # Migrate normalized database schema from Azure SQL Database to Azure Cosmos DB denormalized container
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-trigger-parameterization.md
This section shows you how to pass meta data information from trigger to pipelin
1. Go to the **Authoring Canvas** and edit a pipeline
-1. Click on the blank canvas to bring up pipeline settings. Do not select any activity. You may need to pull up the setting panel from the bottom of the canvas, as it may have been collapsed
+1. Select the blank canvas to bring up the pipeline settings. Don't select any activity. You may need to pull up the settings panel from the bottom of the canvas, as it may have been collapsed.
1. Select **Parameters** section and select **+ New** to add parameters
This section shows you how to pass meta data information from trigger to pipelin
1. Add triggers to the pipeline by selecting **+ Trigger**.
-1. Create or attach a trigger to the pipeline, and click **OK**
+1. Create or attach a trigger to the pipeline, and select **OK**
+1. On the following page, fill in the trigger metadata for each parameter. Use the format defined in [System Variable](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. For instance, here we assign the trigger run start time to *parameter_1*.
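For reference, a rough sketch of how that assignment might appear in the trigger's JSON definition, assuming a pipeline parameter named *parameter_1* (the pipeline name is a placeholder):

```json
{
    "pipelines": [
        {
            "pipelineReference": {
                "type": "PipelineReference",
                "referenceName": "<your pipeline name>"
            },
            "parameters": {
                "parameter_1": "@trigger().startTime"
            }
        }
    ]
}
```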
To use the values in pipeline, utilize parameters _@pipeline().parameters.parame
## Next steps
-For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
+For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
data-factory Monitor Schema Logs Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-schema-logs-events.md
The following lists of attributes are used for monitoring.
"start":"", "end":"", "status":"",
+ "location": "",
"properties": { "Parameters": {
The following lists of attributes are used for monitoring.
"ExecutionStart": "", "TriggerId": "", "SubscriptionId": ""
- }
+ },
+ "Predecessors": [
+ {
+ "Name": "",
+ "Id": "",
+ "InvokedByType": ""
+ }
+ ]
} } ```
The following lists of attributes are used for monitoring.
|**start**| String | The start time of the activity runs in timespan UTC format. | `2017-06-26T20:55:29.5007959Z`. |
|**end**| String | The end time of the activity runs in timespan UTC format. If the diagnostic log shows an activity has started but not yet ended, the property value is `1601-01-01T00:00:00Z`. | `2017-06-26T20:55:29.5007959Z` |
|**status**| String | The final status of the pipeline run. Possible property values are `Succeeded` and `Failed`. | `Succeeded`|
+|**location**| String | The Azure region of the pipeline run. | `eastasia`|
+|**predecessors**| String | The calling object of the pipeline along with ID. | `Manual`|
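Assembling the fragments above, a pipeline-run log record that includes the new **location** and **Predecessors** fields might look roughly like the following sketch (all values are illustrative only):

```json
{
    "start": "2017-06-26T20:55:29.5007959Z",
    "end": "2017-06-26T20:58:29.5007959Z",
    "status": "Succeeded",
    "location": "eastasia",
    "properties": {
        "Parameters": {},
        "SystemParameters": {
            "ExecutionStart": "2017-06-26T20:55:29.5007959Z",
            "TriggerId": "<trigger resource ID>",
            "SubscriptionId": "<subscription ID>"
        },
        "Predecessors": [
            {
                "Name": "Manual",
                "Id": "<run ID>",
                "InvokedByType": "Manual"
            }
        ]
    }
}
```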
### Trigger-run log attributes
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
The pipeline run grid contains the following columns:
| Annotations | Filterable tags associated with a pipeline |
| Parameters | Parameters for the pipeline run (name/value pairs) |
| Error | If the pipeline failed, the run error |
+| Run | **Original**, **Rerun**, or **Rerun (Latest)** |
| Run ID | ID of the pipeline run | You need to manually select the **Refresh** button to refresh the list of pipeline and activity runs. Autorefresh is currently not supported.
data-factory Tutorial Control Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow.md
Previously updated : 9/27/2019 Last updated : 01/28/2022 # Branching and chaining activities in a Data Factory pipeline
You did the following tasks in this tutorial:
You can now continue to the Concepts section for more information about Azure Data Factory. > [!div class="nextstepaction"]
->[Pipelines and activities](concepts-pipelines-activities.md)
+>[Pipelines and activities](concepts-pipelines-activities.md)
data-factory Tutorial Transform Data Hive Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-transform-data-hive-virtual-network.md
Previously updated : 01/22/2018 Last updated : 01/28/2022 # Transform data in Azure Virtual Network using Hive activity in Azure Data Factory
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-transform-data-spark-powershell.md
description: 'This tutorial provides step-by-step instructions for transforming
Previously updated : 01/22/2018 Last updated : 01/28/2022
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-overview.md
Previously updated : 07/22/2021 Last updated : 01/28/2022 #Customer intent: As an IT admin, I need to understand what Data Box is and how it works so I can use it to import on-premises data into Azure or export data from Azure.
Data Box is ideally suited to transfer data sizes larger than 40 TBs in scenario
Here are the various scenarios where Data Box can be used to import data to Azure.
+ - **One-time migration** - when a large amount of on-premises data is moved to Azure.
- Moving a media library from offline tapes into Azure to create an online media library.
- - Migrating your VM farm, SQL server, and applications to Azure
- - Moving historical data to Azure for in-depth analysis and reporting using HDInsight
+ - Migrating your VM farm, SQL server, and applications to Azure.
+ - Moving historical data to Azure for in-depth analysis and reporting using HDInsight.
- **Initial bulk transfer** - when an initial bulk transfer is done using Data Box (seed) followed by incremental transfers over the network.
- - For example, backup solutions partners such as Commvault and Data Box are used to move initial large historical backup to Azure. Once complete, the incremental data is transferred via network to Azure storage.
+ - For example, backup solution partners such as Commvault use Data Box to move the initial large historical backup to Azure. Once complete, the incremental data is transferred via network to Microsoft Azure Storage.
- **Periodic uploads** - when a large amount of data is generated periodically and needs to be moved to Azure. For example, in energy exploration, where video content is generated on oil rigs and windmill farms.
The Data Box includes the following components:
A typical import flow includes the following steps:
-1. **Order** - Create an order in the Azure portal, provide shipping information, and the destination Azure storage account for your data. If the device is available, Azure prepares and ships the device with a shipment tracking ID.
+1. **Order** - Create an order in the Azure portal, provide shipping information, and the destination storage account for your data. If the device is available, Azure prepares and ships the device with a shipment tracking ID.
2. **Receive** - Once the device is delivered, cable the device for network and power using the specified cables. (The power cable is included with the device. You'll need to procure the data cables.) Turn on and connect to the device. Configure the device network and mount shares on the host computer from where you want to copy the data.
Throughout this process, you are notified via email on all status changes. For m
A typical export flow includes the following steps:
-1. **Order** - Create an export order in the Azure portal, provide shipping information, and the source Azure storage account for your data. If the device is available, Azure prepares a device. Data is copied from your Azure Storage account to the Data Box. Once the data copy is complete, Microsoft ships the device with a shipment tracking ID.
+1. **Order** - Create an export order in the Azure portal, provide shipping information, and the source storage account for your data. If the device is available, Azure prepares a device. Data is copied from your storage account to the Data Box. Once the data copy is complete, Microsoft ships the device with a shipment tracking ID.
2. **Receive** - Once the device is delivered, cable the device for network and power using the specified cables. (The power cable is included with the device. You'll need to procure the data cables.) Turn on and connect to the device. Configure the device network and mount shares on the host computer to which you want to copy the data.
Throughout the export process, you are notified via email on all status changes.
## Region availability
-Data Box can transfer data based on the region in which service is deployed, the country or region you ship the device to, and the target Azure storage account where you transfer the data.
+Data Box can transfer data based on the region in which service is deployed, the country or region you ship the device to, and the target storage account where you transfer the data.
### For import
Data Box can transfer data based on the region in which service is deployed, the
- **Destination storage accounts** - The storage accounts that store the data are available in all Azure regions where the service is available.
+## Data resiliency
+
+The Data Box service is geographical in nature and has a single active deployment in one region within each country or commerce boundary. For data resiliency, a passive instance of the service is maintained in a different region, usually within the same country or commerce boundary. In a few cases, the paired region is outside the country or commerce boundary.
+
+In the extreme event of any Azure region being affected by a disaster, the Data Box service will be made available through the corresponding paired region. Both ongoing and new orders will be tracked and fulfilled through the service via the paired region. Failover is automatic, and is handled by Microsoft.
+
+For regions paired with a region within the same country or commerce boundary, no action is required. Microsoft is responsible for recovery, which could take up to 72 hours.
+
+For regions that don't have a paired region within the same geographic or commerce boundary, the customer will be notified to create a new Data Box order from a different, available region and copy their data to Azure in the new region. New orders would be required for the Brazil South and Southeast Asia regions.
+
+For more information, see [Business continuity and disaster recovery (BCDR): Azure Paired Regions](../best-practices-availability-paired-regions.md).
++ ## Next steps - Review the [Data Box system requirements](data-box-system-requirements.md).
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-properties.md
Authorization: BEARER SlAV32hkKG...
``` > [!NOTE]
-> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
+> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
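As a rough sketch only, assuming the `deliveryAttributeMappings` shape exposed by recent Event Grid API versions, a static `Authorization` header on an event subscription destination could be declared along these lines (the token value is a placeholder):

```json
{
    "deliveryAttributeMappings": [
        {
            "name": "Authorization",
            "type": "Static",
            "properties": {
                "value": "BEARER SlAV32hkKG...",
                "isSecret": true
            }
        }
    ]
}
```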
### Service Bus example

Azure Service Bus supports the use of the following message properties when sending single messages.
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/system-topics.md
In the past, a system topic was implicit and wasn't exposed for simplicity. Syst
## Lifecycle of system topics You can create a system topic in two ways: -- Create an [event subscription on an Azure resource as an extension resource](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update), which automatically creates a system topic with the name in the format: `<Azure resource name>-<GUID>`. The system topic created in this way is automatically deleted when the last event subscription for the topic is deleted.
+- Create an [event subscription on an Azure resource as an extension resource](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update), which automatically creates a system topic with the name in the format: `<Azure resource name>-<GUID>`. The system topic created in this way is automatically deleted when the last event subscription for the topic is deleted.
- Create a system topic for an Azure resource, and then create an event subscription for that system topic. When you use this method, you can specify a name for the system topic. The system topic isn't deleted automatically when the last event subscription is deleted. You need to manually delete it. When you use the Azure portal, you're always using this method. When you create an event subscription using the [**Events** page of an Azure resource](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage), the system topic is created first and then the subscription for the topic is created. You can explicitly create a system topic first by using the [**Event Grid System Topics** page](create-view-manage-system-topics.md#create-a-system-topic) and then create a subscription for that topic.
-When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/version2021-12-01/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
+When you use [CLI](create-view-manage-system-topics-cli.md), [REST](/rest/api/eventgrid/controlplane-version2021-12-01/event-subscriptions/create-or-update), or [Azure Resource Manager template](create-view-manage-system-topics-arm.md), you can choose either of the above methods. We recommend that you create a system topic first and then create a subscription on the topic, as it's the latest way of creating system topics.
### Failure to create system topics The system topic creation fails if you have set up Azure policies in such a way that the Event Grid service can't create it. For example, you may have a policy that allows creation of only certain types of resources (for example: Azure Storage, Azure Event Hubs, and so on.) in the subscription.
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/whats-new.md
This release corresponds to REST API version 2021-06-01-preview, which includes
- This release corresponds to the `2019-06-01` API version. - It adds support to the following new functionalities: * [Domains](event-domains.md)
- * Pagination and search filter for resources list operations. For an example, see [Topics - List By Subscription](/rest/api/eventgrid/version2021-06-01-preview/partner-namespaces/list-by-subscription).
+ * Pagination and search filter for resources list operations. For an example, see [Topics - List By Subscription](/rest/api/eventgrid/controlplane-version2021-06-01-preview/partner-namespaces/list-by-subscription).
* [Service Bus queue as destination](handler-service-bus.md) * [Advanced filtering](event-filtering.md#advanced-filtering) ## 4.1.0-preview (2019-03) - This release corresponds to the 2019-02-01-preview API version. - It adds support to the following new functionalities:
- * Pagination and search filter for resources list operations. For an example, see [Topics - List By Subscription](/rest/api/eventgrid/version2021-06-01-preview/partner-namespaces/list-by-subscription).
+ * Pagination and search filter for resources list operations. For an example, see [Topics - List By Subscription](/rest/api/eventgrid/controlplane-version2021-06-01-preview/partner-namespaces/list-by-subscription).
* [Manual create/delete of domain topics](how-to-event-domains.md) * [Service Bus Queue as destination](handler-service-bus.md)
firewall Dns Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/dns-settings.md
Previously updated : 11/23/2021 Last updated : 01/28/2022
If all DNS servers are unavailable, there's no fallback to another DNS server.
### Health checks
-DNS proxy performs five-second health check loops for as long as the upstream servers report as unhealthy. Once an upstream server is considered healthy, the firewall stops health checks until the next error. When a healthy proxy returns an error during the exchange, the firewall selects another DNS server in the list.
+DNS proxy performs five-second health check loops for as long as the upstream servers report as unhealthy. The health checks are a recursive DNS query to the root name server. Once an upstream server is considered healthy, the firewall stops health checks until the next error. When a healthy proxy returns an error, the firewall selects another DNS server in the list.
## Next steps
firewall Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/service-tags.md
Previously updated : 4/5/2021 Last updated : 01/28/2022 # Azure Firewall service tags
-A service tag represents a group of IP address prefixes to help minimize complexity for security rule creation. You cannot create your own service tag, nor specify which IP addresses are included within a tag. Microsoft manages the address prefixes encompassed by the service tag, and automatically updates the service tag as addresses change.
+A service tag represents a group of IP address prefixes to help minimize complexity for security rule creation. You can't create your own service tag, nor specify which IP addresses are included within a tag. Microsoft manages the address prefixes encompassed by the service tag, and automatically updates the service tag as addresses change.
Azure Firewall service tags can be used in the network rules destination field. You can use them in place of specific IP addresses.
See [Virtual network service tags](../virtual-network/service-tags-overview.md#a
## Configuration
-Azure Firewall supports configuration of Service Tags via PowerShell, Azure CLI, or the Azure portal.
+Azure Firewall supports configuration of service tags via PowerShell, Azure CLI, or the Azure portal.
### Configure via Azure PowerShell
$ResourceGroup = "AzureFirewall-RG"
$azfirewall = Get-AzFirewall -Name $FirewallName -ResourceGroupName $ResourceGroup ```
-Next, we must create a new Rule. For the Source or Destination, you can specify the text value of the Service Tag you wish to leverage, as mentioned earlier above in this article.
+Next, we must create a new rule. For the Destination, you can specify the text value of the service tag you wish to leverage, as mentioned previously.
````Create new Network Rules using Service Tags
$rule = New-AzFirewallNetworkRule -Name "AllowSQL" -Description "Allow access to Azure Database as a Service (SQL, MySQL, PostgreSQL, Datawarehouse)" -SourceAddress "10.0.0.0/16" -DestinationAddress Sql -DestinationPort 1433 -Protocol TCP
$ruleCollection = New-AzFirewallNetworkRuleCollection -Name "Data Collection" -Priority 1000 -Rule $rule -ActionType Allow
````
-Next, we must update the variable containing our Azure Firewall definition with the new Network Rules we created.
+Next, we must update the variable containing our Azure Firewall definition with the new network rules we created.
````Merge the new rules into our existing Azure Firewall variable
$azFirewall.NetworkRuleCollections.add($ruleCollection)
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/overview.md
Title: Overview of Azure Blueprints description: Understand how the Azure Blueprints service enables you to create, define, and deploy artifacts in your Azure environment. Previously updated : 06/21/2021 Last updated : 01/28/2022 # What is Azure Blueprints?
as artifacts:
|Policy Assignment | Subscription, Resource Group | Allows assignment of a policy or initiative to the subscription the blueprint is assigned to. The policy or initiative must be within the scope of the blueprint definition location. If the policy or initiative has parameters, these parameters are assigned at creation of the blueprint or during blueprint assignment. | |Role Assignment | Subscription, Resource Group | Add an existing user or group to a built-in role to make sure the right people always have the right access to your resources. Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint. |
+> [!NOTE]
+> Each artifact must be 2 MB or less. If the artifact exceeds 2 MB, you'll get an HTTP 500 error (Internal Server Error).
+ ### Blueprint definition locations When creating a blueprint definition, you'll define where the blueprint is saved. Blueprints can be
hdinsight Troubleshoot Lost Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/troubleshoot-lost-key-vault-access.md
Look at [Azure Key Vault availability and redundancy](../../key-vault/general/di
### KV accidental deletion
-* Restore deleted key on KV to auto recover. For more information, see [Recover Deleted Key](/rest/api/keyvault/recoverdeletedkey).
+* Restore the deleted key on KV to auto-recover. For more information, see [Recover Deleted Key](/rest/api/keyvault/keys/recover-deleted-key).
* Reach out to KV team to recover from accidental deletions. ### KV access policy changed
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-install-applications.md
The following list shows the published applications:
|[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop,HBase,Storm,Spark,Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. | |[Jumbune Enterprise-Accelerating BigData Analytics](https://azuremarketplace.microsoft.com/marketplace/apps/impetus-infotech-india-pvt-ltd.impetus_jumbune) |Hadoop, Spark |At a high level, Jumbune assists enterprises by, 1. Accelerating Tez, MapReduce & Spark engine based Hive, Java, Scala workload performance. 2. Proactive Hadoop Cluster Monitoring, 3. Establishing Data Quality management on distributed file system. | |[Kyligence Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence-cloud-saas) |Hadoop,HBase,Spark |Powered by Apache Kylin, Kyligence Enterprise Enables BI on Big Data. As an enterprise OLAP engine on Hadoop, Kyligence Enterprise empowers business analyst to architect BI on Hadoop with industry-standard data warehouse and BI methodology. |
-|[Starburst Presto for Azure HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/starburstdatainc1582306810515.starburst-enterprise-presto?tab=Overview) |Hadoop |Presto is a fast and scalable distributed SQL query engine. Architected for the separation of storage and compute, Presto is perfect for querying data in Azure Data Lake Storage, Azure Blob Storage, SQL and NoSQL databases, and other data sources. |
|[StreamSets Data Collector for HDInsight Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/streamsets.streamsets-data-collector-hdinsight) |Hadoop,HBase,Spark,Kafka |StreamSets Data Collector is a lightweight, powerful engine that streams data in real time. Use Data Collector to route and process data in your data streams. It comes with a 30 day trial license. | |[Trifacta Wrangler Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/trifactainc1587522950142.trifactaazure) |Hadoop, Spark,HBase |Trifacta Wrangler Enterprise for HDInsight supports enterprise-wide data wrangling for any scale of data. The cost of running Trifacta on Azure is a combination of Trifacta subscription costs plus the Azure infrastructure costs for the virtual machines. | |[Unifi Data Platform](https://www.crunchbase.com/organization/unifi-software) |Hadoop,HBase,Storm,Spark |The Unifi Data Platform is a seamlessly integrated suite of self-service data tools designed to empower the business user to tackle data challenges that drive incremental revenue, reduce costs or operational complexity. |
iot-hub Iot Hub Understand Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-understand-ip-address.md
Previously updated : 04/21/2021 Last updated : 01/28/2022
Last updated 04/21/2021
The IP address prefixes of IoT Hub public endpoints are published periodically under the _AzureIoTHub_ [service tag](../virtual-network/service-tags-overview.md). > [!NOTE]
-> For devices that are deployed inside of on-premises networks, Azure IoT Hub supports VNET connectivity integration with private endpoints. See [IoT Hub support for VNet](./virtual-network-support.md) for more information.
+> For devices that are deployed inside of on-premises networks, Azure IoT Hub supports VNET connectivity integration with private endpoints. See [IoT Hub support for VNet](./virtual-network-support.md) for more information.
You may use these IP address prefixes to control connectivity between IoT Hub and your devices or network assets in order to implement a variety of network isolation goals:
You may use these IP address prefixes to control connectivity between IoT Hub an
||--|-| | Ensure your devices and services communicate with IoT Hub endpoints only | [Device-to-cloud](./iot-hub-devguide-messaging.md), and [cloud-to-device](./iot-hub-devguide-messages-c2d.md) messaging, [direct methods](./iot-hub-devguide-direct-methods.md), [device and module twins](./iot-hub-devguide-device-twins.md) and [device streams](./iot-hub-device-streams-overview.md) | Use _AzureIoTHub_ and _EventHub_ service tags to discover IoT Hub, and Event Hub IP address prefixes and configure ALLOW rules on your devices' and services' firewall setting for those IP address prefixes accordingly; drop traffic to other destination IP addresses you do not want the devices or services to communicate with. | | Ensure your IoT Hub device endpoint receives connections only from your devices and network assets | [Device-to-cloud](./iot-hub-devguide-messaging.md), and [cloud-to-device](./iot-hub-devguide-messages-c2d.md) messaging, [direct methods](./iot-hub-devguide-direct-methods.md), [device and module twins](./iot-hub-devguide-device-twins.md) and [device streams](./iot-hub-device-streams-overview.md) | Use IoT Hub [IP filter feature](iot-hub-ip-filtering.md) to allow connections from your devices and network asset IP addresses (see [limitations](#limitations-and-workarounds) section). |
-| Ensure your routes' custom endpoint resources (storage accounts, service bus and event hubs) are reachable from your network assets only | [Message routing](./iot-hub-devguide-messages-d2c.md) | Follow your resource's guidance on restrict connectivity (for example via [firewall rules](../storage/common/storage-network-security.md), [private links](../private-link/private-endpoint-overview.md), or [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)); use _AzureIoTHub_ service tags to discover IoT Hub IP address prefixes and add ALLOW rules for those IP prefixes on your resource's firewall configuration (see [limitations](#limitations-and-workarounds) section). |
+| Ensure your routes' custom endpoint resources (storage accounts, service bus and event hubs) are reachable from your network assets only | [Message routing](./iot-hub-devguide-messages-d2c.md) | Follow your resource's guidance on restricting connectivity (for example via [private links](../private-link/private-endpoint-overview.md), [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), or [firewall rules](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)). For details on firewall restrictions, see the [limitations](#limitations-and-workarounds) section. |
## Best practices * The IP address of an IoT hub is subject to change without notice. To minimize disruption, use the IoT hub hostname (for example, myhub.azure-devices.net) for networking and firewall configuration whenever possible.
-* For constrained IoT systems without domain name resolution (DNS), IoT Hub IP address ranges are published periodically via service tags before changes taking effect. It is therefore important that you develop processes to regularly retrieve and use the latest service tags. This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). Note that Service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+* For constrained IoT systems without domain name resolution (DNS), IoT Hub IP address ranges are published periodically via service tags before changes take effect. It is therefore important that you develop processes to regularly retrieve and use the latest service tags. This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). Note that the service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until the discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
-* Use the *AzureIoTHub.[region name]* tag to identify IP prefixes used by IoT hub endpoints in a specific region. To account for datacenter disaster recovery, or [regional failover](iot-hub-ha-dr.md) ensure connectivity to IP prefixes of your IoT Hub's geo-pair region is also enabled.
+* Use the *AzureIoTHub.[region name]* tag to identify IP prefixes used by IoT Hub endpoints in a specific region. To account for datacenter disaster recovery, or [regional failover](iot-hub-ha-dr.md), ensure connectivity to IP prefixes of your IoT Hub's geo-pair region is also enabled.
-* Setting up firewall rules in IoT Hub may block off connectivity needed to run Azure CLI and PowerShell commands against your IoT Hub. To avoid this, you can add ALLOW rules for your clients' IP address prefixes to re-enable CLI or PowerShell clients to communicate with your IoT Hub.
+* Setting up firewall rules in IoT Hub may block off connectivity needed to run Azure CLI and PowerShell commands against your IoT Hub. To avoid this, you can add ALLOW rules for your clients' IP address prefixes to re-enable CLI or PowerShell clients to communicate with your IoT Hub.
* When adding ALLOW rules in your devices' firewall configuration, it is best to provide specific [ports used by applicable protocols](./iot-hub-devguide-protocols.md#port-numbers).
You may use these IP address prefixes to control connectivity between IoT Hub an
* The IoT Hub IP filter feature has a limit of 100 rules. This limit can be raised via requests through Azure Customer Support.
-* Your configured [IP filtering rules](iot-hub-ip-filtering.md) are only applied on your IoT Hub IP endpoints and not on your IoT hub's built-in Event Hub endpoint. If you also require IP filtering to be applied on the Event Hub where your messages are stored, you may do so bringing your own Event Hub resource where you can configure your desired IP filtering rules directly. To do so, you need to provision your own Event Hub resource and set up [message routing](./iot-hub-devguide-messages-d2c.md) to send your messages to that resource instead of your IoT Hub's built-in Event Hub. Finally, as discussed in the table above, to enable message routing functionality you also need to allow connectivity from IoT Hub's IP address prefixes to your provisioned Event Hub resource.
+* Your configured [IP filtering rules](iot-hub-ip-filtering.md) by default are only applied on your IoT Hub IP endpoints and not on your IoT hub's built-in Event Hub endpoint. If you also require IP filtering to be applied on the Event Hub where your messages are stored, you may do so by selecting the "Apply IP filters to the built-in endpoint" option in the IoT Hub Network settings. You may also do so by bringing your own Event Hub resource where you can configure your desired IP filtering rules directly. To do so, you need to provision your own Event Hub resource and set up [message routing](./iot-hub-devguide-messages-d2c.md) to send your messages to that resource instead of your IoT Hub's built-in Event Hub.
-* When routing to a storage account, allowing traffic from IoT Hub's IP address prefixes is only possible when the storage account is in a different region as your IoT Hub.
+* IoT Hub Service Tags only contain IP ranges for inbound connections. To limit firewall access on other Azure services to data coming from IoT Hub Message Routing, choose the appropriate "Allow Trusted Microsoft Services" option for your service (for example, [Event Hubs](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services), [Service Bus](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services), [Azure Storage](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services)).
## Support for IPv6
key-vault Manage With Cli2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/manage-with-cli2.md
For detailed steps on registering an application with Azure Active Directory you
To register an application in Azure Active Directory: ```azurecli
-az ad sp create-for-rbac -n "MyApp" --password "hVFkk965BuUv"
+az ad sp create-for-rbac -n "MyApp" --password "hVFkk965BuUv" --role Contributor
# If you don't specify a password, one will be created for you. ```
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
ms.suite: integration Previously updated : 10/05/2021 Last updated : 01/28/2022
Before you can add a trigger to a blank workflow, make sure that the workflow de
1. To save your work, on the designer toolbar, select **Save**.
- When you save a workflow for the first time, and that workflow starts with a Request trigger, the Logic Apps service automatically generates a URL for an endpoint that's created by the Request trigger. Later, when you test your workflow, you send a request to this URL, which fires the trigger and starts the workflow run.
+ When you save a workflow for the first time, and that workflow starts with a Request trigger, Azure Logic Apps automatically generates a URL for an endpoint that's created by the Request trigger. Later, when you test your workflow, you send a request to this URL, which fires the trigger and starts the workflow run.
### Add the Office 365 Outlook action
To find the fully qualified domain names (FQDNs) for connections, follow these s
## Trigger the workflow
-In this example, the workflow runs when the Request trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, the Logic Apps service automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
+In this example, the workflow runs when the Request trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, Azure Logic Apps automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
1. On the workflow designer, select the Request trigger that's named **When an HTTP request is received**.
To stop the trigger from firing the next time when the trigger condition is met,
1. Save your changes. This step resets your trigger's current state. 1. [Reactivate your workflow](#disable-enable-workflows).
+* When a workflow is disabled, you can still resubmit runs.
+ > [!NOTE] > The disable workflow and stop logic app operations have different effects. For more information, review > [Considerations for stopping logic apps](#considerations-stop-logic-apps).
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 09/13/2021 Last updated : 01/28/2022
In Visual Studio Code, you can view all the deployed logic apps in your Azure su
Stopping a logic app affects workflow instances in the following ways:
-* The Logic Apps service cancels all in-progress and pending runs immediately.
+* Azure Logic Apps cancels all in-progress and pending runs immediately.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* Triggers won't fire the next time that their conditions are met. However, trigger states remember the points where the logic app was stopped. So, if you restart the logic app, the triggers fire for all unprocessed items since the last run. To stop a trigger from firing on unprocessed items since the last run, clear the trigger state before you restart the logic app:
- 1. In Visual Studio Code, on the left toolbar, select the Azure icon.
+ 1. In Visual Studio Code, on the left toolbar, select the Azure icon.
1. In the **Azure: Logic Apps (Standard)** pane, expand your subscription, which shows all the deployed logic apps for that subscription. 1. Expand your logic app, and then expand the node that's named **Workflows**. 1. Open a workflow, and edit any part of that workflow's trigger.
Stopping a logic app affects workflow instances in the following ways:
Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service cancels in-progress and pending runs immediately, but doesn't run cleanup tasks on the storage used by the app.
+* Azure Logic Apps cancels in-progress and pending runs immediately, but doesn't run cleanup tasks on the storage used by the app.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. To refresh the metadata, you have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
The following tables list the values for a single workflow definition:
| Name | Limit | Notes | | - | -- | -- | | Workflows per region per subscription | 1,000 workflows ||
+| Workflow - Maximum name length | 43 characters | Previously 80 characters |
| Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. | | Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. | | Actions nesting depth | 8 actions | To extend this limit, you can use nested workflows as necessary. |
logic-apps Manage Logic Apps With Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-azure-portal.md
Title: Manage logic apps in the Azure portal
description: Edit, enable, disable, or delete logic apps by using the Azure portal. ms.suite: integration----++ Previously updated : 04/23/2021 Last updated : 01/28/2022 # Manage logic apps in the Azure portal
You can manage logic apps using the [Azure portal](https://portal.azure.com) or
To stop the trigger from firing the next time when the trigger condition is met, disable your logic app. In the Azure portal, you can enable or disable a [single logic app](#disable-enable-single-logic-app) or [multiple logic apps at the same time](#disable-or-enable-multiple-logic-apps). Disabling a logic app affects workflow instances in the following ways:
-* The Logic Apps services continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+* Azure Logic Apps continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* The trigger won't fire the next time that its conditions are met. However, the trigger state remembers the point at which the logic app was stopped. So, if you reactivate the logic app, the trigger fires for all the unprocessed items since the last run.
To stop the trigger from firing the next time when the trigger condition is met,
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
+> [!NOTE]
+> When a logic app workflow is disabled, you can still resubmit runs.
+ <a name="disable-or-enable-multiple-logic-apps"></a> ### Disable or enable multiple logic apps
To stop the trigger from firing the next time when the trigger condition is met,
You can delete a single logic app or multiple logic apps at the same time. Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the service works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/manage-logic-apps-with-visual-studio.md
ms.suite: integration
Previously updated : 04/23/2021 Last updated : 01/28/2022 # Manage logic apps with Visual Studio
To check the status and diagnose problems with logic app runs, you can review th
To stop the trigger from firing the next time when the trigger condition is met, disable your logic app. Disabling a logic app affects workflow instances in the following ways:
-* The Logic Apps service continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
+* Azure Logic Apps continues all in-progress and pending runs until they finish. Based on the volume or backlog, this process might take time to complete.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* The trigger won't fire the next time that its conditions are met.
In Cloud Explorer, open your logic app's shortcut menu, and select **Disable**.
![Disable your logic app in Cloud Explorer](./media/manage-logic-apps-with-visual-studio/disable-logic-app-cloud-explorer.png)
+> [!NOTE]
+> When a logic app workflow is disabled, you can still resubmit runs.
+ <a name="enable-logic-apps"></a> ### Enable logic apps
In Cloud Explorer, open your logic app's shortcut menu, and select **Enable**.
Deleting a logic app affects workflow instances in following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the runtime works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
machine-learning Concept Network Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-network-data-access.md
The following diagram shows the general flow of a data access call. In this exam
:::image type="content" source="./media/concept-network-data-access/data-access-flow.svg" alt-text="Diagram of the logic flow when accessing data":::
+### Scenarios and identities
+
+The following table lists what identities should be used for specific scenarios:
+
+| Scenario | Use workspace</br>Managed Service Identity (MSI) | Identity to use |
+|--|--|--|
+| Access from UI | Yes | Workspace MSI |
+| Access from UI | No | User's Identity |
+| Access from Job | Yes/No | Compute MSI |
+| Access from Notebook | Yes/No | User's identity |
+ ## Azure Storage Account When using an Azure Storage Account from Azure Machine Learning studio, you must add the managed identity of the workspace to the following Azure RBAC roles for the storage account:
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
Previously updated : 10/21/2021 Last updated : 01/28/2022
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
To ensure you securely connect to your Azure storage service, Azure Machine Lear
### Virtual network
-If your data storage account is in a **virtual network**, additional configuration steps are required to ensure Azure Machine Learning has access to your data. See [Network isolation & privacy](how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied when you create and register your datastore.
+If your data storage account is in a **virtual network**, additional configuration steps are required to ensure Azure Machine Learning has access to your data. See [Use Azure Machine Learning studio in a virtual network](how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied when you create and register your datastore.
### Access validation
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
In Azure Machine Learning studio, select __Compute__, __Inference clusters__, an
Updates to Azure Machine Learning components installed in an Azure Kubernetes Service cluster must be manually applied.
-You can apply these updates by detaching the cluster from the Azure Machine Learning workspace, and then reattaching the cluster to the workspace. If TLS is enabled in the cluster, you will need to supply the TLS/SSL certificate and private key when reattaching the cluster.
+You can apply these updates by detaching the cluster from the Azure Machine Learning workspace and reattaching the cluster to the workspace.
```python
compute_target = ComputeTarget(workspace=ws, name=clusterWorkspaceName)
compute_target.detach()
compute_target.wait_for_completion(show_output=True)
+```
+
+Before you can re-attach the cluster to your workspace, you need to first delete any `azureml-fe` related resources. If there is no active service in the cluster, you can delete your `azureml-fe` related resources with the following code.
+
+```shell
+kubectl delete sa azureml-fe
+kubectl delete clusterrole azureml-fe-role
+kubectl delete clusterrolebinding azureml-fe-binding
+kubectl delete svc azureml-fe
+kubectl delete svc azureml-fe-int-http
+kubectl delete deploy azureml-fe
+kubectl delete secret azuremlfessl
+kubectl delete cm azuremlfeconfig
+```
+If TLS is enabled in the cluster, you will need to supply the TLS/SSL certificate and private key when reattaching the cluster.
+
+```python
attach_config = AksCompute.attach_configuration(resource_group=resourceGroup, cluster_name=kubernetesClusterName)
-## If SSL is enabled.
+# If SSL is enabled.
attach_config.enable_ssl( ssl_cert_pem_file="cert.pem", ssl_key_pem_file="key.pem",
If you no longer have the TLS/SSL certificate and private key, or you are using
kubectl get secret/azuremlfessl -o yaml ```
->[!Note]
->Kubernetes stores the secrets in base-64 encoded format. You will need to base-64 decode the `cert.pem` and `key.pem` components of the secrets prior to providing them to `attach_config.enable_ssl`.
+> [!NOTE]
+> Kubernetes stores the secrets in Base64-encoded format. You will need to Base64-decode the `cert.pem` and `key.pem` components of the secrets prior to providing them to `attach_config.enable_ssl`.
### Webservice failures
Many webservice failures in AKS can be debugged by connecting to the cluster usi
az aks get-credentials -g <rg> -n <aks cluster name> ```
+### Delete azureml-fe related resources
+
+After detaching the cluster, if there is no active service in the cluster, delete the `azureml-fe` related resources before attaching again:
+
+```shell
+kubectl delete sa azureml-fe
+kubectl delete clusterrole azureml-fe-role
+kubectl delete clusterrolebinding azureml-fe-binding
+kubectl delete svc azureml-fe
+kubectl delete svc azureml-fe-int-http
+kubectl delete deploy azureml-fe
+kubectl delete secret azuremlfessl
+kubectl delete cm azuremlfeconfig
+```
+ ## Next steps * [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-image-labeling-projects.md
On the **Data** tab, you can see your dataset and review labeled data. Scroll th
### Details tab
-View details of your project. In this tab you can:
+View and change details of your project. In this tab you can:
* View project details and input datasets
-* Enable incremental refresh
+* Enable or disable incremental refresh at regular intervals or request an immediate refresh
* View details of the storage container used to store labeled outputs in your project * Add labels to your project * Edit instructions you give to your labels
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-text-labeling-projects.md
On the **Data** tab, you can see your dataset and review labeled data. Scroll th
### Details tab
-View details of your project. In this tab you can:
+View and change details of your project. In this tab you can:
* View project details and input datasets
-* Enable incremental refresh
+* Enable or disable **incremental refresh at regular intervals** or request an immediate refresh
* View details of the storage container used to store labeled outputs in your project * Add labels to your project * Edit instructions you give to your labelers
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-pipelines.md
You can also find the log files for specific runs in the pipeline run detail pag
1. Select a component in the preview pane. 1. In the right pane of the component, go to the **Outputs + logs** tab.
-1. Expand the right pane to view the **70_driver_log.txt** file in browser, or select the file to download the logs locally.
+1. Expand the right pane to view the **std_log.txt** file in browser, or select the file to download the logs locally.
> [!IMPORTANT] > To update a pipeline from the pipeline run details page, you must **clone** the pipeline run to a new pipeline draft. A pipeline run is a snapshot of the pipeline. It's similar to a log file, and cannot be altered.
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-studio-virtual-network.md
Use the following steps to enable access to data stored in Azure Blob and File s
|Storage account | Notes | |||
- |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.|
+ |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.|
|Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. | 1. **Configure datastores to use managed identity authentication**. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
Use the following steps to enable access to data stored in Azure Blob and File s
![Screenshot showing how to enable managed workspace identity](./media/how-to-enable-studio-virtual-network/enable-managed-identity.png)
+ 1. In the __Networking__ settings for the __Azure Storage Account__, add the Microsoft.MachineLearningServices/workspaces __Resource type__, and set the __Instance name__ to the workspace.
+ These steps add the workspace's managed identity as a __Reader__ to the new storage service using Azure RBAC. __Reader__ access allows the workspace to view the resource, but not make changes. ## Datastore: Azure Data Lake Storage Gen1
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-log-view-metrics.md
Log files are an essential resource for debugging the Azure ML workloads. After
:::image type="content" source="media/how-to-log-view-metrics/download-logs.png" alt-text="Screenshot of Output and logs section of a run.":::
-The tables below show the contents of the log files in the folders you'll see in this section.
+#### user_logs folder
-> [!NOTE]
-> You will not necessarily see every file for every run. For example, the 20_image_build_log*.txt only appears when a new image is built (e.g. when you change you environment).
+This folder contains information about the user-generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up. This file contains the `stdout` and `stderr` logs from your control script and training script, one per process. In the majority of cases, you will monitor the logs here.
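
As a small illustration (a hypothetical `train.py`, not part of the original article), anything the script writes to `stdout` or `stderr` ends up in this file:

```python
# train.py - hypothetical training script; its stdout/stderr output is
# captured in user_logs/std_log.txt for the run.
import sys

print("Starting training...")                                # stdout -> std_log.txt
print("epoch 1: loss=0.42")
print("warning: validation set is small", file=sys.stderr)   # stderr is captured too
print("Training complete.")
```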
-#### `azureml-logs` folder
+#### system_logs folder
-|File |Description |
-|||
-|20_image_build_log.txt | Docker image building log for the training environment, optional, one per run. Only applicable when updating your Environment. Otherwise AML will reuse cached image. If successful, contains image registry details for the corresponding image. |
-|55_azureml-execution-<node_id>.txt | stdout/stderr log of host tool, one per node. Pulls image to compute target. Note, this log only appears once you have secured compute resources. |
-|65_job_prep-<node_id>.txt | stdout/stderr log of job preparation script, one per node. Download your code to compute target and datastores (if requested). |
-|70_driver_log(_x).txt | stdout/stderr log from AML control script and customer training script, one per process. **Standard output from your script. This file is where your code's logs (for example, print statements) show up.** In the majority of cases, you will monitor the logs here. |
-|70_mpi_log.txt | MPI framework log, optional, one per run. Only for MPI run. |
-|75_job_post-<node_id>.txt | stdout/stderr log of job release script, one per node. Send logs, release the compute resources back to Azure. |
-|process_info.json | show which process is running on which node. |
-|process_status.json | show process status, such as if a process is not started, running, or completed. |
-
-#### `logs > azureml` folder
-
-|File |Description |
-|||
-|110_azureml.log | |
-|job_prep_azureml.log | system log for job preparation |
-|job_release_azureml.log | system log for job release |
-
-#### `logs > azureml > sidecar > node_id` folder
-
-When sidecar is enabled, job prep and job release scripts will be run within sidecar container. There is one folder for each node.
-
-|File |Description |
-|||
-|start_cms.txt | Log of process that starts when Sidecar Container starts |
-|prep_cmd.txt | Log for ContextManagers entered when `job_prep.py` is run (some of this content will be streamed to `azureml-logs/65-job_prep`) |
-|release_cmd.txt | Log for ComtextManagers exited when `job_release.py` is run |
+This folder contains the logs generated by Azure Machine Learning and it will be closed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
#### Other folders
For jobs training on multi-compute clusters, logs are present for each node IP.
Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs are not documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting. - ## Interactive logging session Interactive logging sessions are typically used in notebook environments. The method [Experiment.start_logging()](/python/api/azureml-core/azureml.core.experiment%28class%29#start-logging--args-kwargs-) starts an interactive logging session. Any metrics logged during the session are added to the run record in the experiment. The method [run.complete()](/python/api/azureml-core/azureml.core.run%28class%29#complete--set-status-true-) ends the session and marks the run as completed.
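
As a rough sketch (assuming an existing workspace configuration and the `azureml-core` package; the experiment name is hypothetical), an interactive logging session looks like this:

```python
from azureml.core import Experiment, Workspace

ws = Workspace.from_config()                                      # existing workspace
exp = Experiment(workspace=ws, name="interactive-logging-demo")   # hypothetical name

run = exp.start_logging()        # start the interactive session
run.log("accuracy", 0.91)        # metrics are added to the run record
run.log("learning_rate", 0.001)
run.complete()                   # end the session and mark the run as completed
```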
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
This code will print a URL to the experiment in the Azure Machine Learning studi
### <a name="inspect-log"></a> Inspect the log file
-In the studio, go to the experiment run (by selecting the previous URL output) followed by **Outputs + logs**. Select the `70_driver_log.txt` file. Scroll down through the log file until you see the following output:
+In the studio, go to the experiment run (by selecting the previous URL output) followed by **Outputs + logs**. Select the `std_log.txt` file. Scroll down through the log file until you see the following output:
```txt Processing 'input'.
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-hello-world.md
Here's a description of how the control script works:
1. In the page that opens, you'll see the run status. 1. When the status of the run is **Completed**, select **Output + logs** at the top of the page.
-1. Select **70_driver_log.txt** to view the output of your run.
+1. Select **std_log.txt** to view the output of your run.
## <a name="monitor"></a>Monitor your code in the cloud in the studio
Follow the link. At first, you'll see a status of **Queued** or **Preparing**.
Subsequent runs are much quicker (~15 seconds) as the docker image is cached on the compute. You can test this by resubmitting the code below after the first run has completed.
-Wait about 10 minutes. You'll see a message that the run has completed. Then use **Refresh** to see the status change to *Completed*. Once the job completes, go to the **Outputs + logs** tab. There you can see a `70_driver_log.txt` file that looks like this:
-
-> [!NOTE]
-> Your log files may be in a different place, depending on your region. Your log file may be located at **user_logs/std_log.txt** instead.
+Wait about 10 minutes. You'll see a message that the run has completed. Then use **Refresh** to see the status change to *Completed*. Once the job completes, go to the **Outputs + logs** tab. There you can see a `std_log.txt` file that looks like this:
```txt 1: [2020-08-04T22:15:44.407305] Entering context manager injector.
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
if __name__ == "__main__":
1. In the page that opens, you'll see the run status. The first time you run this script, Azure Machine Learning will build a new Docker image from your PyTorch environment. The whole run might take around 10 minutes to complete. This image will be reused in future runs to make them run much quicker. 1. You can view Docker build logs in the Azure Machine Learning studio. Select the **Outputs + logs** tab, and then select **20_image_build_log.txt**. 1. When the status of the run is **Completed**, select **Output + logs**.
-1. Select **70_driver_log.txt** to view the output of your run.
-
-> [!NOTE]
-> Your log files may be in a different place, depending on your region. Your log file may be located at **user_logs/std_log.txt** instead.
+1. Select **std_log.txt** to view the output of your run.
```txt Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ../data/cifar-10-python.tar.gz
marketplace Azure Ad Transactable Saas Landing Page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-ad-transactable-saas-landing-page.md
Previously updated : 10/25/2021 Last updated : 01/27/2022 # Build the landing page for your transactable SaaS offer in the commercial marketplace
The SaaS fulfillment APIs implement the [resolve](./partner-center-portal/pc-saa
## Read information from claims encoded in the ID token
-As part of the [OpenID Connect](../active-directory/develop/v2-protocols-oidc.md) flow, Azure AD adds an [ID token](../active-directory/develop/id-tokens.md) to the request when the buyer is sent to the landing page. This token contains multiple pieces of basic information that could be useful in the activation process, including the information seen in this table.
+As part of the [OpenID Connect](../active-directory/develop/v2-protocols-oidc.md) flow, use the tenant ID value you receive in place of `{tenant}` in `https://login.microsoftonline.com/{tenant}/v2.0`. Azure AD adds an [ID token](../active-directory/develop/id-tokens.md) to the request when the buyer is sent to the landing page. This token contains multiple pieces of basic information that could be useful in the activation process, including the information seen in this table.
| Value | Description | | | - |
As part of the [OpenID Connect](../active-directory/develop/v2-protocols-oidc.md
| email | User's email address. Note that this field may be empty. | | name | Human-readable value that identifies the subject of the token. In this case, it will be the buyer's name. | | oid | Identifier in the Microsoft identity system that uniquely identifies the user across applications. Microsoft Graph will return this value as the ID property for a given user account. |
-| tid | Identifier that represents the Azure AD tenant the buyer is from. In the case of an MSA identity, this will always be ``9188040d-6c67-4c5b-b112-36a304b66dad``. For more information, see the note in the next section: Use the Microsoft Graph API. |
+| tid | Identifier that represents the Azure AD tenant the buyer is from. In the case of an MSA identity, this will always be `9188040d-6c67-4c5b-b112-36a304b66dad`. For more information, see the note in the next section: Use the Microsoft Graph API. |
| sub | Identifier that uniquely identifies the user in this specific application. |
-|||
+|
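
As a non-authoritative sketch (standard library only; it skips the signature, issuer, audience, and expiry validation a real landing page must perform), the claims in the preceding table can be read from the token's payload segment like this:

```python
import base64
import json

def read_id_token_claims(id_token: str) -> dict:
    """Decode the payload segment of a JWT without validating it.

    For illustration only - a production landing page must validate the
    token before trusting any of its claims.
    """
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore Base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical usage with a token posted to the landing page:
# claims = read_id_token_claims(id_token)
# buyer_tenant = claims["tid"]
# buyer_object_id = claims["oid"]
# buyer_email = claims.get("email", "")
```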
## Use the Microsoft Graph API
marketplace Dynamics 365 Business Central Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-business-central-supplemental-content.md
This page lets you provide additional information to help us validate your offer
Indicate which release of Microsoft Dynamics 365 Business Central your solution targets: **Current**, **Next Major**, or **Next Minor**. This information lets us test your solution appropriately.
+> [!NOTE]
+> The target release isn't used anymore during the validation of Business Central solutions and this field is being removed from Partner Center soon. For more information about the release computation during the validation, see [Technical Validation Checklist](/dynamics365/business-central/dev-itpro/developer/devenv-checklist-submission).
+ ## Supported editions If your offer requires the Premium edition of Microsoft Dynamics 365 Business Central, select **Premium** only. Otherwise, select both **Essentials** and **Premium**.
If your offer requires the Premium edition of Microsoft Dynamics 365 Business Ce
Upload a PDF file that lists your offer's key usage scenarios. All scenarios listed here may be verified by our validation team before we approve your offer for the marketplace.
+> [!NOTE]
+> The key usage scenario PDF isn't used anymore during the validation of Business Central solutions and we are working on removing this field from Partner Center. For more information, see [Technical Validation FAQ](/dynamics365/business-central/dev-itpro/developer/devenv-checklist-submission-faq).
+ ## Test accounts If a test account is needed in order for our certification team to properly review your offer, upload a .pdf, .doc, or .docx file with your **Test accounts** information.
If your offer is an Add-on app, you must upload an **App tests automation** file
Select **Save draft**, then continue with review and publish in **Next steps** below.
+> [!NOTE]
+> The test app isn't used anymore during the validation of Business Central solutions and we are currently working on removing this field from Partner Center. For more information, see [Technical Validation FAQ](/dynamics365/business-central/dev-itpro/developer/devenv-checklist-submission-faq).
+ ## Next steps - [Review and publish](dynamics-365-review-publish.md)
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/saas-metered-billing.md
description: Learn about flexible billing models for SaaS offers using the comme
Previously updated : 10/15/2021 Last updated : 12/16/2021 # Metered billing for SaaS using the commercial marketplace metering service
-With the commercial marketplace metering service, you can create software as a service (SaaS) offers that are charged according to non-standard units. Before publishing a SaaS offer to the commercial marketplace, you define the billing dimensions such as bandwidth, tickets, or emails processed. Customers then pay according to their consumption of these dimensions, with your system informing Microsoft via the commercial marketplace metering service API of billable events as they occur.
+With the commercial marketplace metering service, you can create software as a service (SaaS) offers that are charged according to non-standard units. Before publishing a SaaS offer to the commercial marketplace, you define the billing dimensions such as bandwidth, tickets, or emails processed. Customers then pay according to their consumption of these dimensions, with your system informing Microsoft via the commercial marketplace metering service API of billable events as they occur.
## Prerequisites for metered billing For a SaaS offer to use metered billing, it must first: - Meet all of the offer requirements for a [sell through Microsoft offer](../plan-saas-offer.md#listing-options) as outlined in [Create a SaaS offer in the commercial marketplace](../create-new-saas-offer.md).-- Integrate with the [SaaS Fulfillment APIs](./pc-saas-fulfillment-apis.md) for customers to provision and connect to your offer. -- Be configured for the **flat rate** pricing model when charging customers for your service. Dimensions are an optional extension to the flat rate pricing model.
+- Integrate with the [SaaS Fulfillment APIs](./pc-saas-fulfillment-apis.md) for customers to provision and connect to your offer.
+- Be configured for the **flat rate** pricing model when charging customers for your service. Dimensions are an optional extension to the flat rate pricing model.
Then the SaaS offer can integrate with the [commercial marketplace metering service APIs](../marketplace-metering-service-apis.md) to inform Microsoft of billable events.
Then the SaaS offer can integrate with the [commercial marketplace metering serv
Understanding the offer hierarchy is important, when it comes to defining the offer along with its pricing models. -- Each SaaS offer is configured to sell either through Microsoft or not. Once an offer is published, this option cannot be changed.-- Each SaaS offer, configured to sell through Microsoft, can have one or more plans. A user subscribes to the SaaS offer, but it is purchased through Microsoft within the context of a plan.
+- Each SaaS offer is configured to sell either through Microsoft or not. Once an offer is published, this option cannot be changed.
+- Each SaaS offer, configured to sell through Microsoft, can have one or more plans. A user subscribes to the SaaS offer, but it is purchased through Microsoft within the context of a plan.
- Each plan has a pricing model associated with it: **flat rate** or **per user**. All plans in an offer must be associated with the same pricing model. For example, there cannot be an offer having plans for a flat-rate pricing model, and another being per-user pricing model. - Within each plan configured for a flat rate billing model, at least one recurring fee (which can be $0) is included: - Recurring **monthly** fee: flat monthly fee that is pre-paid on a monthly recurrence when the user purchases the plan. - Recurring **annual** fee: flat annual fee that is pre-paid on an annual recurrence when the user purchases the plan.-- In addition to the recurring fees, a flat rate plan can also include optional custom dimensions used to charge customers for the overage usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [commercial marketplace metering service API](../marketplace-metering-service-apis.md).
+- In addition to the recurring fees, a flat rate plan can also include optional custom dimensions used to charge customers for the overage usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [commercial marketplace metering service API](../marketplace-metering-service-apis.md).
> [!IMPORTANT] > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee. ## Sample offer
-As an example, Contoso is a publisher with a SaaS service called Contoso Notification Services (CNS). CNS lets its customers send notifications either via email or text. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish SaaS offers to Azure customers. There are three plans associated with CNS, outlined below:
+As an example, Contoso is a publisher with a SaaS service called Contoso Notification Services (CNS). CNS lets its customers send notifications either via email or text. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish SaaS offers to Azure customers. There are three plans associated with CNS, outlined below:
- Basic plan - Send 10000 emails and 1000 texts for $0/month (flat monthly fee)
As an example, Contoso is a publisher with a SaaS service called Contoso Notific
[![Enterprise plan pricing](./media/saas-enterprise-pricing.png "Click for enlarged view")](./media/saas-enterprise-pricing.png)
-Based on the plan selected, an Azure customer purchasing subscription to CNS SaaS offer will be able to send the included quantity of text and emails per subscription term (month or year as appears in subscription details - startDate and endDate). Contoso counts the usage up to the included quantity in base without sending any usage events to Microsoft. When customers consume more than the included quantity, they do not have to change plans or do anything different. Contoso will measure the overage beyond the included quantity and start emitting usage events to Microsoft for charging the overage usage using the [commercial marketplace metering service API](../marketplace-metering-service-apis.md). Microsoft in turn will charge the customer for the overage usage as specified by the publisher in the custom dimensions. The overage billing is done on the next billing cycle (monthly, but can be quarterly or early for some customers). For a monthly flat rate plan, the overage billing will be made for every month where overage has occurred. For a yearly flat rate plan, once the quantity included in base per year is consumed, all additional usage emitted by the custom meter will be billed as overage during each billing cycle (monthly) until the end of the subscription's year term.
+Based on the plan selected, an Azure customer purchasing a subscription to the CNS SaaS offer will be able to send the included quantity of text and emails per subscription term (month or year as appears in subscription details: startDate and endDate). Contoso counts the usage up to the included quantity in base without sending any usage events to Microsoft. When customers consume more than the included quantity, they do not have to change plans or do anything different. Contoso will measure the overage beyond the included quantity and start emitting usage events to Microsoft for charging the overage usage using the [commercial marketplace metering service API](../marketplace-metering-service-apis.md). Microsoft in turn will charge the customer for the overage usage as specified by the publisher in the custom dimensions. The overage billing is done on the next billing cycle (monthly, but can be quarterly or early for some customers). For a monthly flat rate plan, the overage billing will be made for every month where overage has occurred. For a yearly flat rate plan, once the quantity included in base per year is consumed, all additional usage emitted by the custom meter will be billed as overage during each billing cycle (monthly) until the end of the subscription's year term.
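
To make the overage rule concrete, here's a minimal sketch using the Basic plan's included quantities (the function is illustrative and is not the metering service client):

```python
# Included monthly quantities for the hypothetical Contoso Basic plan.
INCLUDED = {"emails": 10000, "texts": 1000}

def overage_to_emit(dimension: str, total_used_this_term: int, already_emitted: int) -> int:
    """Return the overage quantity that still needs to be reported.

    Only usage above the quantity included in the base fee is emitted,
    and each overage unit is emitted exactly once.
    """
    overage = max(0, total_used_this_term - INCLUDED[dimension])
    return overage - already_emitted

# Example: the customer sent 10,050 emails this month and 30 units of overage
# were already reported, so 20 more units should be sent to the metering API.
print(overage_to_emit("emails", 10050, 30))  # 20
```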
## Billing dimensions
-Each billing dimension defines a custom unit by which the ISV can emit usage events. Billing dimensions are also used to communicate to the customer about how they will be billed for using the software. They are defined as follows:
+Each billing dimension defines a custom unit by which the ISV can emit usage events. Billing dimensions are also used to communicate to the customer about how they will be billed for using the software. They are defined as follows:
- **ID**: the immutable dimension identifier referenced while emitting usage events. - **Display Name**: the display name associated with the dimension, for example "text messages sent". - **Unit of Measure**: the description of the billing unit, for example "per text message" or "per 100 emails".-- **Price per unit in USD**: the price for one unit of the dimension. It can be 0.
+- **Price per unit in USD**: the price for one unit of the dimension. It can be 0.
- **Monthly quantity included in base**: quantity of dimension included per month for customers paying the recurring monthly fee, must be an integer. It can be 0 or unlimited. - **Annual quantity included in base**: quantity of dimension included per each year for customers paying the recurring annual fee, must be an integer. Can be 0 or unlimited.
+>[!Note]
+>Price accuracy is limited to two decimal places (one cent); it is not unlimited, and Microsoft will truncate a meter amount of less than $0.01 to zero.
+ > [!IMPORTANT] > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
-Billing dimensions are shared across all plans for an offer. Some attributes apply to the dimension across all plans, and other attributes are plan-specific.
+Billing dimensions are shared across all plans for an offer. Some attributes apply to the dimension across all plans, and other attributes are plan-specific.
-The attributes, which define the dimension itself, are shared across all plans for an offer. Before you publish the offer, a change made to these attributes from the context of any plan will affect the dimension definition across all plans. Once you publish the offer, these attributes will no longer be editable. These attributes are:
+The attributes, which define the dimension itself, are shared across all plans for an offer. Before you publish the offer, a change made to these attributes from the context of any plan will affect the dimension definition across all plans. Once you publish the offer, these attributes will no longer be editable. These attributes are:
- ID - Display Name - Unit of Measure
-The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, these attributes will no longer be editable. These attributes are:
+The other attributes of a dimension are specific to each plan and can have different values from plan to plan. Before you publish the plan, you can edit these values and only this plan will be affected. Once you publish the plan, these attributes will no longer be editable. These attributes are:
- Price per unit in USD-- Monthly quantity included in base
+- Monthly quantity included in base
- Annual quantity included in base Dimensions also have two special concepts, "enabled" and "infinite": -- **Enabled** indicates that this plan participates in this dimension. If you are creating a new plan that does not send usage events based on this dimension, you might want to leave this option unchecked. Also, any new dimensions added after a plan was first published will show up as "not enabled" on the already published plan. A disabled dimension will not show up in any lists of dimensions for a plan seen by customers.-- **Infinite** represented by the infinity symbol "∞", indicates that this plan participates in this dimension, but does not emit usage against this dimension. If you want to indicate to your customers that the functionality represented by this dimension is included in the plan, but with no limit on usage. A dimension with infinite usage will show up in lists of dimensions for a plan seen by customers, with an indication that it will never incur a charge for this plan.
+- **Enabled** indicates that this plan participates in this dimension. If you are creating a new plan that does not send usage events based on this dimension, you might want to leave this option unchecked. Also, any new dimensions added after a plan was first published will show up as "not enabled" on the already published plan. A disabled dimension will not show up in any lists of dimensions for a plan seen by customers.
+- **Infinite**, represented by the infinity symbol "∞", indicates that this plan participates in this dimension, but does not emit usage against it. Use this option if you want to indicate to your customers that the functionality represented by this dimension is included in the plan, but with no limit on usage. A dimension with infinite usage will show up in lists of dimensions for a plan seen by customers, with an indication that it will never incur a charge for this plan.
>[!Note]
->The following scenarios are explicitly supported: <br> - You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans. <br> - You can publish a **flat-rate** plan without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already published plans.
+>The following scenarios are explicitly supported: <br> - You can add a new dimension to a new plan. The new dimension will not be enabled for any already published plans. <br> - You can publish a **flat-rate** plan without any dimensions, then add a new plan and configure a new dimension for that plan. The new dimension will not be enabled for already published plans.
### Setting dimension price per unit per supported market
Like flat rate plans, a plan with dimensions can be set as private plan, accessi
### Trial behavior
-Metered billing using the commercial marketplace metering service is not compatible with offering a free trial. It is not possible to configure a plan to use both metered billing and a free trial.
+Metered billing using the commercial marketplace metering service is not compatible with offering a free trial. It is not possible to configure a plan to use both metered billing and a free trial.
### Locking behavior
-Because a dimension used with the commercial marketplace metering service represents an understanding of how a customer will be paying for the service, all the details for a dimension are no longer editable after you publish it. It's important that you have your dimensions fully defined for a plan before you publish.
+Because a dimension used with the commercial marketplace metering service represents an understanding of how a customer will be paying for the service, all the details for a dimension are no longer editable after you publish it. It's important that you have your dimensions fully defined for a plan before you publish.
Once an offer is published with a dimension, the offer-level details for that dimension can no longer be changed:
openshift Howto Deploy With S2i https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-with-s2i.md
In this article, you deploy an application to an Azure Red Hat OpenShift cluster
## Before you begin
+> [!NOTE]
+> This article assumes you have set up a pull secret. If you do not have a pull secret for your cluster, you can follow the documentation to [Add or update your Red Hat pull secret](https://docs.microsoft.com/azure/openshift/howto-add-update-pull-secret).
+ [!INCLUDE [aro-howto-beforeyoubegin](includes/aro-howto-before-you-begin.md)] ## Create a project
postgresql Howto Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-manage-firewall-using-cli.md
- Title: Manage firewall rules - Azure CLI - Azure Database for PostgreSQL - Single Server
-description: This article describes how to create and manage firewall rules in Azure Database for PostgreSQL - Single Server using Azure CLI command line.
---- Previously updated : 5/6/2019 --
-# Create and manage firewall rules in Azure Database for PostgreSQL - Single Server using Azure CLI
-Server-level firewall rules can be used to manage access to an Azure Database for PostgreSQL Server from a specific IP address or range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for PostgreSQL firewall rules, see [Azure Database for PostgreSQL Server firewall rules](concepts-firewall-rules.md).
-
-Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-using-cli.md).
-
-## Prerequisites
-To step through this how-to guide, you need:
-- Install [Azure CLI](/cli/azure/install-azure-cli) command-line utility or use the Azure Cloud Shell in the browser.-- An [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).-
-## Configure firewall rules for Azure Database for PostgreSQL
-The [az postgres server firewall-rule](/cli/azure/postgres/server/firewall-rule) commands are used to configure firewall rules.
-
-## List firewall rules
-To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
-```azurecli-interactive
-az postgres server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver
-```
-The output lists the firewall rules, if any, by default in JSON format. You may use the switch `--output table` for a more readable table format as the output.
-```azurecli-interactive
-az postgres server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver --output table
-```
-## Create firewall rule
-To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
--
-To allow access to a singular IP address, provide the same address in the `--start-ip-address` and `--end-ip-address`, as in this example, replacing the IP shown here with your specific IP.
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowSingleIpAddress --start-ip-address 13.83.152.1 --end-ip-address 13.83.152.1
-```
-To allow applications from Azure IP addresses to connect to your Azure Database for PostgreSQL server, provide the IP address 0.0.0.0 as the Start IP and End IP, as in this example.
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
-
-> [!IMPORTANT]
-> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
-
-Upon success, the command output lists the details of the firewall rule you have created, by default in JSON format. If there is a failure, the output shows an error message instead.
-
-## Update firewall rule
-Update an existing firewall rule on the server using [az postgres server firewall-rule update](/cli/azure/postgres/server/firewall-rule) command. Provide the name of the existing firewall rule as input, and the start IP and end IP attributes to update.
-```azurecli-interactive
-az postgres server firewall-rule update --resource-group myresourcegroup --server-name mydemoserver --name AllowIpRange --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.0
-```
-Upon success, the command output lists the details of the firewall rule you have updated, by default in JSON format. If there is a failure, the output shows an error message instead.
-> [!NOTE]
-> If the firewall rule does not exist, it gets created by the update command.
-
-## Show firewall rule details
-You can also show the details of an existing server-level firewall rule by running [az postgres server firewall-rule show](/cli/azure/postgres/server/firewall-rule) command.
-```azurecli-interactive
-az postgres server firewall-rule show --resource-group myresourcegroup --server-name mydemoserver --name AllowIpRange
-```
-Upon success, the command output lists the details of the firewall rule you have specified, by default in JSON format. If there is a failure, the output shows an error message instead.
-
-## Delete firewall rule
-To revoke access for an IP range to the server, delete an existing firewall rule by executing the [az postgres server firewall-rule delete](/cli/azure/postgres/server/firewall-rule) command. Provide the name of the existing firewall rule.
-```azurecli-interactive
-az postgres server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name AllowIpRange
-```
-Upon success, there is no output. Upon failure, the error message text is returned.
-
-## Next steps
-- Similarly, you can use a web browser to [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md).-- Understand more about [Azure Database for PostgreSQL Server firewall rules](concepts-firewall-rules.md).-- Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure CLI](howto-manage-vnet-using-cli.md).-- For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
postgresql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-manage-vnet-using-cli.md
ms.devlang: azurecli Previously updated : 5/6/2019 Last updated : 01/26/2022 # Create and manage VNet service endpoints for Azure Database for PostgreSQL - Single Server using Azure CLI+ Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-## Prerequisites
-To step through this how-to guide:
-- Install [the Azure CLI](/cli/azure/install-azure-cli) or use the Azure Cloud Shell in the browser.-- Create an [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md).-
- > [!NOTE]
- > Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
- > In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Configure Vnet service endpoints for Azure Database for PostgreSQL
-The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. Substitute the **id** property from the **az login** output for your subscription into the subscription id placeholder.
+> [!NOTE]
+> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
-- The account must have the necessary permissions to create a virtual network and service endpoint.
+## Configure VNet service endpoints
-Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
+The [az network vnet](/cli/azure/network/vnet) commands are used to configure virtual networks. Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles by default, and can be modified by creating custom roles.
Learn more about [built-in roles](../role-based-access-control/built-in-roles.md
VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information, see [resource-manager-registration][resource-manager-portal]. > [!IMPORTANT]
-> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
->
+> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
-### Sample script to create an Azure Database for PostgreSQL database, create a VNet, VNet service endpoint and secure the server to the subnet with a VNet rule
-In this sample script, change the highlighted lines to customize the admin username and password. Replace the SubscriptionID used in the `az account set --subscription` command with your own subscription identifier.
-[!code-azurecli-interactive[main](../../cli_scripts/postgresql/create-postgresql-server-vnet/create-postgresql-server.sh?highlight=5,20 "Create an Azure Database for PostgreSQL, VNet, VNet service endpoint, and VNet rule.")]
+## Sample script
++
+### Run the script
+ ## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-[!code-azurecli-interactive[main](../../cli_scripts/postgresql/create-postgresql-server-vnet/delete-postgresql.sh "Delete the resource group.")]
-<!-- Link references, to text, Within this same GitHub repo. -->
+
+ ```azurecli
+ echo "Cleaning up resources by removing the resource group..."
+ az group delete --name $resourceGroup -y
+ ```
+
+<!-- Link references, to text, Within this same GitHub repo. -->
[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-limits.md
Previously updated : 12/10/2021 Last updated : 01/14/2022 # Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) limits and limitations
Last updated 12/10/2021
The following section describes capacity and functional limits in the Hyperscale (Citus) service.
+### Naming
+
+#### Server group name
+
+A Hyperscale (Citus) server group must have a name that is 40 characters or
+shorter.
+ ## Networking ### Maximum connections
pooling](concepts-connection-pool.md). Hyperscale (Citus) offers a
managed pgBouncer connection pooler configured for up to 2,000 simultaneous client connections.
-### Private access (preview)
-
-#### Server group name
-
-To be compatible with [private access](concepts-private-access.md),
-a Hyperscale (Citus) server group must have a name that is 40 characters or
-shorter.
-
-#### Regions
-
-The private access feature is available in preview in only these regions:
-
-* Americas
- * East US
- * East US 2
- * West US 2
-* Asia Pacific
- * Japan East
- * Japan West
- * Korea Central
-* Europe
- * Germany West Central
- * UK South
- * West Europe
- ## Storage ### Storage scaling
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-private-access.md
Title: Private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+ Title: Private access - Hyperscale (Citus) - Azure Database for PostgreSQL
description: This article describes private access for Azure Database for PostgreSQL - Hyperscale (Citus).
Last updated 10/15/2021
-# Private access (preview) in Azure Database for PostgreSQL - Hyperscale (Citus)
+# Private access in Azure Database for PostgreSQL - Hyperscale (Citus)
[!INCLUDE [azure-postgresql-hyperscale-access](../../../includes/azure-postgresql-hyperscale-access.md)] This page describes the private access option. For public access, see [here](concepts-firewall-rules.md).
-> [!NOTE]
->
-> Private access is available for preview in only [certain
-> regions](concepts-limits.md#regions).
->
-> If the private access option is not selectable for your server group even
-> though your server group is within an allowed region, please open an Azure
-> [support
-> request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
-> and include your Azure subscription ID, to get access.
- ## Definitions **Virtual network**. An Azure Virtual Network (VNet) is the fundamental
page.
## Next steps
-* Learn how to [enable and manage private
- access](howto-private-access.md) (preview)
-* Follow a [tutorial](tutorial-private-access.md) to see
- private access (preview) in action.
+* Learn how to [enable and manage private access](howto-private-access.md)
+* Follow a [tutorial](tutorial-private-access.md) to see private access in
+ action.
* Learn about [private endpoints](../../private-link/private-endpoint-overview.md) * Learn about [virtual
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-security-overview.md
Previously updated : 10/15/2021 Last updated : 01/14/2022 # Security in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)
page.
## Next steps
-* Learn how to [enable and manage private
- access](howto-private-access.md) (preview)
+* Learn how to [enable and manage private access](howto-private-access.md)
* Learn about [private endpoints](../../private-link/private-endpoint-overview.md) * Learn about [virtual
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-server-group.md
applications with the basic tier and later [graduate to the standard
tier](howto-scale-grow.md#add-worker-nodes) with confidence that the interface remains the same.
-The basic tier is also appropriate for smaller workloads in production. There
-is room to scale vertically *within* the basic tier by increasing the number of
+The basic tier is also appropriate for smaller workloads in production. There's
+room to scale vertically *within* the basic tier by increasing the number of
server vCores. When greater scale is required right away, use the standard tier. Its smallest
allowed server group has one coordinator node and two workers. You can choose
to use more nodes based on your use-case, as described in our [initial sizing](howto-scale-initial.md) how-to.
+#### Tier summary
+
+**Basic tier**
+
+* 2 to 8 vCores, 8 to 32 gigabytes of memory.
+* Consists of a single database node, which can be scaled vertically.
+* Supports sharding on a single node and can be easily upgraded to a standard tier.
+* Economical deployment option for initial development and testing.
+
+**Standard tier**
+
+* 8 to 1000+ vCores, up to 8+ TiB memory
+* Distributed Postgres cluster, which consists of a dedicated coordinator
+ node and at least two worker nodes.
+* Supports sharding on multiple worker nodes. The cluster can be scaled
+ horizontally by adding new worker nodes, and scaled vertically by
+ increasing the node vCores.
+* Best for performance and scale.
+ ## Next steps * Learn to [provision the basic tier](quickstart-create-basic-tier.md)
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-create-users.md
comes with several roles pre-defined:
* `citus` Since Hyperscale (Citus) is a managed PaaS service, only Microsoft can sign in with the
-`postgres` super user role. For limited administrative access, Hyperscale (Citus)
+`postgres` superuser role. For limited administrative access, Hyperscale (Citus)
provides the `citus` role. Permissions for the `citus` role: * Read all configuration variables, even variables normally visible only to superusers.
-* Read all pg\_stat\_\* views and use various statistics-related extensions --
- even views or extensions normally visible only to superusers.
+* Read all pg\_stat\_\* views and use various statistics-related
+ extensions--even views or extensions normally visible only to superusers.
* Execute monitoring functions that may take ACCESS SHARE locks on tables, potentially for a long time. * [Create PostgreSQL extensions](concepts-extensions.md) (because
Notably, the `citus` role has some restrictions:
As mentioned, the `citus` admin account lacks permission to create additional users. To add a user, use the Azure portal interface.
-1. Go to the **Roles** page for your Hyperscale (Citus) server group, and click **+ Add**:
+1. Go to the **Roles** page for your Hyperscale (Citus) server group, and
+ select **+ Add**:
:::image type="content" source="../media/howto-hyperscale-create-users/1-role-page.png" alt-text="The roles page":::
-2. Enter the role name and password. Click **Save**.
+2. Enter the role name and password. Select **Save**.
:::image type="content" source="../media/howto-hyperscale-create-users/2-add-user-fields.png" alt-text="Add role"::: The user will be created on the coordinator node of the server group, and propagated to all the worker nodes. Roles created through the Azure
-portal have the `LOGIN` attribute, which means they are true users who
+portal have the `LOGIN` attribute, which means they're true users who
can sign in to the database. ## How to modify privileges for user role New user roles are commonly used to provide database access with restricted privileges. To modify user privileges, use standard PostgreSQL commands, using
-a tool such as PgAdmin or psql. (See [connecting with
-psql](quickstart-create-portal.md#connect-to-the-database-using-psql)
-in the Hyperscale (Citus) quickstart.)
+a tool such as PgAdmin or psql. (See [Connect to a Hyperscale (Citus) server
+group](quickstart-connect-psql.md).)
For example, to allow `db_user` to read `mytable`, grant the permission:
GRANT SELECT ON mytable TO db_user;
Hyperscale (Citus) propagates single-table GRANT statements through the entire cluster, applying them on all worker nodes. It also propagates GRANTs that are
-system-wide (e.g. for all tables in a schema):
+system-wide (for example, for all tables in a schema):
```sql -- applies to the coordinator node and propagates to workers
GRANT SELECT ON ALL TABLES IN SCHEMA public TO db_user;
## How to delete a user role or change their password To update a user, visit the **Roles** page for your Hyperscale (Citus) server group,
-and click the ellipses **...** next to the user. The ellipses will open a menu
+and select the ellipses **...** next to the user. The ellipses will open a menu
to delete the user or reset their password. :::image type="content" source="../media/howto-hyperscale-create-users/edit-role.png" alt-text="Edit a role":::
Open the firewall for the IP addresses of the new users' machines to enable
them to connect: [Create and manage Hyperscale (Citus) firewall rules using the Azure portal](howto-manage-firewall-using-portal.md).
-For more information about database user account management, see PostgreSQL
+For more information about database user management, see PostgreSQL
product documentation: * [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html)
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-private-access.md
Title: Enable private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+ Title: Enable private access - Hyperscale (Citus) - Azure Database for PostgreSQL
description: How to set up private link in a server group for Azure Database for PostgreSQL - Hyperscale (Citus) Previously updated : 11/16/2021 Last updated : 01/14/2022
-# Private access (preview) in Azure Database for PostgreSQL Hyperscale (Citus)
+# Private access in Azure Database for PostgreSQL Hyperscale (Citus)
-[Private access](concepts-private-access.md) (preview) allows
-resources in an Azure virtual network to connect securely and privately to
-nodes in a Hyperscale (Citus) server group. This how-to assumes you've already
-created a virtual network and subnet. For an example of setting up
-prerequisites, see the [private access
-tutorial](tutorial-private-access.md).
+[Private access](concepts-private-access.md) allows resources in an Azure
+virtual network to connect securely and privately to nodes in a Hyperscale
+(Citus) server group. This how-to assumes you've already created a virtual
+network and subnet. For an example of setting up prerequisites, see the
+[private access tutorial](tutorial-private-access.md).
## Create a server group with a private endpoint
tutorial](tutorial-private-access.md).
6. Select **Next: Networking** at the bottom of the page.
-7. Select **Private access (preview)**.
-
- > [!NOTE]
- >
- > Private access is available for preview in only [certain
- > regions](concepts-limits.md#regions).
- >
- > If the private access option is not selectable for your server group
- > even though your server group is within an allowed region,
- > please open an Azure [support
- > request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
- > and include your Azure subscription ID, to get access.
+7. Select **Private access**.
8. A screen appears called **Create private endpoint**. Choose appropriate values for your existing resources, and select **OK**:
To create a private endpoint to a node in an existing server group, open the
## Next steps
-* Learn more about [private access](concepts-private-access.md)
- (preview).
-* Follow a [tutorial](tutorial-private-access.md) to see private
- access (preview) in action.
+* Learn more about [private access](concepts-private-access.md).
+* Follow a [tutorial](tutorial-private-access.md) to see private access in
+ action.
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/product-updates.md
Here are the features currently available for preview:
session and object audit logging via the standard PostgreSQL logging facility. It produces audit logs required to pass certain government, financial, or ISO certification audits.
-* **[Private access](concepts-private-access.md)**.
- Allow hosts on a virtual network (VNet) to securely access a
- Hyperscale (Citus) server group over a private endpoint.
-
-> [!NOTE]
->
-> Private access is available for preview in only [certain
-> regions](concepts-limits.md#regions).
## Contact us
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-connect-psql.md
+
+ Title: 'Quickstart: connect to a server group with psql - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Quickstart to connect psql to Azure Database for PostgreSQL - Hyperscale (Citus).
++++++ Last updated : 01/24/2022++
+# Connect to a Hyperscale (Citus) server group with psql
+
+## Prerequisites
+
+To follow this quickstart, you'll first need to:
+
+* [Create a server group](quickstart-create-portal.md) in the Azure portal.
+
+## Connect
+
+When you create your Azure Database for PostgreSQL server, a default database named **citus** is created. To connect to your database server, you need a connection string and the admin password.
+
+1. Obtain the connection string. In the server group page, select the **Connection strings** menu item. (It's under **Settings**.) Find the string marked **psql**. It will be of the form, `psql "host=hostname.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"`
+
+ Copy the string. You'll need to replace "{your\_password}" with the administrative password you chose earlier. The system doesn't store your plaintext password and so can't display it for you in the connection string.
+
+2. Open a terminal window on your local computer.
+
+3. At the prompt, connect to your Azure Database for PostgreSQL server with the [psql](https://www.postgresql.org/docs/current/app-psql.html) utility. Pass your connection string in quotes, being sure it contains your password:
+ ```bash
+ psql "host=..."
+ ```
+
+ For example, the following command connects to the coordinator node of the server group **mydemoserver**:
+
+ ```bash
+ psql "host=mydemoserver-c.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require"
+ ```
+
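Once connected, a quick sanity check confirms the session is working. This is a minimal sketch; the exact version string returned will differ by server:

```sql
-- Confirm the connection works and show the PostgreSQL version the server runs
SELECT version();
```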
+## Next steps
+
+* [Troubleshoot connection problems](howto-troubleshoot-common-connection-issues.md).
+* Learn to [create and distribute tables](quickstart-distribute-tables.md).
postgresql Quickstart Create Basic Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-create-basic-tier.md
- Title: 'Quickstart: create a basic tier server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: Get started with the Azure Database for PostgreSQL Hyperscale (Citus) basic tier.
------ Previously updated : 11/16/2021
-#Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
--
-# Create a Hyperscale (Citus) basic tier server group in the Azure portal
-
-Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
-you use to run, manage, and scale highly available PostgreSQL databases in the
-cloud. Its [basic tier](concepts-tiers.md) is a a convenient
-deployment option for initial development and testing.
-
-This quickstart shows you how to create a Hyperscale (Citus) basic tier
-server group using the Azure portal. You'll provision the server group
-and verify that you can connect to it to run queries.
--
-## Next steps
-
-In this quickstart, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data.
--- Follow a tutorial to [build scalable multi-tenant
- applications](./tutorial-design-database-multi-tenant.md)
-- Determine the best [initial
- size](howto-scale-initial.md) for your server group
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-create-portal.md
Previously updated : 11/16/2021 Last updated : 01/24/2022 #Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
-# Quickstart: create a Hyperscale (Citus) server group in the Azure portal
+# Create a Hyperscale (Citus) server group in the Azure portal
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This Quickstart shows you how to create an Azure Database for PostgreSQL - Hyperscale (Citus) server group using the Azure portal. You'll explore distributed data: sharding tables across nodes, ingesting sample data, and running queries that execute on multiple nodes. -
-## Create and distribute tables
-
-Once connected to the hyperscale coordinator node using psql, you can complete some basic tasks.
-
-Within Hyperscale (Citus) servers there are three types of tables:
--- Distributed or sharded tables (spread out to help scaling for performance and parallelization)-- Reference tables (multiple copies maintained)-- Local tables (often used for internal admin tables)-
-In this quickstart, we'll primarily focus on distributed tables and getting familiar with them.
-
-The data model we're going to work with is simple: user and event data from GitHub. Events include fork creation, git commits related to an organization, and more.
-
-Once you've connected via psql, let's create our tables. In the psql console run:
-
-```sql
-CREATE TABLE github_events
-(
- event_id bigint,
- event_type text,
- event_public boolean,
- repo_id bigint,
- payload jsonb,
- repo jsonb,
- user_id bigint,
- org jsonb,
- created_at timestamp
-);
-
-CREATE TABLE github_users
-(
- user_id bigint,
- url text,
- login text,
- avatar_url text,
- gravatar_id text,
- display_login text
-);
-```
-
-The `payload` field of `github_events` has a JSONB datatype. JSONB is the JSON datatype in binary form in Postgres. The datatype makes it easy to store a flexible schema in a single column.
-
-Postgres can create a `GIN` index on this type, which will index every key and value within it. With an index, it becomes fast and easy to query the payload with various conditions. Let's go ahead and create a couple of indexes before we load our data. In psql:
-
-```sql
-CREATE INDEX event_type_index ON github_events (event_type);
-CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
-```
-
-Next we'll take those Postgres tables on the coordinator node and tell Hyperscale (Citus) to shard them across the workers. To do so, we'll run a query for each table specifying the key to shard it on. In the current example we'll shard both the events and users table on `user_id`:
-
-```sql
-SELECT create_distributed_table('github_events', 'user_id');
-SELECT create_distributed_table('github_users', 'user_id');
-```
+Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
+you use to run, manage, and scale highly available PostgreSQL databases in the
+cloud. Its [basic tier](concepts-server-group.md#tiers) is a convenient
+deployment option for initial development and testing.
-We're ready to load data. In psql still, shell out to download the files:
+This quickstart shows you how to create a Hyperscale (Citus) basic tier server
+group using the Azure portal. You'll create the server group and verify that
+you can connect to it to run queries.
-```sql
-\! curl -O https://examples.citusdata.com/users.csv
-\! curl -O https://examples.citusdata.com/events.csv
-```
-
-Next, load the data from the files into the distributed tables:
-
-```sql
-SET CLIENT_ENCODING TO 'utf8';
-
-\copy github_events from 'events.csv' WITH CSV
-\copy github_users from 'users.csv' WITH CSV
-```
-
-## Run queries
-
-Now it's time for the fun part, actually running some queries. Let's start with a simple `count (*)` to see how much data we loaded:
-
-```sql
-SELECT count(*) from github_events;
-```
-
-That worked nicely. We'll come back to that sort of aggregation in a bit, but for now let's look at a few other queries. Within the JSONB `payload` column there's a good bit of data, but it varies based on event type. `PushEvent` events contain a size that includes the number of distinct commits for the push. We can use it to find the total number of commits per hour:
-
-```sql
-SELECT date_trunc('hour', created_at) AS hour,
- sum((payload->>'distinct_size')::int) AS num_commits
-FROM github_events
-WHERE event_type = 'PushEvent'
-GROUP BY hour
-ORDER BY hour;
-```
-
-So far the queries have involved the github\_events exclusively, but we can combine this information with github\_users. Since we sharded both users and events on the same identifier (`user_id`), the rows of both tables with matching user IDs will be [colocated](concepts-colocation.md) on the same database nodes and can easily be joined.
-
-If we join on `user_id`, Hyperscale (Citus) can push the join execution down into shards for execution in parallel on worker nodes. For example, let's find the users who created the greatest number of repositories:
-
-```sql
-SELECT gu.login, count(*)
- FROM github_events ge
- JOIN github_users gu
- ON ge.user_id = gu.user_id
- WHERE ge.event_type = 'CreateEvent'
- AND ge.payload @> '{"ref_type": "repository"}'
- GROUP BY gu.login
- ORDER BY count(*) DESC;
-```
-
-## Clean up resources
-
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the **Delete** button in the **Overview** page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final **Delete** button.
-
-## Next steps
-In this quickstart, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data.
+**Next steps**
-- Follow a tutorial to [build scalable multi-tenant
- applications](./tutorial-design-database-multi-tenant.md)
-- Determine the best [initial
- size](howto-scale-initial.md) for your server group
+* [Connect to your server group](quickstart-connect-psql.md) with psql.
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-distribute-tables.md
+
+ Title: 'Quickstart: distribute tables - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Quickstart to distribute table data across nodes in Azure Database for PostgreSQL - Hyperscale (Citus).
++++++ Last updated : 01/24/2022++
+# Create and distribute tables
+
+Within Hyperscale (Citus) servers there are three types of tables:
+
+* **Distributed tables** - Distributed across worker nodes (scaled out).
+ Generally, large tables should be distributed to improve performance.
+* **Reference tables** - Replicated to all nodes. Enables joins with
+ distributed tables. Typically used for small tables like countries or product
+ categories.
+* **Local tables** - Tables that reside on the coordinator node. Administration
+ tables are good examples of local tables.
+
+In this quickstart, we'll primarily focus on distributed tables, and getting
+familiar with them.
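For contrast with the distributed tables used below, here is a small sketch of how the other two table types might be created. The `countries` and `admin_audit` tables are hypothetical examples and aren't part of this quickstart's dataset:

```sql
-- Hypothetical reference table: small, replicated to every node so joins stay local
CREATE TABLE countries (code char(2) PRIMARY KEY, name text);
SELECT create_reference_table('countries');

-- Hypothetical local table: lives only on the coordinator (no distribution call needed)
CREATE TABLE admin_audit (id bigserial PRIMARY KEY, note text, logged_at timestamptz DEFAULT now());
```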
+
+The data model we're going to work with is simple: user and event data from GitHub. Events include fork creation, git commits related to an organization, and more.
+
+## Prerequisites
+
+To follow this quickstart, you'll first need to:
+
+1. [Create a server group](quickstart-create-portal.md) in the Azure portal.
+2. [Connect to the server group](quickstart-connect-psql.md) with psql to
+ run SQL commands.
+
+## Create tables
+
+Once you've connected via psql, let's create our tables. In the psql console run:
+
+```sql
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+```
+
+The `payload` field of `github_events` has a JSONB datatype. JSONB is the JSON datatype in binary form in Postgres. The datatype makes it easy to store a flexible schema in a single column.
+
+Postgres can create a `GIN` index on this type, which will index every key and value within it. With an index, it becomes fast and easy to query the payload with various conditions. Let's go ahead and create a couple of indexes before we load our data. In psql:
+
+```sql
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+```
+
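With those indexes in place, containment filters over the payload can be served from the GIN index. A sketch of such a query against the schema above (it returns zero until data is loaded in the next steps):

```sql
-- Count events whose JSONB payload contains the given key/value pair;
-- the @> containment operator can use the jsonb_path_ops GIN index
SELECT count(*)
FROM github_events
WHERE payload @> '{"ref_type": "repository"}';
```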
+## Shard tables across worker nodes
+
+Next we'll take those Postgres tables on the coordinator node and tell Hyperscale (Citus) to shard them across the workers. To do so, we'll run a query for each table specifying the key to shard it on. In the current example we'll shard both the events and users table on `user_id`:
+
+```sql
+SELECT create_distributed_table('github_events', 'user_id');
+SELECT create_distributed_table('github_users', 'user_id');
+```
++
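To confirm the tables are now distributed, you can inspect the Citus metadata. A minimal sketch; the output formatting varies by Citus version:

```sql
-- One row per Citus-managed table; partmethod 'h' means hash-distributed
SELECT logicalrelid AS table_name, partmethod
FROM pg_dist_partition;
```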
+## Load data into distributed tables
+
+We're ready to load data. In psql still, shell out to download the files:
+
+```sql
+\! curl -O https://examples.citusdata.com/users.csv
+\! curl -O https://examples.citusdata.com/events.csv
+```
+
+Next, load the data from the files into the distributed tables:
+
+```sql
+SET CLIENT_ENCODING TO 'utf8';
+
+\copy github_events from 'events.csv' WITH CSV
+\copy github_users from 'users.csv' WITH CSV
+```
+
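After the load completes, Citus size functions show how much space each distributed table occupies across all of its shards. A sketch; the exact numbers depend on the sample data:

```sql
-- Total size of each distributed table, summed over every shard on every node
SELECT pg_size_pretty(citus_total_relation_size('github_events')) AS github_events_size,
       pg_size_pretty(citus_total_relation_size('github_users'))  AS github_users_size;
```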
+## Next steps
+
+* [Run queries](quickstart-run-queries.md) on the distributed tables you
+ created in this quickstart.
+* Learn more about [sharding data](tutorial-shard.md).
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-run-queries.md
+
+ Title: 'Quickstart: Run queries - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Quickstart to run queries on table data in Azure Database for PostgreSQL - Hyperscale (Citus).
++++++ Last updated : 01/24/2022++
+# Run queries
+
+## Prerequisites
+
+To follow this quickstart, you'll first need to:
+
+1. [Create a server group](quickstart-create-portal.md) in the Azure portal.
+2. [Connect to the server group](quickstart-connect-psql.md) with psql to
+ run SQL commands.
+3. [Create and distribute tables](quickstart-distribute-tables.md) with our
+ example dataset.
+
+## Aggregate queries
+
+Now it's time for the fun part in our quickstart series: actually running some
+queries. Let's start with a simple `count(*)` to see how much data we loaded:
+
+```sql
+SELECT count(*) from github_events;
+```
+
+That worked nicely. We'll come back to that sort of aggregation in a bit, but for now let's look at a few other queries. Within the JSONB `payload` column there's a good bit of data, but it varies based on event type. `PushEvent` events contain a size that includes the number of distinct commits for the push. We can use it to find the total number of commits per hour:
+
+```sql
+SELECT date_trunc('hour', created_at) AS hour,
+ sum((payload->>'distinct_size')::int) AS num_commits
+FROM github_events
+WHERE event_type = 'PushEvent'
+GROUP BY hour
+ORDER BY hour;
+```
+
+So far the queries have involved the github\_events exclusively, but we can combine this information with github\_users. Since we sharded both users and events on the same identifier (`user_id`), the rows of both tables with matching user IDs will be [colocated](concepts-colocation.md) on the same database nodes and can easily be joined.
+
+If we join on `user_id`, Hyperscale (Citus) can push the join execution down into shards for execution in parallel on worker nodes. For example, let's find the users who created the greatest number of repositories:
+
+```sql
+SELECT gu.login, count(*)
+ FROM github_events ge
+ JOIN github_users gu
+ ON ge.user_id = gu.user_id
+ WHERE ge.event_type = 'CreateEvent'
+ AND ge.payload @> '{"ref_type": "repository"}'
+ GROUP BY gu.login
+ ORDER BY count(*) DESC;
+```
+
+## Next steps
+
+- Follow a tutorial to [build scalable multi-tenant
+ applications](./tutorial-design-database-multi-tenant.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-private-access.md
Title: Create server group with private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+ Title: Create server group with private access - Hyperscale (Citus) - Azure Database for PostgreSQL
description: Connect a VM to a server group private endpoint Previously updated : 10/15/2021 Last updated : 01/14/2022
-# Create server group with private access (preview) in Azure Database for PostgreSQL - Hyperscale (Citus)
+# Create server group with private access in Azure Database for PostgreSQL - Hyperscale (Citus)
This tutorial creates a virtual machine and a Hyperscale (Citus) server group, and establishes [private access](concepts-private-access.md) between
az vm run-command invoke \
6. Select **Next: Networking** at the bottom of the page.
-7. Select **Private access (preview)**.
-
- > [!NOTE]
- >
- > Private access is available for preview in only [certain
- > regions](concepts-limits.md#regions).
- >
- > If the private access option is not selectable for your server group
- > even though your server group is within an allowed region,
- > please open an Azure [support
- > request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
- > and include your Azure subscription ID, to get access.
+7. Select **Private access**.
8. A screen appears called **Create private endpoint**. Enter these values and select **OK**:
az group delete --resource-group link-demo
## Next steps * Learn more about [private access](concepts-private-access.md)
- (preview)
* Learn about [private endpoints](../../private-link/private-endpoint-overview.md) * Learn about [virtual
postgresql Tutorial Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-server-group.md
- Title: 'Tutorial: create server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
-description: How to create an Azure Database for PostgreSQL Hyperscale (Citus) server group.
------ Previously updated : 11/16/2021--
-# Tutorial: create server group
-
-In this tutorial, you create a server group in Azure Database for PostgreSQL - Hyperscale (Citus). You'll do these steps:
-
-> [!div class="checklist"]
-> * Provision the nodes
-> * Allow network access
-> * Connect to the coordinator node
--
-## Next steps
-
-With a server group provisioned, it's time to go on to the next tutorial:
-
-* [Work with distributed data](tutorial-shard.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/overview.md
Previously updated : 11/30/2021 Last updated : 01/24/2022 # What is Azure Database for PostgreSQL?
-Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL Community Edition](https://www.postgresql.org/) (available under the GPLv2 license) database engine. Azure Database for PostgreSQL delivers:
+Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL open source relational database](https://www.postgresql.org/). Azure Database for PostgreSQL delivers:
- Built-in high availability. - Data protection using automatic backups and point-in-time-restore for up to 35 days.
For detailed overview of single server deployment mode, refer [single server ove
### Azure Database for PostgreSQL - Flexible Server
-Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. Flexible Server provide better cost optimization controls with the ability to stop/start server and burstable compute tier, ideal for workloads that do not need full compute capacity continuously. The service currently supports community version of PostgreSQL 11 and 12 with plans to add newer versions soon. The service is currently in public preview, available today in wide variety of Azure regions.
+Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start the server and a burstable compute tier, ideal for workloads that don't need full compute capacity continuously. The service currently supports the community versions of PostgreSQL 11 and 12 with plans to add newer versions soon. The service is currently in public preview, available today in a wide variety of Azure regions.
Flexible servers are best suited for -- Application developments requiring better control and customizations.-- Cost optimization controls with ability to stop/start server.
+- Application developments requiring better control and customizations
+- Cost optimization controls with ability to stop/start server
- Zone redundant high availability - Managed maintenance windows
For a detailed overview of flexible server deployment mode, see [flexible server
### Azure Database for PostgreSQL – Hyperscale (Citus)
-The Hyperscale (Citus) option horizontally scales queries across multiple machines using sharding. Its query engine parallelizes incoming SQL queries across these servers for faster responses on large datasets. It serves applications that require greater scale and performance, generally workloads that are approaching -- or already exceed -- 100 GB of data.
+The Hyperscale (Citus) option horizontally scales queries across multiple machines using sharding. Its query engine parallelizes incoming SQL queries across these servers for faster responses on large datasets. It serves applications that require greater scale and performance, generally workloads that are approaching--or already exceed--100 GB of data.
The Hyperscale (Citus) deployment option delivers: - Horizontal scaling across multiple machines using sharding - Query parallelization across these servers for faster responses on large datasets-- Excellent support for multi-tenant applications, real time operational analytics, and high throughput transactional workloads
+- Excellent support for multi-tenant applications, real-time operational analytics, and high-throughput transactional workloads
Applications built for PostgreSQL can run distributed queries on Hyperscale (Citus) with standard [connection libraries](./concepts-connection-libraries.md) and minimal changes.
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/quickstart-create-server-database-azure-cli.md
ms.devlang: azurecli Previously updated : 06/25/2020 Last updated : 01/26/2022 # Quickstart: Create an Azure Database for PostgreSQL server by using the Azure CLI
-This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create a single Azure Database for PostgreSQL server in five minutes. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create a single Azure Database for PostgreSQL server in five minutes.
+
+> [!TIP]
+> Consider using the simpler [az postgres up](/cli/azure/postgres#az_postgres_up) Azure CLI command. Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
- > [!TIP]
- > Consider using the simpler [az postgres up](/cli/azure/postgres#az_postgres_up) Azure CLI command that's currently in preview. Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
+## Set parameter values
-- Select the specific subscription ID under your account by using the [az account set](/cli/azure/account) command.
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure so the $RANDOM function is used to create the server name.
- - Make a note of the **id** value from the **az login** output to use as the value for the **subscription** argument in the command.
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
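The variable definitions themselves come from an include file that isn't reproduced here; a rough sketch of what such a block could look like follows. Every name and value is an illustrative assumption, not the article's actual script:

```azurecli
# Illustrative variable block; adjust every value for your own environment
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-postgresql-rg-$randomIdentifier"
server="msdocs-postgresql-$randomIdentifier"
sku="GP_Gen5_2"
login="azureuser"
password="<server_admin_password>"
startIp=0.0.0.0
endIp=0.0.0.0
```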
- ```azurecli
- az account set --subscription <subscription id>
- ```
- - If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. To get all your subscriptions, use [az account list](/cli/azure/account#az_account_list).
+## Create a resource group
-## Create an Azure Database for PostgreSQL server
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
-Create an [Azure resource group](../azure-resource-manager/management/overview.md) by using the [az group create](/cli/azure/group#az_group_create) command, and then create your PostgreSQL server inside this resource group. You should provide a unique name. The following example creates a resource group named `myresourcegroup` in the `westus` location.
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
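The create command itself is pulled in from an include; assuming the `$resourceGroup` and `$location` variables sketched earlier, an equivalent call would be:

```azurecli
az group create --name $resourceGroup --location $location
```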
+## Create a server
-Create an [Azure Database for PostgreSQL server](overview.md) by using the [az postgres server create](/cli/azure/postgres/server) command. A server can contain multiple databases.
+Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
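The full command is supplied by an include file; a hedged equivalent using the assumed variables from the parameter sketch might look like:

```azurecli
az postgres server create \
    --resource-group $resourceGroup \
    --name $server \
    --location $location \
    --admin-user $login \
    --admin-password $password \
    --sku-name $sku
```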
-```azurecli-interactive
-az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2
-```
-Here are the details for the preceding arguments:
-
-**Setting** | **Sample value** | **Description**
-||
-name | mydemoserver | Unique name that identifies your Azure Database for PostgreSQL server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
-resource-group | myresourcegroup | Name of the Azure resource group.
-location | westus | Azure location for the server.
-admin-user | myadmin | Username for the administrator login. It can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
-admin-password | *secure password* | Password of the administrator user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
-sku-name|GP_Gen5_2| Name of the pricing tier and compute configuration. Follow the convention {pricing tier}_{compute generation}_{vCores} in shorthand. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
-
->[!IMPORTANT]
+
+> [!NOTE]
+>
+>- The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
+>- The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+>- The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+>- For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+
+>[!IMPORTANT]
+>
>- The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
->- To view all the arguments for **az postgres server create** command, see [this reference document](/cli/azure/postgres/server#az_postgres_server_create).
>- SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
-## Configure a server-level firewall rule
-By default, the server that you created is not publicly accessible and is protected with firewall rules. You can configure the firewall rules on your server by using the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
+## Configure a server-based firewall rule
-The following example creates a firewall rule called `AllowMyIP` that allows connections from a specific IP address, 192.168.0.1. Replace the IP address or range of IP addresses that corresponds to where you'll be connecting from. If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
+Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
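The rule is created by an include script; an equivalent call using the assumed IP variables looks roughly like:

```azurecli
az postgres server firewall-rule create \
    --resource-group $resourceGroup \
    --server $server \
    --name AllowMyIP \
    --start-ip-address $startIp \
    --end-ip-address $endIp
```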
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
-```
+> [!TIP]
+> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
> [!NOTE]
-> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
+> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
+
+## List server-based firewall rules
+
+To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
++
+The output lists the firewall rules, if any, in JSON format by default. You may use the switch `--output table` to display the output in a more readable table format.
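For example, a sketch using the same placeholder names as elsewhere in this quickstart:

```azurecli
az postgres server firewall-rule list \
    --resource-group $resourceGroup \
    --server $server \
    --output table
```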
## Get the connection information To connect to your server, provide host information and access credentials.
-```azurecli-interactive
-az postgres server show --resource-group myresourcegroup --name mydemoserver
+```azurecli
+az postgres server show --resource-group $resourceGroup --name $server
```
-The result is in JSON format. Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
-
-```json
-{
- "administratorLogin": "myadmin",
- "earliestRestoreDate": null,
- "fullyQualifiedDomainName": "mydemoserver.postgres.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver",
- "location": "westus",
- "name": "mydemoserver",
- "resourceGroup": "myresourcegroup",
- "sku": {
- "capacity": 2,
- "family": "Gen5",
- "name": "GP_Gen5_2",
- "size": null,
- "tier": "GeneralPurpose"
- },
- "sslEnforcement": "Enabled",
- "storageProfile": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled",
- "storageMb": 5120
- },
- "tags": null,
- "type": "Microsoft.DBforPostgreSQL/servers",
- "userVisibleState": "Ready",
- "version": "9.6"
-}
-```
+Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
## Connect to the Azure Database for PostgreSQL server by using psql
-The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using psql with [Azure Cloud Shell](../cloud-shell/overview.md). You can also use psql on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with psql, as shown in the following code.
- ```bash
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
+The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
+
+```bash
+psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
+```
> [!TIP] > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be: >
+> ```bash
+> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
> ```
-> psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres
-> ```
- ## Clean up resources
-If you don't need these resources for another quickstart or tutorial, you can delete them by running the following command.
-```azurecli-interactive
-az group delete --name myresourcegroup
-```
-
-If you just want to delete the one newly created server, you can run the [az postgres server delete](/cli/azure/postgres/server) command.
+Unless you have an ongoing need for these resources, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group and all resources associated with it. Some of these resources may take a while to delete.
-```azurecli-interactive
-az postgres server delete --resource-group myresourcegroup --name mydemoserver
+```azurecli
+az group delete --name $resourceGroup
``` ## Next steps+ > [!div class="nextstepaction"]
-> [Migrate your database using export and import](./howto-migrate-using-export-and-import.md)
+> [Design your first Azure Database for PostgreSQL using the Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/quickstart-create-server-up-azure-cli.md
ms.devlang: azurecli Previously updated : 05/06/2019 Last updated : 01/25/2022
-# Quickstart: Use an Azure CLI command, az postgres up (preview), to create an Azure Database for PostgreSQL - Single Server
-
-> [!IMPORTANT]
-> The [az postgres up](/cli/azure/postgres#az_postgres_up) Azure CLI command is in preview.
+# Quickstart: Use the az postgres up command to create an Azure Database for PostgreSQL - Single Server
Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az postgres up](/cli/azure/postgres#az_postgres_up) command to create an Azure Database for PostgreSQL server using the Azure CLI. In addition to creating the server, the `az postgres up` command creates a sample database, a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. These defaults help to expedite the development process.
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-You'll need to sign in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **ID** property from the command output for the corresponding subscription name.
-
-```azurecli
-az login
-```
+## Create an Azure Database for PostgreSQL server
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account) command. Substitute the **subscription ID** property from the **az login** output for your subscription into the subscription ID placeholder.
-```azurecli
-az account set --subscription <subscription id>
-```
-
-## Create an Azure Database for PostgreSQL server
-To use the commands, install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
+Install the [db-up](/cli/azure/ext/db-up/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
```azurecli
az extension add --name db-up
```
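With the extension installed, a minimal invocation might look like the following. The parameter names are those exposed by the db-up extension and the values are placeholders; adjust them for your environment:

```azurecli
az postgres up \
    --resource-group myresourcegroup \
    --location westus \
    --server-name mydemoserver \
    --database-name sampledb \
    --admin-user myadmin \
    --admin-password <server_admin_password>
```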
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/sample-scripts-azure-cli.md
Last updated 09/17/2021
keywords: azure cli samples, azure cli code samples, azure cli script samples # Azure CLI samples for Azure Database for PostgreSQL - Single Server+ The following table includes links to sample Azure CLI scripts for Azure Database for PostgreSQL. | Sample link | Description | ||| |**Create a server**||
-| [Create a server and firewall rule](scripts/sample-create-server-and-firewall-rule.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. |
+| [Create a server and firewall rule](scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. |
+| **Create server with vNet rules**||
+| [Create a server with vNet rules](scripts/sample-create-server-with-vnet-rule.md) | Azure CLI script that creates an Azure Database for PostgreSQL server with a service endpoint on a virtual network and configures a vNet rule. |
|**Scale a server**||
-| [Scale a server](scripts/sample-scale-server-up-or-down.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that scales an Azure Database for PostgreSQL server up or down to allow for changing performance needs. |
+| [Scale a server](scripts/sample-scale-server-up-or-down.md) | Azure CLI script that scales an Azure Database for PostgreSQL server up or down to allow for changing performance needs. |
|**Change server configurations**||
-| [Change server configurations](./scripts/sample-change-server-configuration.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that change configurations options of an Azure Database for PostgreSQL server. |
+| [Change server configurations](./scripts/sample-change-server-configuration.md) | Azure CLI script that changes configuration options of an Azure Database for PostgreSQL server. |
|**Restore a server**||
-| [Restore a server](./scripts/sample-point-in-time-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that restores an Azure Database for PostgreSQL server to a previous point in time. |
+| [Restore a server](./scripts/sample-point-in-time-restore.md) | Azure CLI script that restores an Azure Database for PostgreSQL server to a previous point in time. |
|**Download server logs**||
-| [Enable and download server logs](./scripts/sample-server-logs.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that enables and downloads server logs of an Azure Database for PostgreSQL server. |
+| [Enable and download server logs](./scripts/sample-server-logs.md) | Azure CLI script that enables and downloads server logs of an Azure Database for PostgreSQL server. |
|||
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-change-server-configuration.md
ms.devlang: azurecli Previously updated : 02/28/2018 Last updated : 01/26/2022 # List and update configurations of an Azure Database for PostgreSQL server using Azure CLI+ This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for PostgreSQL server, and sets the *log_retention_days* to a value that is other than the default one. [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/change-server-configurations/change-server-configurations.sh?highlight=15-16 "List and update configurations of Azure Database for PostgreSQL.")]
++
+### Run the script
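The script itself is pulled in from the Azure samples repository and isn't shown inline; its core commands boil down to something like the following sketch, where `$resourceGroup` and `$server` stand in for the script's own variables:

```azurecli
# List all configurable server parameters and their allowed values
az postgres server configuration list --resource-group $resourceGroup --server-name $server

# Set log_retention_days to a non-default value (7 days here)
az postgres server configuration set --resource-group $resourceGroup --server-name $server \
    --name log_retention_days --value 7
```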
+ ## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/change-server-configurations/delete-postgresql.sh "Delete the resource group.")]
-## Script explanation
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md) - For more information on server parameters, see [How To Configure server parameters in Azure portal](../howto-configure-server-parameters-using-portal.md).
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
ms.devlang: azurecli Previously updated : 02/28/2018 Last updated : 01/26/2022 # Create an Azure Database for PostgreSQL server and configure a firewall rule using the Azure CLI+ This sample CLI script creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. Once the script has been successfully run, the PostgreSQL server can be accessed from all Azure services and the configured IP address. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/create-postgresql-server-and-firewall-rule/create-postgresql-server-and-firewall-rule.sh?highlight=15-16 "Create an Azure Database for PostgreSQL, and server-level firewall rule.")]
++
+### Run the script
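The referenced script isn't shown inline; a compact sketch of its essential steps, with placeholder values, might be:

```azurecli
az group create --name $resourceGroup --location $location

az postgres server create --resource-group $resourceGroup --name $server \
    --location $location --admin-user $login --admin-password $password --sku-name GP_Gen5_2

# A 0.0.0.0 start and end address allows connections from Azure services
az postgres server firewall-rule create --resource-group $resourceGroup --server $server \
    --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```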
+ ## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/create-postgresql-server-and-firewall-rule/delete-postgresql.sh "Delete the resource group.")]
-## Script explanation
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure) - Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
+
+ Title: CLI script - Create server with vNet rule - Azure Database for PostgreSQL
+description: This sample CLI script creates an Azure Database for PostgreSQL server with a service endpoint on a virtual network and configures a vNet rule.
+++
+ms.devlang: azurecli
++ Last updated : 01/26/2022 ++
+# Create a PostgreSQL server and configure a vNet rule using the Azure CLI
+
+This sample CLI script creates an Azure Database for PostgreSQL server and configures a vNet rule.
+++
+## Sample script
++
+### Run the script
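The script comes from the samples repository; based on the commands listed in the reference table below, a rough sketch of the flow (all names are placeholders) might be:

```azurecli
# Create a virtual network and a subnet with the Microsoft.Sql service endpoint enabled
az network vnet create --resource-group $resourceGroup --name myVnet --address-prefixes 10.0.0.0/16
az network vnet subnet create --resource-group $resourceGroup --vnet-name myVnet --name mySubnet \
    --address-prefixes 10.0.0.0/24 --service-endpoints Microsoft.Sql

# Allow traffic from that subnet to reach the PostgreSQL server
az postgres server vnet-rule create --resource-group $resourceGroup --server-name $server \
    --name myVnetRule --vnet-name myVnet --subnet mySubnet
```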
++
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+
+This script uses the commands outlined in the following table:
+
+| **Command** | **Notes** |
+|||
+| [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. |
+| [az postgres server create](/cli/azure/postgres/server#az_postgres_server_create) | Creates a PostgreSQL server that hosts the databases. |
+| [az network vnet list-endpoint-services](/cli/azure/network/vnet#az-network-vnet-list-endpoint-services) | Lists which services support VNet service tunneling in a given region. |
+| [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates a virtual network. |
+| [az network vnet subnet create](/cli/azure/network/vnet#az-network-vnet-subnet-create) | Creates a subnet and associates an existing NSG and route table. |
+| [az network vnet subnet show](/cli/azure/network/vnet#az-network-vnet-subnet-show) | Shows details of a subnet. |
+| [az postgres server vnet-rule create](/cli/azure/postgres/server/vnet-rule#az-postgres-server-vnet-rule-create) | Creates a virtual network rule to allow access to a PostgreSQL server. |
+| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
+
+## Next steps
+
+- Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-point-in-time-restore.md
ms.devlang: azurecli Previously updated : 02/28/2018 Last updated : 01/26/2022 # Restore an Azure Database for PostgreSQL server using Azure CLI+ This sample CLI script restores a single Azure Database for PostgreSQL server to a previous point in time. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the subscription ID used in the `az monitor` commands with your own subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/backup-restore/backup-restore.sh?highlight=15-16 "Restore Azure Database for PostgreSQL.")]
++
+### Run the script
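The restore itself happens in the include script; stripped down, it amounts to a call like this sketch (names and timestamp are placeholders):

```azurecli
az postgres server restore \
    --resource-group $resourceGroup \
    --name $server-restored \
    --source-server $server \
    --restore-point-in-time "2022-01-26T13:10:00Z"
```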
+ ## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/backup-restore/delete-postgresql.sh "Delete the resource group.")]
-## Script explanation
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md) - [How to backup and restore a server in Azure Database for PostgreSQL using the Azure portal](../howto-restore-server-portal.md)
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-scale-server-up-or-down.md
ms.devlang: azurecli Previously updated : 08/07/2019 Last updated : 01/26/2022 # Monitor and scale a single PostgreSQL server using Azure CLI
-This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL server after querying the metrics. Compute can scale up or down. Storage can only scale up.
-> [!IMPORTANT]
+This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL server after querying the metrics. Compute can scale up or down. Storage can only scale up.
+
+> [!IMPORTANT]
> Storage can only be scaled up, not down. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Update the script with your subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/scale-postgresql-server/scale-postgresql-server.sh "Create and scale Azure Database for PostgreSQL.")]
++
+### Run the script
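The script is included from the samples repository; its scaling calls reduce to something like this sketch (placeholder values; remember that storage only grows):

```azurecli
# Scale compute up to 4 vCores on the General Purpose, Gen 5 tier
az postgres server update --resource-group $resourceGroup --name $server --sku-name GP_Gen5_4

# Scale storage up (value in megabytes); storage cannot be scaled back down
az postgres server update --resource-group $resourceGroup --name $server --storage-size 10240
```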
+ ## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/scale-postgresql-server/delete-postgresql.sh "Delete the resource group.")]
-## Script explanation
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps+ - Learn more about [Azure Database for PostgreSQL compute and storage](../concepts-pricing-tiers.md) - Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md) - Learn more about the [Azure CLI](/cli/azure)
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/scripts/sample-server-logs.md
ms.devlang: azurecli Previously updated : 02/28/2018 Last updated : 01/26/2022 # Enable and download server slow query logs of an Azure Database for PostgreSQL server using Azure CLI+ This sample CLI script enables and downloads the slow query logs of a single Azure Database for PostgreSQL server. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the &lt;log_file_name&gt; in the `az monitor` commands with your own server log file name.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/server-logs/server-logs.sh?highlight=15-16 "Manipulate with server logs.")]
++
+### Run the script
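The include script wraps the server-logs commands; a minimal sketch, with `<log_file_name>` left as a placeholder taken from the list output:

```azurecli
# List the log files currently available on the server
az postgres server-logs list --resource-group $resourceGroup --server-name $server

# Download a specific log file by name
az postgres server-logs download --resource-group $resourceGroup --server-name $server \
    --name <log_file_name>
```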
+ ## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/postgresql/server-logs/delete-postgresql.sh "Delete the resource group.")]
-## Script explanation
+
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
+ This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md) - [Configure and access server logs in the Azure portal](../howto-configure-server-logs-in-portal.md)
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/tutorial-design-database-using-azure-cli.md
ms.devlang: azurecli Previously updated : 06/25/2019 Last updated : 01/26/2022
-# Tutorial: Design an Azure Database for PostgreSQL - Single Server using Azure CLI
+# Tutorial: Design an Azure Database for PostgreSQL - Single Server using Azure CLI
+ In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to: > [!div class="checklist"]
+>
> * Create an Azure Database for PostgreSQL server > * Configure the server firewall > * Use [**psql**](https://www.postgresql.org/docs/9.6/static/app-psql.html) utility to create a database
In this tutorial, you use Azure CLI (command-line interface) and other utilities
> * Update data > * Restore data
-You may use the Azure Cloud Shell in the browser, or [install Azure CLI]( /cli/azure/install-azure-cli) on your own computer to run the commands in this tutorial.
-## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+## Set parameter values
-If you have multiple subscriptions, choose the appropriate subscription in which the resource exists or is billed for. Select a specific subscription ID under your account using [az account set](/cli/azure/account) command.
-```azurecli-interactive
-az account set --subscription 00000000-0000-0000-0000-000000000000
-```
+The following values are used in subsequent commands to create the database and required resources. Server names need to be globally unique across all of Azure, so the $RANDOM function is used to create the server name.
+
+Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.
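The parameter block itself isn't shown in this excerpt. A minimal sketch, assuming bash-style variable names such as `$resourceGroup` and `$server` that the later commands rely on, might look like this:

```azurecli
# Variable block - names are illustrative; adjust the location and IP range for your environment.
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-postgresql-rg-$randomIdentifier"
server="msdocs-postgresql-server-$randomIdentifier"
login="azureuser"
password="<your-strong-password>"
# Replace 0.0.0.0 with the public IP address of your computer to restrict access to only your IP.
startIp="0.0.0.0"
endIp="0.0.0.0"
```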
+ ## Create a resource group
-Create an [Azure resource group](../azure-resource-manager/management/overview.md) using the [az group create](/cli/azure/group) command. A resource group is a logical container into which Azure resources are deployed and managed as a group. The following example creates a resource group named `myresourcegroup` in the `westus` location.
-```azurecli-interactive
-az group create --name myresourcegroup --location westus
-```
-## Create an Azure Database for PostgreSQL server
-Create an [Azure Database for PostgreSQL server](overview.md) using the [az postgres server create](/cli/azure/postgres/server) command. A server contains a group of databases managed as a group.
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
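The article's snippet isn't reproduced here; assuming the variables from the sketch above, the command would be similar to:

```azurecli
# Create a resource group to hold the server and related resources.
az group create --name $resourceGroup --location $location
```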
-The following example creates a server called `mydemoserver` in your resource group `myresourcegroup` with server admin login `myadmin`. The name of a server maps to DNS name and is thus required to be globally unique in Azure. Substitute the `<server_admin_password>` with your own value. It is a General Purpose, Gen 5 server with 2 vCores.
-```azurecli-interactive
-az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus --admin-user myadmin --admin-password <server_admin_password> --sku-name GP_Gen5_2 --version 9.6
-```
-The sku-name parameter value follows the convention {pricing tier}\_{compute generation}\_{vCores} as in the examples below:
-+ `--sku-name B_Gen5_2` maps to Basic, Gen 5, and 2 vCores.
-+ `--sku-name GP_Gen5_32` maps to General Purpose, Gen 5, and 32 vCores.
-+ `--sku-name MO_Gen5_2` maps to Memory Optimized, Gen 5, and 2 vCores.
-Please see the [pricing tiers](./concepts-pricing-tiers.md) documentation to understand the valid values per region and per tier.
+## Create a server
-> [!IMPORTANT]
-> The server admin login and password that you specify here are required to log in to the server and its databases later in this quickstart. Remember or record this information for later use.
+Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command.
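The create command isn't shown in this excerpt. A sketch consistent with the earlier (removed) example, using the assumed variables from above, might be:

```azurecli
# Creates a General Purpose, Gen 5, 2 vCore server; PostgreSQL 9.6 is the default version.
# The admin password must meet the complexity requirements listed in the note below.
az postgres server create \
    --resource-group $resourceGroup \
    --name $server \
    --location $location \
    --admin-user $login \
    --admin-password $password \
    --sku-name GP_Gen5_2
```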
-By default, **postgres** database gets created under your server. The [postgres](https://www.postgresql.org/docs/9.6/static/app-initdb.html) database is a default database meant for use by users, utilities, and third-party applications.
+> [!NOTE]
+>
+> * The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters. For more information, see [Azure Database for PostgreSQL Naming Rules](../azure-resource-manager/management/resource-name-rules.md#microsoftdbforpostgresql).
+> * The user name for the admin user can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+> * The password must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters.
+> * For information about SKUs, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
-## Configure a server-level firewall rule
+>[!IMPORTANT]
+>
+> * The default PostgreSQL version on your server is 9.6. To see all the versions supported, see [Supported PostgreSQL major versions](./concepts-supported-versions.md).
+> * SSL is enabled by default on your server. For more information on SSL, see [Configure SSL connectivity](./concepts-ssl-connection-security.md).
-Create an Azure PostgreSQL server-level firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command. A server-level firewall rule allows an external application, such as [psql](https://www.postgresql.org/docs/9.2/static/app-psql.html) or [PgAdmin](https://www.pgadmin.org/) to connect to your server through the Azure PostgreSQL service firewall.
+## Configure a server-based firewall rule
-You can set a firewall rule that covers an IP range to be able to connect from your network. The following example uses [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) to create a firewall rule `AllowMyIP` that allows connection from a single IP address.
+Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server.
-```azurecli-interactive
-az postgres server firewall-rule create --resource-group myresourcegroup --server mydemoserver --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
-```
-To restrict access to your Azure PostgreSQL server to only your network, you can set the firewall rule to only cover your corporate network IP address range.
+> [!TIP]
+> If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
> [!NOTE]
-> Azure PostgreSQL server communicates over port 5432. When connecting from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. Have your IT department open port 5432 to connect to your Azure SQL Database server.
->
+> To avoid connectivity issues, make sure your network's firewall allows port 5432. Azure Database for PostgreSQL servers use that port.
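The firewall-rule command isn't shown in this excerpt. Based on the earlier (removed) example and the assumed variables above, a sketch might be:

```azurecli
# Allow connections from the IP range defined in the parameter block.
az postgres server firewall-rule create \
    --resource-group $resourceGroup \
    --server-name $server \
    --name AllowMyIP \
    --start-ip-address $startIp \
    --end-ip-address $endIp
```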
+
+## List server-based firewall rules
+
+To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command.
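As a sketch, assuming the variables from above:

```azurecli
# List the firewall rules defined on the server.
az postgres server firewall-rule list --resource-group $resourceGroup --server-name $server
```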
++
+By default, the output lists any existing firewall rules in JSON format. Add the `--output table` switch for a more readable table format.
## Get the connection information
-To connect to your server, you need to provide host information and access credentials.
-```azurecli-interactive
-az postgres server show --resource-group myresourcegroup --name mydemoserver
-```
+To connect to your server, provide host information and access credentials.
-The result is in JSON format. Make a note of the **administratorLogin** and **fullyQualifiedDomainName**.
-```json
-{
- "administratorLogin": "myadmin",
- "earliestRestoreDate": null,
- "fullyQualifiedDomainName": "mydemoserver.postgres.database.azure.com",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver",
- "location": "westus",
- "name": "mydemoserver",
- "resourceGroup": "myresourcegroup",
- "sku": {
- "capacity": 2,
- "family": "Gen5",
- "name": "GP_Gen5_2",
- "size": null,
- "tier": "GeneralPurpose"
- },
- "sslEnforcement": "Enabled",
- "storageProfile": {
- "backupRetentionDays": 7,
- "geoRedundantBackup": "Disabled",
- "storageMb": 5120
- },
- "tags": null,
- "type": "Microsoft.DBforPostgreSQL/servers",
- "userVisibleState": "Ready",
- "version": "9.6"
-
-}
+```azurecli
+az postgres server show --resource-group $resourceGroup --name $server
```
-## Connect to Azure Database for PostgreSQL database using psql
-If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/9.6/static/app-psql.html), or the Azure Cloud Console to connect to an Azure PostgreSQL server. Let's now use the psql command-line utility to connect to the Azure Database for PostgreSQL server.
+Make a note of the **administratorLogin** and **fullyQualifiedDomainName** values.
-1. Run the following psql command to connect to an Azure Database for PostgreSQL database:
- ```
- psql --host=<servername> --port=<port> --username=<user@servername> --dbname=<dbname>
- ```
+## Connect to the Azure Database for PostgreSQL server by using psql
- For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for password.
-
- ```
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
+The [psql](https://www.postgresql.org/docs/current/static/app-psql.html) client is a popular choice for connecting to PostgreSQL servers. You can connect to your server by using `psql` with [Azure Cloud Shell](../cloud-shell/overview.md). You can also use `psql` on your local environment if you have it available. An empty database, **postgres**, is automatically created with a new PostgreSQL server. You can use that database to connect with `psql`, as shown in the following code.
+
+```bash
+psql --host=<server_name>.postgres.database.azure.com --port=5432 --username=<admin_user>@<server_name> --dbname=postgres
+```
+
+> [!TIP]
+> If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example, the connection string for psql would be:
+>
+> ```bash
+> psql postgresql://<admin_user>%40<server_name>@<server_name>.postgres.database.azure.com:5432/postgres
+> ```
+
+## Create a blank database
- > [!TIP]
- > If you prefer to use a URL path to connect to Postgres, URL encode the @ sign in the username with `%40`. For example the connection string for psql would be,
- > ```
- > psql postgresql://myadmin%40mydemoserver@mydemoserver.postgres.database.azure.com:5432/postgres
- > ```
+1. Once you are connected to the server, create a blank database at the prompt:
-2. Once you are connected to the server, create a blank database at the prompt:
```sql CREATE DATABASE mypgsqldb; ```
-3. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+1. At the prompt, execute the following command to switch connection to the newly created database **mypgsqldb**:
+ ```sql \c mypgsqldb ``` ## Create tables in the database+ Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks: First, create a table and load it with some data. For example, create a table that tracks inventory information:+ ```sql CREATE TABLE inventory (
- id serial PRIMARY KEY,
- name VARCHAR(50),
- quantity INTEGER
+ id serial PRIMARY KEY,
+ name VARCHAR(50),
+ quantity INTEGER
); ``` You can see the newly created table in the list of tables now by typing:+ ```sql \dt ``` ## Load data into the table+ Now that there is a table created, insert some data into it. At the open command prompt window, run the following query to insert some rows of data:+ ```sql INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150); INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
You have now added two rows of sample data into the table you created earlier. ## Query and update the data in the tables
-Execute the following query to retrieve information from the inventory table:
+
+Execute the following query to retrieve information from the inventory table:
+ ```sql SELECT * FROM inventory; ``` You can also update the data in the inventory table:+ ```sql UPDATE inventory SET quantity = 200 WHERE name = 'banana'; ``` You can see the updated values when you retrieve the data:+ ```sql SELECT * FROM inventory; ``` ## Restore a database to a previous point in time
-Imagine you have accidentally deleted a table. This is something you cannot easily recover from. Azure Database for PostgreSQL allows you to go back to any point-in-time for which your server has backups (determined by the backup retention period you configured) and restore this point-in-time to a new server. You can use this new server to recover your deleted data.
+
+Imagine you accidentally deleted a table. This isn't something you can easily recover from. Azure Database for PostgreSQL lets you go back to any point in time covered by your server's backups (determined by the backup retention period you configured) and restore that point in time to a new server. You can use the new server to recover your deleted data.
The following command restores the sample server to a point before the table was added:+ ```azurecli-interactive az postgres server restore --resource-group myresourcegroup --name mydemoserver-restored --restore-point-in-time 2017-04-13T13:59:00Z --source-server mydemoserver ```
The command is synchronous, and will return after the server is restored. Once t
## Clean up resources
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+Use the [az group delete](/cli/azure/group) command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
+```azurecli
+az group delete --name $resourceGroup
+```
## Next steps+ In this tutorial, you learned how to use Azure CLI (command-line interface) and other utilities to: > [!div class="checklist"]
+>
> * Create an Azure Database for PostgreSQL server > * Configure the server firewall > * Use the **psql** utility to create a database
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-data-owner-policy-authoring-generic.md
+
+ Title: Authoring and publishing data owner access policies
+description: Step-by-step guide on how a data owner can author and publish access policies in Azure Purview
+++++ Last updated : 1/28/2022+++
+# Authoring and publishing data owner access policies (preview)
+
+This tutorial describes how a data owner can create, update and publish access policies in Azure Purview.
+
+## Create a new policy
+
+This section describes the steps to create a new policy in Azure Purview.
+
+1. Sign in to Azure Purview Studio.
+
+1. Navigate to the **Policy management** app using the left side panel. Then select **Data policies**.
+
+1. Select the **New Policy** button in the policy page.
+
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to create policies.](./media/access-policies-common/policy-onboard-guide-1.png)
+
+1. The new policy page will appear. Enter the policy **Name** and **Description**.
+
+1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
+
+ ![Image shows how a data owner can create a new policy statement.](./media/access-policies-common/create-new-policy.png)
+
+1. Select the **Effect** button and choose *Allow* from the drop-down list.
+
+1. Select the **Action** button and choose *Read* or *Modify* from the drop-down list.
+
+1. Select the **Data Resources** button to bring up the panel for entering data resource information, which opens to the right.
+
+1. In the **Data Resources** panel, do one of two things, depending on the granularity of the policy:
+ - To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**.
+ - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
+
+ ![Image shows how a data owner can select a Data Resource when editing a policy statement.](./media/access-policies-common/select-data-source-type.png)
+
+1. Select the **Continue** button and traverse the hierarchy to select an underlying data object (for example, a folder or file). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data objects. Then select the **Add** button. This will take you back to the policy editor.
+
+ ![Image shows how a data owner can select the asset when creating or editing a policy statement.](./media/access-policies-common/select-asset.png)
+
+1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor.
+
+ ![Image shows how a data owner can select the subject when creating or editing a policy statement.](./media/access-policies-common/select-subject.png)
+
+1. Repeat steps 5 through 11 to enter any more policy statements.
+
+1. Select the **Save** button to save the policy.
+
+## Update or delete a policy
+
+The steps to update or delete an existing policy in Azure Purview are as follows.
+
+1. Sign in to Azure Purview Studio.
+
+1. Navigate to the **Policy management** app using the left side panel. Then select **Data policies**.
+
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.](./media/access-policies-common/policy-onboard-guide-2.png)
+
+1. The Policy portal will present the list of existing policies in Azure Purview. Select the policy that needs to be updated.
+
+1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
+
+ ![Image shows how a data owner can edit or delete a policy statement.](./media/access-policies-common/edit-policy.png)
+
+## Publish the policy
+
+A newly created policy is in the draft state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
+
+The steps to publish a policy are as follows
+
+1. Sign in to Azure Purview Studio.
+
+1. Navigate to the Policy management app using the left side panel. Then select **Data policies**.
+
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.](./media/access-policies-common/policy-onboard-guide-2.png)
+
+1. The Policy portal will present the list of existing policies in Azure Purview. Locate the policy that needs to be published. Select the **Publish** button in the top-right corner of the page.
+
+ ![Image shows how a data owner can publish a policy.](./media/access-policies-common/publish-policy.png)
+
+1. A list of data sources is displayed. You can enter a name to filter the list. Select each data source where this policy is to be published, and then select the **Publish** button.
+
+ ![Image shows how a data owner can select the data source where the policy will be published.](./media/access-policies-common/select-data-sources-publish-policy.png)
+
+>[!Note]
+> - After you make changes to a policy, you don't need to publish it again for the changes to take effect, as long as the policy remains bound to the same data source(s).
+
+## Next steps
+Check out the following blog, demo, and related tutorials:
+
+* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
+* [Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
+* [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/purview-connector-overview.md
Title: Azure Purview Connector Overview
-description: This article outlines the different data stores and functionalities supported in Azure Purview
+ Title: Azure Purview supported data sources and file types
+description: This article provides details about supported data sources, file types, and functionalities in Azure Purview.
Previously updated : 12/21/2021 Last updated : 01/24/2022
-# Supported data stores
+# Supported data sources and file types
-Azure Purview supports the following data stores. Select each data store to learn the supported capabilities and the corresponding configurations in details.
+This article discusses currently supported data sources, file types, and scanning concepts in Azure Purview.
## Azure Purview data sources
+The table below shows the supported capabilities for each data source. Select the data source, or the feature, to learn more.
+ |**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** | ||||||| | Azure | [Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](how-to-access-policies-storage.md) |
The following is a list of all the Azure data source (data center) regions where
- West US - West US 2
+## File types supported for scanning
+
+The following file types are supported for scanning, for schema extraction, and classification where applicable:
+
+- Structured file formats supported by extension: AVRO, ORC, PARQUET, CSV, JSON, PSV, SSV, TSV, TXT, XML, GZIP
+ > [!Note]
+ > * Azure Purview scanner only supports schema extraction for the structured file types listed above.
+ > * For AVRO, ORC, and PARQUET file types, Azure Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
+ > * Azure Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
+ > * For GZIP file types, the GZIP must be mapped to a single csv file within.
+ > Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
+ > * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
+- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT
+- Azure Purview also supports custom file extensions and custom parsers.
+
+## Nested data
+
+Currently, nested data is only supported for JSON content.
+
+For all [system supported file types](#file-types-supported-for-scanning), if there is nested JSON content in a column, then the scanner parses the nested JSON data and surfaces it within the schema tab of the asset.
+
+Nested data, or nested schema parsing, is not supported in SQL. A column with nested data will be reported and classified as is, and sub-data will not be parsed.
+
+## Sampling within a file
+
+In Azure Purview terminology,
+- L1 scan: Extracts basic information and meta data like file name, size and fully qualified name
+- L2 scan: Extracts schema for structured file types and database tables
+- L3 scan: Extracts schema where applicable and subjects the sampled file to system and custom classification rules
+
+For all structured file formats, Azure Purview scanner samples files in the following way:
+
+- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower.
+- For document file formats, it samples the first 20 MB of each file.
+ - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Azure Purview captures only basic meta data like file name and fully qualified name.
+- For **tabular data sources (SQL, CosmosDB)**, it samples the top 128 rows.
+
+## Resource set file sampling
+
+A folder or group of partition files is detected as a *resource set* in Azure Purview if it matches a system resource set policy or a customer-defined resource set policy. If a resource set is detected, Azure Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
+
+File sampling for resource sets by file types:
+
+- **Delimited files (CSV, PSV, SSV, TSV)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'
+- **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a *resource set*
+- **Other structured file types (JSON, XML, TXT)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'
+- **SQL objects and CosmosDB entities** - Each file is L3 scanned.
+- **Document file types** - Each file is L3 scanned. Resource set patterns don't apply to these file types.
+
+## Classification
+
+All 206 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Azure Purview](supported-classifications.md).
+ ## Next steps - [Register and scan Azure Blob storage source](register-scan-azure-blob-storage-source.md)
+- [Scans and ingestion in Azure Purview](concept-scans-and-ingestion.md)
+- [Manage data sources in Azure Purview](manage-data-sources.md)
purview Sources And Scans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/sources-and-scans.md
- Title: Supported data sources and file types
-description: This article provides conceptual details about supported data sources and file types in Azure Purview.
----- Previously updated : 09/27/2021--
-# Supported data sources and file types in Azure Purview
-
-This article discusses supported data sources, file types and scanning concepts in Azure Purview.
-
-## Supported data sources
-
-Azure Purview supports all the data sources listed [here](purview-connector-overview.md).
-
-## File types supported for scanning
-
-The following file types are supported for scanning, for schema extraction and classification where applicable:
--- Structured file formats supported by extension: AVRO, ORC, PARQUET, CSV, JSON, PSV, SSV, TSV, TXT, XML, GZIP
- > [!Note]
- > * Azure Purview scanner only supports schema extraction for the structured file types listed above.
- > * For AVRO, ORC, and PARQUET file types, Azure Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
- > * Azure Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
- > * For GZIP file types, the GZIP must be mapped to a single csv file within.
- > Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
- > * For delimited file types(CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.
-- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT-- Azure Purview also supports custom file extensions and custom parsers.-
-## Sampling within a file
-
-In Azure Purview terminology,
-- L1 scan: Extracts basic information and meta data like file name, size and fully qualified name-- L2 scan: Extracts schema for structured file types and database tables-- L3 scan: Extracts schema where applicable and subjects the sampled file to system and custom classification rules-
-For all structured file formats, Azure Purview scanner samples files in the following way:
--- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower.-- For document file formats, it samples the first 20 MB of each file.
- - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Azure Purview captures only basic meta data like file name and fully qualified name.
-- For **tabular data sources(SQL, CosmosDB)**, it samples the top 128 rows. -
-## Resource set file sampling
-
-A folder or group of partition files is detected as a *resource set* in Azure Purview, if it matches with a system resource set policy or a customer defined resource set policy. If a resource set is detected, then Azure Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
-
-File sampling for resource sets by file types:
--- **Delimited files (CSV, PSV, SSV, TSV)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'-- **Data Lake file types (Parquet, Avro, Orc)** - 1 in 18446744073709551615 (long max) files are sampled (L3 scan) within a folder or group of partition files that are considered a *resource set*-- **Other structured file types (JSON, XML, TXT)** - 1 in 100 files are sampled (L3 scan) within a folder or group of partition files that are considered a 'Resource set'-- **SQL objects and CosmosDB entities** - Each file is L3 scanned.-- **Document file types** - Each file is L3 scanned. Resource set patterns don't apply to these file types.-
-## Classification
-
-All 206 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Azure Purview](supported-classifications.md).
-
-## Next steps
--- [Scans and ingestion in Azure Purview](concept-scans-and-ingestion.md)-- [Manage data sources in Azure Purview](manage-data-sources.md)
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-resource-group.md
Title: Access provisioning by data owner to resource groups or subscriptions
-description: Step-by-step guide showing how a data owner can create policies on resource groups or subscriptions.
+ Title: Resource group and subscription access provisioning by data owner
+description: Step-by-step guide showing how a data owner can create access policies to resource groups or subscriptions.
Previously updated : 1/25/2022 Last updated : 1/28/2022
-# Access provisioning by data owner to resource groups or subscriptions (preview)
+# Tutorial: Resource group and subscription access provisioning by data owner (preview)
-This guide describes how a data owner can leverage Azure Purview to enable access to ALL data sources in a subscription or a resource group. This can be achieved through a single policy statement, and will cover all existing data sources, as well as data sources that are created afterwards. However, at this point, only the following data sources are supported:
+This tutorial describes how a data owner can leverage Azure Purview to enable access to ALL data sources in a subscription or a resource group. This can be achieved through a single policy statement, and will cover all existing data sources, as well as data sources that are created afterwards. However, at this point, only the following data sources are supported:
- Blob storage - Azure Data Lake Storage (ADLS) Gen2
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Prerequisites
+> * Configure permissions
+> * Register a data asset for Data use governance
+> * Create and publish a policy
+ > [!Note] > These capabilities are currently in preview. This preview version is provided without a service level agreement, and should not be used for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
## Configuration [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register the subscription or resource group in Azure Purview
+### Register the subscription or resource group in Azure Purview for Data use governance
The subscription or resource group needs to be registered with Azure Purview to later define access policies. You can follow this guide: - [Register multiple sources - Azure Purview](register-scan-azure-multiple-sources.md)
-Enable the resource group or subscription for access policies in Azure Purview by setting the **Data use governance** toggle to enable, as shown in the picture.
+Enable the resource group or the subscription for access policies in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
-![Image shows how to register a data source for policy.](./media/tutorial-access-policies-resource-group/register-resource-group-for-policy.png)
+![Image shows how to register a resource group or subscription for policy.](./media/tutorial-data-owner-policies-resource-group/register-resource-group-for-policy.png)
[!INCLUDE [Access policies generic registration](./includes/access-policies-registration-generic.md)]
-## Policy authoring
+## Create and publish a data owner policy
+Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
+
+![Image shows a sample data owner policy giving access to a resource group.](./media/tutorial-data-owner-policies-resource-group/data-owner-policy-example-resource-group.png)
## Additional information
The limit for Azure Purview policies that can be enforced by Storage accounts is
> - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in the data source. ## Next steps
-Check the blog and demo related to the capabilities mentioned in this how-to guide
+Check out the following blog, demo, and related tutorials:
* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
* [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-owner-policies-storage.md
Title: Access provisioning by data owner to Azure Storage datasets
-description: Step-by-step guide on how to integrate Azure Storage with Azure Purview to enable data owners to create access policies.
+description: Step-by-step guide showing how data owners can create access policies to datasets in Azure Storage
- Previously updated : 1/25/2022+ Last updated : 1/28/2022
-# Access provisioning by data owner to Azure Storage datasets (preview)
+# Tutorial: Access provisioning by data owner to Azure Storage datasets (preview)
-This guide describes how a data owner can leverage Azure Purview to enable access to datasets in Azure Storage. At this point, only the following data sources are supported:
+This tutorial describes how a data owner can leverage Azure Purview to enable access to datasets in Azure Storage. At this point, only the following data sources are supported:
- Blob storage - Azure Data Lake Storage (ADLS) Gen2
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Prerequisites
+> * Configure permissions
+> * Register a data asset for Data use governance
+> * Create and publish a policy
+ > [!Note] > These capabilities are currently in preview. This preview version is provided without a service level agreement, and should not be used for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
## Configuration [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register and scan data sources in Azure Purview
-Register and scan each data source with Azure Purview to later define access policies. You can follow these guides:
+### Register the data sources in Azure Purview for Data use governance
+Register and scan each Storage account with Azure Purview to later define access policies. You can follow these guides:
- [Register and scan Azure Storage Blob - Azure Purview](register-scan-azure-blob-storage-source.md) - [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md)
-Enable the data source for access policies in Azure Purview by setting the **Data use governance** toggle to enable, as shown in the picture.
+Enable the data source for access policies in Azure Purview by setting the **Data use governance** toggle to **Enabled**, as shown in the picture.
-![Image shows how to register a data source for policy.](./media/how-to-access-policies-storage/register-data-source-for-policy-storage.png)
+![Image shows how to register a data source for policy.](./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png)
[!INCLUDE [Access policies generic registration](./includes/access-policies-registration-generic.md)]
-## Policy authoring
+## Create and publish a data owner policy
+Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
+
+![Image shows a sample data owner policy giving access to an Azure Storage account.](./media/tutorial-data-owner-policies-storage/data-owner-policy-example-storage.png)
+ ## Additional information >[!Important]
This section contains a reference of how actions in Azure Purview data policies
## Next steps
-Check the blog and demo related to the capabilities mentioned in this how-to guide
+Check out the following blog, demo, and related tutorials:
* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954) * [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/rbac-and-directory-admin-roles.md
Account Administrator, Service Administrator, and Co-Administrator are the three
| | | | | | Account Administrator | 1 per Azure account | <ul><li>Can access the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and manage billing</li><li>Manage billing for all subscriptions in the account</li><li>Create new subscriptions</li><li>Cancel subscriptions</li><li>Change the billing for a subscription</li><li>Change the Service Administrator</li><li>Can't cancel subscriptions unless they have the Service Administrator or subscription Owner role</li></ul> | Conceptually, the billing owner of the subscription. | | Service Administrator | 1 per Azure subscription | <ul><li>Manage services in the [Azure portal](https://portal.azure.com)</li><li>Cancel the subscription</li><li>Assign users to the Co-Administrator role</li></ul> | By default, for a new subscription, the Account Administrator is also the Service Administrator.<br>The Service Administrator has the equivalent access of a user who is assigned the Owner role at the subscription scope.<br>The Service Administrator has full access to the Azure portal. |
-| Co-Administrator | 200 per subscription | <ul><li>Same access privileges as the Service Administrator, but canΓÇÖt change the association of subscriptions to Azure directories</li><li>Assign users to the Co-Administrator role, but cannot change the Service Administrator</li></ul> | The Co-Administrator has the equivalent access of a user who is assigned the Owner role at the subscription scope. |
+| Co-Administrator | 200 per subscription | <ul><li>Same access privileges as the Service Administrator, but can't change the association of subscriptions to Azure AD directories</li><li>Assign users to the Co-Administrator role, but cannot change the Service Administrator</li></ul> | The Co-Administrator has the equivalent access of a user who is assigned the Owner role at the subscription scope. |
In the Azure portal, you can manage Co-Administrators or view the Service Administrator by using the **Classic administrators** tab.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/troubleshooting.md
na Previously updated : 01/21/2022 Last updated : 01/28/2022
If you recently invited a user when creating a role assignment, this security pr
However, if this security principal is not a recently invited user, it might be a deleted security principal. If you assign a role to a security principal and then you later delete that security principal without first removing the role assignment, the security principal will be listed as **Identity not found** and an **Unknown** type.
-If you list this role assignment using Azure PowerShell, you might see an empty `DisplayName` and an `ObjectType` set to **Unknown**. For example, [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) returns a role assignment that is similar to the following output:
+If you list this role assignment using Azure PowerShell, you might see an empty `DisplayName` and `SignInName`. For example, [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) returns a role assignment that is similar to the following output:
``` RoleAssignmentId : /subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleAssignments/22222222-2222-2222-2222-222222222222
SignInName :
RoleDefinitionName : Storage Blob Data Contributor RoleDefinitionId : ba92f5b4-2d11-453d-a403-e96b0029c9fe ObjectId : 33333333-3333-3333-3333-333333333333
-ObjectType : Unknown
+ObjectType : User
CanDelegate : False ```
search Search How To Index Power Query Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-how-to-index-power-query-data-sources.md
Power Query connectors can reach a broader range of data sources, including thos
This article shows you an Azure portal-based approach for setting up an indexer using Power Query connectors. Currently, there is no SDK support. > [!NOTE]
-> Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Preview functionality is provided under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and is not recommended for production workloads.
## Supported functionality+ Power Query connectors are used in indexers. An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from an external data source and populates an index based on field-to-field mappings between the index and your data source. This approach is sometimes referred to as a 'pull model' because the service pulls data in without you having to write any code that adds data to an index. Indexers provide a convenient way for users to index content from their data source without having to write their own crawler or push model. Indexers that reference Power Query data sources have the same level of support for skillsets, schedules, high water mark change detection logic, and most parameters that other indexers support. ## Prerequisites+ Before you start pulling data from one of the supported data sources, you'll want to make sure you have all your resources set up.
-+ Azure Cognitive Search service
- + Azure Cognitive Search service set up in a [supported region](search-how-to-index-power-query-data-sources.md#regional-availability).
- + Ensure that the Azure Cognitive Search team has enabled your search service for the preview. You can sign up for the preview by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
-+ Azure Blob Storage account
- + A Blob Storage account is required for the preview to be used as an intermediary for your data. The data will flow from your data source, then to Blob Storage, then to the index. This requirement only exists with the initial gated preview.
+++ Azure Cognitive Search service in a [supported region](search-how-to-index-power-query-data-sources.md#regional-availability).+++ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview). This feature must be enabled on the backend.+++ Azure Blob Storage account, used as an intermediary for your data. The data will flow from your data source, then to Blob Storage, then to the index. This requirement only exists with the initial gated preview. ## Getting started using the Azure portal+ The Azure portal provides support for the Power Query connectors. By sampling data and reading metadata on the container, the Import data wizard in Azure Cognitive Search can create a default index, map source fields to target index fields, and load the index in a single operation. Depending on the size and complexity of source data, you could have an operational full text search index in minutes. The following video shows how to set up a Power Query connector in Azure Cognitive Search.
When creating the indexer, you can optionally choose to run the indexer on a sch
Once you've finished filling out this page select **Submit**. ## High Water Mark Change Detection policy+ This change detection policy relies on a "high water mark" column capturing the version or time when a row was last updated. ### Requirements+ + All inserts specify a value for the column. + All updates to an item also change the value of the column. + The value of this column increases with each insert or update. ## Unsupported column names+ Field names in an Azure Cognitive Search index have to meet certain requirements. One of these requirements is that some characters such as "/" are not allowed. If a column name in your database does not meet these requirements, the index schema detection will not recognize your column as a valid field name and you won't see that column listed as a suggested field for your index. Normally, using [field mappings](search-indexer-field-mappings.md) would solve this problem but field mappings are not supported in the portal. To index content from a column in your table that has an unsupported field name, rename the column during the "Transform your data" phase of the import data process. For example, you can rename a column named "Billing code/Zip code" to "zipcode". By renaming the column, the index schema detection will recognize it as a valid field name and add it as a suggestion to your index definition. ## Regional availability+ The preview is only available to customers with search services in the following regions:+ + Central US + East US + East US 2
The preview is only available to customers with search services in the following
+ West US 2 ## Preview limitations+ There is a lot to be excited about with this preview, but there are a few limitations. This section describes the limitations that are specific to the current version of the preview.+ + Pulling binary data from your data source is not supported in this version of the preview. + + [Debug sessions](cognitive-search-debug-session.md) are not supported at this time. ## Next steps+ You have learned how to pull data from new data sources using the Power Query connectors. To learn more about indexers, see [Indexers in Azure Cognitive Search](search-indexer-overview.md).
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-mysql.md
Title: Index data from Azure MySQL (preview)
+ Title: Azure DB for MySQL (preview)
-description: Set up a search indexer to index data stored in Azure MySQL for full text search in Azure Cognitive Search.
+description: Set up a search indexer to index data stored in Azure Database for MySQL for full text search in Azure Cognitive Search.
ms.devlang: rest-api Previously updated : 05/17/2021 Last updated : 01/27/2022
-# Index data from Azure MySQL
+# Index data from Azure Database for MySQL
> [!IMPORTANT]
-> MySQL support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently no SDK support and no portal support.
+> MySQL support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a preview REST API [(2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently no SDK or portal support.
-The Azure Cognitive Search indexer for MySQL will crawl your MySQL database on Azure, extract searchable data, and index it in Azure Cognitive Search. The indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index.
+Configure a [search indexer](search-indexer-overview.md) to extract content from Azure Database for MySQL and make it searchable in Azure Cognitive Search. The indexer will crawl your MySQL database on Azure, extract searchable data, and index it in Azure Cognitive Search. When configured to include a high water mark and soft deletion, the indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index.
-You can set up an Azure MySQL indexer by using any of these clients:
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information specific to indexing content in Azure Database for MySQL.
-* [Azure portal](https://ms.portal.azure.com)
-* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
-* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+## Prerequisites
-This article uses the REST APIs.
++ [Azure Database for MySQL](../mysql/overview.md) ([single server](../mysql/single-server-overview.md)).
-## Create an Azure MySQL indexer
++ A table or view that provides the content. A primary key is required. If you're using a view, it must have a high water mark column.
-To index MySQL on Azure follow the below steps.
++ A REST API client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to create the data source, index, and indexer.
-### Step 1: Create a data source
++ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
-To create the data source, send the following request:
+## Preview limitations
-```http
+Currently, change tracking and deletion detection aren't working if the date or timestamp is uniform for all rows. This is a known issue that will be addressed in an update to the preview. Until this issue is addressed, don't add a skillset to the MySQL indexer.
- POST https://[search service name].search.windows.net/datasources?api-version=2020-06-30-Preview
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "[Data source name]"
- "description" : "[Description of MySQL data source]",
- "type" : "mysql",
- "credentials" : {
- "connectionString" :
- "Server=[MySQLServerName].MySQL.database.azure.com; Port=3306; Database=[DatabaseName]; Uid=[UserName]; Pwd=[Password]; SslMode=Preferred;"
- },
- "container" : {
- "name" : "[TableName]"
- },
- "dataChangeDetectionPolicy" : {
- "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
- "highWaterMarkColumnName": "[HighWaterMarkColumn]"
- }
- }
+The preview doesn't support geometry types and blobs.
-```
+As noted, there's no portal or SDK support for indexer creation, but a MySQL indexer and data source can be managed in the portal once they exist.
-### Step 2: Create an index
+## Define the data source
-Create the target Azure Cognitive Search index if you donΓÇÖt have one already.
+The data source definition specifies the data source type, content path, and how to connect.
-```http
+[Create or Update Data Source](/rest/api/searchservice/create-data-source) specifies the definition. Set "credentials" to an ADO.NET connection string. You can find connection strings in Azure portal, on the **Connection strings** page for MySQL. Be sure to use a preview REST API version (2020-06-30-Preview or later) when creating the data source.
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name": "[Index name]",
- "fields": [{
- "name": "id",
- "type": "Edm.String",
- "key": true,
- "searchable": false
- }, {
- "name": "description",
- "type": "Edm.String",
- "filterable": false,
- "searchable": true,
- "sortable": false,
- "facetable": false
- }]
+```http
+POST https://[search service name].search.windows.net/datasources?api-version=2020-06-30-Preview
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "hotel-mysql-ds"
+ "description" : "[Description of MySQL data source]",
+ "type" : "mysql",
+ "credentials" : {
+ "connectionString" :
+ "Server=[MySQLServerName].MySQL.database.azure.com; Port=3306; Database=[DatabaseName]; Uid=[UserName]; Pwd=[Password]; SslMode=Preferred;"
+ },
+ "container" : {
+ "name" : "[TableName]"
+ },
+ "dataChangeDetectionPolicy" : {
+ "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
+ "highWaterMarkColumnName": "[HighWaterMarkColumn]"
}
+}
+```
+
+## Add search fields to an index
+
+In a [search index](search-what-is-an-index.md), add search index fields that correspond to the fields in your table.
+[Create or Update Index](/rest/api/searchservice/create-index) specifies the fields:
+
+```http
+{
+ "name" : "hotels-mysql-ix",
+ "fields": [
+ { "name": "ID", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "HotelName", "type": "Edm.String", "searchable": true, "filterable": false },
+ { "name": "Category", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "City", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "Description", "type": "Edm.String", "searchable": false, "filterable": false, "sortable": false }
+  ]
+}
```
-### Step 3: Create the indexer
+If the primary key in the source table matches the document key (in this case, "ID"), the indexer will import the primary key as the document key.
+
+## Configure the MySQL indexer
Once the index and data source have been created, you're ready to create the indexer.
+[Create or Update Indexer](/rest/api/searchservice/create-indexer) specifies the predefined data source and search index.
+```http
+POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30
+
+{
+ "name" : "hotels-mysql-idxr",
+ "dataSourceName" : "hotels-mysql-ds",
+ "targetIndexName" : "hotels-mysql-ix",
+ "disabled": null,
+ "schedule": null,
+ "parameters": {
+ "batchSize": null,
+ "maxFailedItems": null,
+ "maxFailedItemsPerBatch": null,
+ "base64EncodeKeys": null,
+ "configuration": { }
+ },
+ "fieldMappings" : [ ],
+ "encryptionKey": null
+}
+```
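If the source column that holds the key has a different name than the index key field, add an entry to the indexer's "fieldMappings" array instead of leaving it empty. A minimal sketch, assuming a hypothetical source column named `hotel_id` that should populate the "ID" key field:

```json
"fieldMappings" : [
    { "sourceFieldName" : "hotel_id", "targetFieldName" : "ID" }
]
```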
- POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30-Preview
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "[Indexer name]"
- "description" : "[Description of MySQL indexer]",
- "dataSourceName" : "[Data source name]",
- "targetIndexName" : "[Index name]"
- }
+By default, the indexer runs when it's created on the search service. You can set "disabled" to true if you prefer to run the indexer manually.
-```
+You can now [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md).
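For example, a minimal sketch of running the indexer on demand and then checking its execution history, using the indexer name defined above:

```http
POST https://[search service name].search.windows.net/indexers/hotels-mysql-idxr/run?api-version=2020-06-30
api-key: [admin key]

GET https://[search service name].search.windows.net/indexers/hotels-mysql-idxr/status?api-version=2020-06-30
api-key: [admin key]
```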
-## Run indexers on a schedule
-You can also arrange the indexer to run periodically on a schedule. To do this, add the **schedule** property when creating or updating the indexer. The example below shows a PUT request to update the indexer:
+To put the indexer on a schedule, set the "schedule" property when creating or updating the indexer. Here is an example of a schedule that runs every 15 minutes.
```http
- PUT https://[search service name].search.windows.net/indexers/[Indexer name]?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin-key]
-
- {
- "dataSourceName" : "[Data source name]",
- "targetIndexName" : "[Index name]",
- "schedule" : {
- "interval" : "PT10M",
- "startTime" : "2021-01-01T00:00:00Z"
- }
+PUT https://[search service name].search.windows.net/indexers/hotels-mysql-idxr?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin-key]
+
+{
+ "dataSourceName" : "hotels-mysql-ds",
+ "targetIndexName" : "hotels-mysql-ix",
+ "schedule" : {
+ "interval" : "PT15M",
+ "startTime" : "2022-01-01T00:00:00Z"
}
+}
```
-The **interval** parameter is required. The interval refers to the time between the start of two consecutive indexer executions. The smallest allowed interval is 5 minutes; the longest is one day. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). The pattern for this is: `P(nD)(T(nH)(nM))`. Examples: `PT15M` for every 15 minutes, `PT2H` for every 2 hours.
-
-For more information about defining indexer schedules see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
- ## Capture new, changed, and deleted rows
-Azure Cognitive Search uses **incremental indexing** to avoid having to reindex the entire table or view every time an indexer runs.
+If your data source meets the requirements for change and deletion detection, the indexer can incrementally index just the changes in your data source since the last indexer run, so you avoid having to re-index the entire table or view every time the indexer runs.
<a name="HighWaterMarkPolicy"></a>
This change detection policy relies on a "high water mark" column capturing the
#### Requirements
-* All inserts specify a value for the column.
-* All updates to an item also change the value of the column.
-* The value of this column increases with each insert or update.
-* Queries with the following WHERE and ORDER BY clauses can be executed efficiently: `WHERE [High Water Mark Column] > [Current High Water Mark Value] ORDER BY [High Water Mark Column]`
++ All inserts specify a value for the column.
++ All updates to an item also change the value of the column.
++ The value of this column increases with each insert or update.
++ Queries with the following WHERE and ORDER BY clauses can be executed efficiently: `WHERE [High Water Mark Column] > [Current High Water Mark Value] ORDER BY [High Water Mark Column]`

#### Usage

To use a high water mark policy, create or update your data source like this:

```http
- {
- "name" : "[Data source name]",
- "type" : "mysql",
- "credentials" : { "connectionString" : "[connection string]" },
- "container" : { "name" : "[table or view name]" },
- "dataChangeDetectionPolicy" : {
- "@odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
- "highWaterMarkColumnName" : "[last_updated column name]"
- }
+{
+ "name" : "[Data source name]",
+ "type" : "mysql",
+ "credentials" : { "connectionString" : "[connection string]" },
+ "container" : { "name" : "[table or view name]" },
+ "dataChangeDetectionPolicy" : {
+ "@odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
+ "highWaterMarkColumnName" : "[last_updated column name]"
}
+}
```

> [!WARNING]
> If the source table does not have an index on the high water mark column, queries used by the MySQL indexer may time out. In particular, the `ORDER BY [High Water Mark Column]` clause requires an index to run efficiently when the table contains many rows.

### Soft Delete Column Deletion Detection policy

When rows are deleted from the source table, you probably want to delete those rows from the search index as well. If the rows are physically removed from the table, Azure Cognitive Search has no way to infer the presence of records that no longer exist. However, you can use the "soft-delete" technique to logically delete rows without removing them from the table. Add a column to your table or view and mark rows as deleted using that column. When using the soft-delete technique, you can specify the soft delete policy as follows when creating or updating the data source:

```http
- {
- …,
- "dataDeletionDetectionPolicy" : {
- "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
- "softDeleteColumnName" : "[a column name]",
- "softDeleteMarkerValue" : "[the value that indicates that a row is deleted]"
- }
+{
+ …,
+ "dataDeletionDetectionPolicy" : {
+ "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
+ "softDeleteColumnName" : "[a column name]",
+ "softDeleteMarkerValue" : "[the value that indicates that a row is deleted]"
}
+}
```
-The **softDeleteMarkerValue** must be a string ΓÇô use the string representation of your actual value. For example, if you have an integer column where deleted rows are marked with the value 1, use `"1"`. If you have a BIT column where deleted rows are marked with the Boolean true value, use the string literal `True` or `true`, the case doesn't matter.
+The "softDeleteMarkerValue" must be a string ΓÇô use the string representation of your actual value. For example, if you have an integer column where deleted rows are marked with the value 1, use `"1"`. If you have a BIT column where deleted rows are marked with the Boolean true value, use the string literal `True` or `true`, the case doesn't matter.
<a name="TypeMapping"></a>
-## Mapping between MySQL and Azure Cognitive Search data types
+## Mapping data types
+
+The following table maps MySQL data types to Cognitive Search equivalents. For more information, see [Supported data types (Azure Cognitive Search)](/rest/api/searchservice/supported-data-types).
> [!NOTE]
-> The preview does not support geometry types and blobs.
+> The preview does not support geometry types and blobs.
-| MySQL data type | Allowed target index field types |
-| | |
+| MySQL data type | Cognitive Search field type |
+| | -- |
| bool, boolean | Edm.Boolean, Edm.String |
| tinyint, smallint, mediumint, int, integer, year | Edm.Int32, Edm.Int64, Edm.String |
| bigint | Edm.Int64, Edm.String |
The **softDeleteMarkerValue** must be a string ΓÇô use the string representation
| char, varchar, tinytext, mediumtext, text, longtext, enum, set, time | Edm.String |
| unsigned numerical data, serial, decimal, dec, bit, blob, binary, geometry | N/A |

## Next steps
-Congratulations! You have learned how to integrate MySQL with Azure Cognitive Search using an indexer.
+This article explained how to integrate Azure Database for MySQL with Azure Cognitive Search using an indexer. Now that you have a search index that contains your searchable content, run some full text queries using Search explorer in the Azure portal.
-+ To learn more about indexers, see [Creating Indexers in Azure Cognitive Search](search-howto-create-indexers.md)
+> [!div class="nextstepaction"]
+> [Search explorer](search-explorer.md)
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/quickstart-portal-app-service-connection.md
Title: Quickstart - Create a service connection in App Service from Azure portal
-description: Quickstart showing how to create a service connection in App Service from Azure portal
+ Title: Quickstart - Create a service connection in App Service from the Azure portal
+description: Quickstart showing how to create a service connection in App Service from the Azure portal
- Previously updated : 10/29/2021+ Last updated : 01/27/2022
+# Customer intent: As an app developer, I want to connect several services together so that I can ensure I have the right connectivity to access my Azure resources.
-# Quickstart: Create a service connection in App Service from Azure portal
+# Quickstart: Create a service connection in App Service from the Azure portal
-This quickstart shows you how to create a new service connection with Service Connector in App Service from Azure portal.
+Get started with Service Connector by using the Azure portal to create a new service connection in Azure App Service.
+## Prerequisites
-This quickstart assumes that you already have at least an App Service running on Azure. If you don't have an App Service, [create one](../app-service/quickstart-dotnetcore.md).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An application deployed to App Service in a [Region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy an app to App Service](../app-service/quickstart-dotnetcore.md).
## Sign in to Azure
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Create a new service connection in App Service
-You will use Service Connector to create a new service connection in App Service.
+You'll use Service Connector to create a new service connection in App Service.
-1. Select **All resource** button found on the left of the Azure portal. Type **App Service** in the filter and click the name of the App Service you want to use in the list.
+1. Select the **All resources** button on the left of the Azure portal. Type **App Service** in the filter and select the name of the App Service you want to use in the list.
2. Select **Service Connector (Preview)** from the left table of contents. Then select **Create**.
3. Select or enter the following settings.

    | Setting | Suggested value | Description |
    | ------- | --------------- | ----------- |
+ | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
| **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is. The default value is the subscription that this App Service is in. |
- | **Service Type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use an other service type. |
- | **Connection Name** | Generated unique name | The connection name that identifies the connection between your App Service and target service |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your App Service and target service |
| **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
- | **Client Type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
+ | **Client type** | The same app stack on this App Service | Your application stack that works with the target service you selected. The default value comes from the App Service runtime stack. |
4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob storage account.
You will use Service Connector to create a new service connection in App Service
1. In **Service Connector (Preview)**, you see an App Service connection to the target service.
-1. Click **>** button to expand the list, you can see the environment variables required by your application code.
+1. Select the **>** button to expand the list. You can see the environment variables required by your application code.
-1. Click **...** button and select **Validate**, you can see the connection validation details in the pop-up blade from right.
+1. Select the **...** button and select **Validate**. You can see the connection validation details in the pop-up panel on the right.
## Next steps
service-fabric Infrastructure Service Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/infrastructure-service-faq.md
+
+ Title: Introduction to the Service Fabric Infrastructure Service
+description: Frequently asked questions about Service Fabric Infrastructure Service
++ Last updated : 1/21/2022+++
+# Introduction to the Service Fabric Infrastructure Service
+In this article, we describe Infrastructure Service, a part of Azure Service Fabric that coordinates Azure infrastructure updates with cluster health so that the underlying infrastructure is updated safely.
+
+## Infrastructure Service Details
+
+The Service Fabric Infrastructure Service is a system service for Azure clusters that ensures all infrastructure operations are done in a safe manner. The service is responsible for coordinating all infrastructure updates to the underlying virtual machine scale sets with durability level Silver and higher. Typically there's one Infrastructure Service per node type, but there will be three if it's a zone-resilient node type. All Platform and Tenant updates on a virtual machine scale set corresponding to these node types take into account the state of the cluster and the potential impact of the update. The service then decides if the operation can take place without compromising the health of the replicas and instances running on the cluster.
+
+The rest of this document covers frequently asked questions about Infrastructure Service:
+
+## FAQs
+
+### What are the different kinds of updates that are managed by Infrastructure Service?
 * Platform Update - An update to the underlying host for the virtual machine scale set, initiated by the Azure platform and performed in a safe manner by Upgrade Domain (UD).
 * Tenant Update - A user-initiated update of the scale set, such as modifying the VM count, configuration, or guest OS.
 * Tenant Maintenance - A user-initiated repair to a single instance of the virtual machine scale set, such as a reboot.
 * Platform Maintenance - Maintenance initiated by Azure Compute on a virtual machine or set of virtual machines in a virtual machine scale set.
+
+### How do I enable Infrastructure Service on my cluster?
+Infrastructure Service is enabled by default in an Azure Service Fabric cluster if the node type is set to Silver durability or higher. To migrate an existing Bronze node type to Silver durability, follow the steps in [Changing durability levels](service-fabric-cluster-capacity.md#changing-durability-levels).
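For reference, the durability level is declared on the node type in the cluster resource, and the same level must also be set on the matching virtual machine scale set's Service Fabric extension. A minimal, illustrative Resource Manager template fragment (the node type name `nt1` and the instance count are assumptions for this example):

```json
"nodeTypes": [
    {
        "name": "nt1",
        "isPrimary": true,
        "durabilityLevel": "Silver",
        "vmInstanceCount": 5
    }
]
```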
+
+### What is the required instance count for Tenant/Platform updates to be safe?
+A minimum of five instances of the virtual machine scale set is required for Tenant or Platform updates to be performed safely. However, having five instances doesn't guarantee the operation will proceed. Workloads may configure or require additional restrictions or resources, which increases the minimum required count. For a virtual machine scale set spanning zones, at least five instances are required in each zone for the operations to be safe.
+
+### Why are my virtual machine scale sets not getting updated?
+Updates on the virtual machine scale sets can be stuck for a longer duration for any of the following reasons:
 * You've performed multiple updates on the virtual machine scale set, and Service Fabric is updating the virtual machines in a safe manner. Service Fabric executes the updates one by one, so the overall duration is longer.
 * The updates you've tried on the virtual machine scale set aren't progressing because the Repair Task corresponding to the infrastructure update isn't getting approved. The approval can be blocked for multiple reasons, but usually it's because there aren't enough resources to progress safely. The existing replicas and instances need to be placed somewhere else before the nodes can safely go down for the update.
 * Other Azure Platform Updates and Tenant Maintenance operations are currently progressing on the node type. Service Fabric throttles the virtual machine scale set updates until the Platform updates complete, in order to execute the updates in a safe manner. By default, Service Fabric allows at most two infrastructure updates on a scale set at a time. Platform updates aren't stopped, so they take priority over Tenant updates.
+
+### I see multiple Tenant Update Repair Jobs stuck in Preparing state. What do I do?
+Tenant update jobs that get stuck in the Preparing state mean that Service Fabric can't place the existing replicas from the nodes to be updated somewhere else. Stuck jobs generally occur in scenarios such as insufficient capacity or seed node removal, which can lead to the Repair Task being blocked. Use Service Fabric Explorer to check the Repair Task associated with the Tenant Update to find out why it's stuck.
+
+### The Platform or Tenant update is in executing state for quite a while, and blocking my updates. What do I do?
+Platform and Tenant updates acknowledged by Service Fabric are performed by the underlying compute platform. Service Fabric waits for acknowledgment from the Compute platform that the updates have been successfully applied. If an update remains in the executing state for a long time, reach out to Compute team support to find out why the Platform update isn't making progress.
+
+### How do I ensure all updates in my cluster are safe?
+All Tenant update operations in a Service Fabric cluster are approved only if Service Fabric determines that they're safe. Updates are blocked when Service Fabric can't ensure that the operations are safe. While this generally removes the need for customers to worry about whether a given operation is safe, we advise performing operations only after understanding their impact.
+
+### I want to bypass Infrastructure Service and perform operations on my cluster. How do I do that?
+Bypassing Infrastructure Service for any infrastructure updates is a risky operation and isn't recommended. Engage [Service Fabric support](service-fabric-support.md) experts before deciding to perform these steps.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft
**Operating system** | **Details** |
-Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or higher), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher)
+Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher)
CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloudinit configuration). Password-based sign in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu (of the failed over VM in the Azure portal.
Debian 8 | Includes support for all 8. *x* versions [Supported kernel versions](
Debian 9 | Includes support for 9.1 to 9.13. Debian 9.0 is not supported. [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 10 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-12-kernel-versions-for-azure-virtual-machines)
-SUSE Linux Enterprise Server 15 | 15, SP1, SP2[(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines)
+SUSE Linux Enterprise Server 15 | 15, SP1, SP2, SP3 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines)
SUSE Linux Enterprise Server 11 | SP3<br/><br/> Upgrade of replicating machines from SP3 to SP4 isn't supported. If a replicated machine has been upgraded, you need to disable replication and re-enable replication after the upgrade. SUSE Linux Enterprise Server 11 | SP4 Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5 <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/))
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server ova** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0
[Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0
[Rollup 58](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 9.45.6096.1 | 5.1.6952.0 | 9.45.6096.1 | 5.1.6952.0 | 2.0.9237.0
[Rollup 57](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 9.44.6068.1 | 5.1.6899.0 | 9.44.6068.1 | 5.1.6899.0 | 2.0.9236.0
[Rollup 56](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 9.43.6040.1 | 5.1.6853.0 | 9.43.6040.1 | 5.1.6853.0 | 2.0.9226.0
-[Rollup 55](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 9.42.5941.1 | 5.1.6692.0 | 9.42.5941.1 | 5.1.6692.0 | 2.0.9208.0
- [Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (January 2022)
+
+### Update Rollup 60
+
+[Update rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4, and Red Hat Enterprise Linux 8.5. <br/><br/>
+**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4, and Red Hat Enterprise Linux 8.5. <br/><br/>
+
## Updates (November 2021)

### Update Rollup 59
site-recovery Vmware Azure Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-enable-replication.md
This article assumes that your system meets the following criteria:
### Resolve common issues -- Each disk should be smaller than 4 TB.
+- Each disk should be smaller than 4 TB when replicating to unmanaged disks and smaller than 32 TB when replicating to managed disks.
- The operating system disk should be a basic disk, not a dynamic disk. - For generation 2 UEFI-enabled virtual machines, the operating system family should be Windows, and the boot disk should be smaller than 300 GB.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-azure-support-matrix.md
Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi
**Operating system** | **Details** | Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. It is required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.
-Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or higher), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
+Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10 [(Review supported kernel versions)](#debian-kernel-versions)
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-physical-mobility-service-overview.md
Setting | Details
| Syntax | `UnifiedAgent.exe /Role \<Agent/MasterTarget> /InstallLocation \<Install Location> /Platform "VmWare" /Silent` Setup logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentInstaller.log`
-`/Role` | Mandatory installation parameter. Specifies whether the Mobility service (Agent) or master target (MasterTarget) should be installed. Note: in prior versions, the correct switches were Mobility Service (MS) or master target (MT)
+`/Role` | Mandatory installation parameter. Specifies whether the Mobility service (Agent) or master target (MasterTarget) should be installed. Note: in prior versions, the correct switches were Mobility Service (MS) or master target (MT)
`/InstallLocation`| Optional parameter. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 01/24/2022 Last updated : 01/28/2022
This article shows how to configure encryption with customer-managed keys stored
## Configure a key vault
-You can use a new or existing key vault to store customer-managed keys. The storage account and the key vault must be in the same region, but they can be in different subscriptions.
+You can use a new or existing key vault to store customer-managed keys. The storage account and the key vault must be in the same region, but they can be in different subscriptions. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
Using customer-managed keys with Azure Storage encryption requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
principalId = $(az storage account show --name <storage-account> --resource-grou
## Configure the key vault access policy
-The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. For more information about assigning the key vault access policy, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
### [Azure portal](#tab/portal)
-When you configure customer-managed keys with the Azure portal, the key vault access policy is configured for you under the covers.
+To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
### [PowerShell](#tab/powershell)
Set-AzKeyVaultAccessPolicy `
-PermissionsToKeys wrapkey,unwrapkey,get ```
+To learn more about assigning the key vault access policy with PowerShell, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
### [Azure CLI](#tab/azure-cli)

To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the managed identity.
az keyvault set-policy \
--key-permissions get unwrapKey wrapKey ```
+To learn more about assigning the key vault access policy with Azure CLI, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+ ## Configure customer-managed keys for a new account
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/redundancy-migration.md
Previously updated : 11/30/2021 Last updated : 01/28/2022
For an overview of each of these options, see [Azure Storage redundancy](storage
## Switch between types of replication
-You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting. However, if you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must either perform a manual migration or request a live migration. And if you want to move from ZRS to GZRS or RA-GZRS, then you must perform a live migration, unless you are performing a failback operation after failover.
+You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting in some scenarios; other scenarios require a manual or live migration. If you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must either perform a manual migration or request a live migration. And if you want to move from ZRS to GZRS or RA-GZRS, then you must perform a live migration, unless you are performing a failback operation after failover.
The following table provides an overview of how to switch from each type of replication to another:
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-overview.md
Previously updated : 01/10/2022 Last updated : 01/24/2022 # Storage account overview
-An Azure storage account contains all of your Azure Storage data objects: blobs, file shares, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure, and massively scalable.
+An Azure storage account contains all of your Azure Storage data objects, including blobs, file shares, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that's accessible from anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure, and massively scalable.
-To learn how to create an Azure storage account, see [Create a storage account](storage-account-create.md).
+To learn how to create an Azure Storage account, see [Create a storage account](storage-account-create.md).
## Types of storage accounts
-Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that's best for your applications.
+Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model.
The following table describes the types of storage accounts recommended by Microsoft for most scenarios. All of these use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) deployment model. | Type of storage account | Supported storage services | Redundancy options | Usage | |--|--|--|--|
-| Standard general-purpose v2 | Blob (including Data Lake Storage<sup>1</sup>), Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS<br /><br />ZRS/GZRS/RA-GZRS<sup>2</sup> | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. Note that if you want support for NFS file shares in Azure Files, use the premium file shares account type. |
-| Premium block blobs<sup>3</sup> | Blob storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transactions rates, or scenarios that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-block-blob-premium.md) |
-| Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both SMB and NFS file shares. |
+| Standard general-purpose v2 | Blob Storage (including Data Lake Storage<sup>1</sup>), Queue Storage, Table Storage, and Azure Files | Locally redundant storage (LRS) / geo-redundant storage (GRS) / read-access geo-redundant storage (RA-GRS)<br /><br />Zone-redundant storage (ZRS) / geo-zone-redundant storage (GZRS) / read-access geo-zone-redundant storage (RA-GZRS)<sup>2</sup> | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type. |
+| Premium block blobs<sup>3</sup> | Blob Storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transactions rates, or scenarios that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-block-blob-premium.md) |
+| Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. |
| Premium page blobs<sup>3</sup> | Page blobs only | LRS | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) | <sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) and [Create a storage account to use with Data Lake Storage Gen2](../blobs/create-data-lake-storage-account.md).
-<sup>2</sup> Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for standard general-purpose v2, premium block blobs, and premium file shares accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md).
+<sup>2</sup> ZRS, GZRS, and RA-GZRS are available only for standard general-purpose v2, premium block blobs, and premium file shares accounts in certain regions. For more information, see [Azure Storage redundancy](storage-redundancy.md).
<sup>3</sup> Premium performance storage accounts use solid-state drives (SSDs) for low latency and high throughput. Legacy storage accounts are also supported. For more information, see [Legacy storage account types](#legacy-storage-account-types).
-You cannot change a storage account to a different type after it is created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
+You can't change a storage account to a different type after it's created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
## Storage account endpoints
The following table lists the format of the endpoint for each of the Azure Stora
| Storage service | Endpoint | |--|--|
-| Blob storage | `https://<storage-account>.blob.core.windows.net` |
+| Blob Storage | `https://<storage-account>.blob.core.windows.net` |
| Data Lake Storage Gen2 | `https://<storage-account>.dfs.core.windows.net` | | Azure Files | `https://<storage-account>.file.core.windows.net` |
-| Queue storage | `https://<storage-account>.queue.core.windows.net` |
-| Table storage | `https://<storage-account>.table.core.windows.net` |
+| Queue Storage | `https://<storage-account>.queue.core.windows.net` |
+| Table Storage | `https://<storage-account>.table.core.windows.net` |
Construct the URL for accessing an object in a storage account by appending the object's location in the storage account to the endpoint. For example, the URL for a blob will be similar to:
-`http://*mystorageaccount*.blob.core.windows.net/*mycontainer*/*myblob*`
+`https://*mystorageaccount*.blob.core.windows.net/*mycontainer*/*myblob*`
You can also configure your storage account to use a custom domain for blobs. For more information, see [Configure a custom domain name for your Azure Storage account](../blobs/storage-custom-domain-name.md).
The following table summarizes and points to guidance on how to move, upgrade, o
| Move a storage account to a different subscription | Azure Resource Manager provides options for moving a resource to a different subscription. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). | | Move a storage account to a different resource group | Azure Resource Manager provides options for moving a resource to a different resource group. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). | | Move a storage account to a different region | To move a storage account, create a copy of your storage account in another region. Then, move your data to that account by using AzCopy, or another tool of your choice. For more information, see [Move an Azure Storage account to another region](storage-account-move.md). |
-| Upgrade to a general-purpose v2 storage account | You can upgrade a general-purpose v1 storage account or Blob storage account to a general-purpose v2 account. Note that this action cannot be undone. For more information, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md). |
+| Upgrade to a general-purpose v2 storage account | You can upgrade a general-purpose v1 storage account or Blob Storage account to a general-purpose v2 account. Note that this action can't be undone. For more information, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md). |
| Migrate a classic storage account to Azure Resource Manager | The Azure Resource Manager deployment model is superior to the classic deployment model in terms of functionality, scalability, and security. For more information about migrating a classic storage account to Azure Resource Manager, see the "Migration of storage accounts" section of [Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts). | ## Transfer data into a storage account
Azure Storage bills based on your storage account usage. All objects in a storag
- **Region** refers to the geographical region in which your account is based. - **Account type** refers to the type of storage account you're using.-- **Access tier** refers to the data usage pattern you've specified for your general-purpose v2 or Blob storage account.
- **Access tier** refers to the data usage pattern you've specified for your general-purpose v2 or Blob Storage account.
- **Capacity** refers to how much of your storage account allotment you're using to store data. - **Redundancy** determines how many copies of your data are maintained at one time, and in what locations. - **Transactions** refer to all read and write operations to Azure Storage.-- **Data egress** refers to any data transferred out of an Azure region. When the data in your storage account is accessed by an application that isn't running in the same region, you're charged for data egress. For information about using resource groups to group your data and services in the same region to limit egress charges, see [What is an Azure resource group?](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group).
- **Data egress** refers to any data transferred out of an Azure region. When the data in your storage account is accessed by an application that isn't running in the same region, you're charged for data egress. For information about using resource groups to group your data and services in the same region to limit egress charges, see [What is an Azure resource group?](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group).
-The [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/) provides detailed pricing information based on account type, storage capacity, replication, and transactions. The [Data Transfers pricing details](https://azure.microsoft.com/pricing/details/data-transfers/) provides detailed pricing information for data egress. You can use the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?scenario=data-management) to help estimate your costs.
+The [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage) provides detailed pricing information based on account type, storage capacity, replication, and transactions. The [Data Transfers pricing details](https://azure.microsoft.com/pricing/details/data-transfers) provides detailed pricing information for data egress. You can use the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?scenario=data-management) to help estimate your costs.
[!INCLUDE [cost-management-horizontal](../../../includes/cost-management-horizontal.md)] ## Legacy storage account types
-The following table describes the legacy storage account types. These account types are not recommended by Microsoft, but may be used in certain scenarios:
+The following table describes the legacy storage account types. These account types aren't recommended by Microsoft, but may be used in certain scenarios:
| Type of legacy storage account | Supported storage services | Redundancy options | Deployment model | Usage | |--|--|--|--|--|
-| Standard general-purpose v1 | Blob, Queue, and Table storage, Azure Files | LRS/GRS/RA-GRS | Resource Manager, Classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You are selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> |
-| Standard Blob storage | Blob storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
+| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> |
+| Standard Blob Storage | Blob Storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
## Next steps
stream-analytics Stream Analytics Javascript User Defined Aggregates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-javascript-user-defined-aggregates.md
Title: JavaScript user-defined aggregates in Azure Stream Analytics description: This article describes how to perform advanced query mechanics with JavaScript user-defined aggregates in Azure Stream Analytics.--++
Azure Stream Analytics supports user-defined aggregates (UDA) written in JavaScr
## JavaScript user-defined aggregates
-A user-defined aggregate is used on top of a time window specification to aggregate over the events in that window and produce a single result value. There are two types of UDA interfaces that Stream Analytics supports today, AccumulateOnly and AccumulateDeaccumulate. Both types of UDA can be used by Tumbling, Hopping, Sliding and Session Window. AccumulateDeaccumulate UDA performs better than AccumulateOnly UDA when used together with Hopping, Sliding and Session Window. You choose one of the two types based on the algorithm you use.
+A user-defined aggregate is used on top of a time window specification to aggregate over the events in that window and produce a single result value. There are two types of UDA interfaces that Stream Analytics supports today: AccumulateOnly and AccumulateDeaccumulate. Both types of UDA can be used with Tumbling, Hopping, Sliding, and Session windows. An AccumulateDeaccumulate UDA performs better than an AccumulateOnly UDA when used together with Hopping, Sliding, and Session windows. You choose one of the two types based on the algorithm you use.
### AccumulateOnly aggregates
-AccumulateOnly aggregates can only accumulate new events to its state, the algorithm does not allow deaccumulation of values. Choose this aggregate type when deaccumulate an event information from the state value is impossible to implement. Following is the JavaScript template for AccumulatOnly aggregates:
+AccumulateOnly aggregates can only accumulate new events to their state; the algorithm doesn't allow deaccumulation of values. Choose this aggregate type when deaccumulating event information from the state value is impossible to implement. The following is the JavaScript template for AccumulateOnly aggregates:
```JavaScript // Sample UDA whose state can only be accumulated.
Each JavaScript UDA is defined by a Function object declaration. Following are t
### Function alias
-Function alias is the UDA identifier. When called in Stream Analytics query, always use UDA alias together with a "uda." prefix.
+Function alias is the UDA identifier. When called in a Stream Analytics query, always use the UDA alias together with the "uda." prefix.
### Function type
The init() method initializes state of the aggregate. This method is called when
### Method – accumulate()
-The accumulate() method calculates the UDA state based on the previous state and the current event values. This method is called when an event enters a time window (TUMBLINGWINDOW, HOPPINGWINDOW, SLIDINGWINDOW or SESSIONWINDOW).
+The accumulate() method calculates the UDA state based on the previous state and the current event values. This method is called when an event enters a time window (TUMBLINGWINDOW, HOPPINGWINDOW, SLIDINGWINDOW, or SESSIONWINDOW).
### Method – deaccumulate()
The deaccumulate() method recalculates state based on the previous state and the
### Method – deaccumulateState()
-The deaccumulateState() method recalculates state based on the previous state and the state of a hop. This method is called when a set of events leave a HOPPINGWINDOW.
+The deaccumulateState() method recalculates state based on the previous state and the state of a hop. This method is called when a set of events leaves a HOPPINGWINDOW.
### Method – computeResult()
-The computeResult() method returns aggregate result based on the current state. This method is called at the end of a time window (TUMBLINGWINDOW, HOPPINGWINDOW, SLIDINGWINDOW or SESSIONWINDOW).
+The computeResult() method returns the aggregate result based on the current state. This method is called at the end of a time window (TUMBLINGWINDOW, HOPPINGWINDOW, SLIDINGWINDOW, or SESSIONWINDOW).
## JavaScript UDA supported input and output data types For JavaScript UDA data types, refer to section **Stream Analytics and JavaScript type conversion** of [Integrate JavaScript UDFs](stream-analytics-javascript-user-defined-functions.md). ## Adding a JavaScript UDA from the Azure portal
-Below we walk through the process of creating a UDA from Portal. The example we use here is computing time weighted averages.
+Below we walk through the process of creating a UDA from the Azure portal. The example we use here computes time-weighted averages.
Now let's create a JavaScript UDA under an existing ASA job by following these steps.
-1. Log on to Azure portal and locate your existing Stream Analytics job.
-1. Then click on functions link under **JOB TOPOLOGY**.
-1. Click on the **Add** icon to add a new function.
-1. On the New Function view, select **JavaScript UDA** as the Function Type, then you see a default UDA template show up in the editor.
+1. Sign in to the Azure portal and locate your existing Stream Analytics job.
+1. Then select the functions link under **JOB TOPOLOGY**.
+1. Select **Add** to add a new function.
+1. On the New Function view, select **JavaScript UDA** as the Function Type. A default UDA template then shows up in the editor.
1. Fill in "TWA" as the UDA alias and change the function implementation as the following: ```JavaScript
Now let's create a JavaScript UDA under an existing ASA job by following steps.
} ```
-1. Once you click the "Save" button, your UDA shows up on the function list.
+1. Once you select the "Save" button, your UDA shows up on the function list.
-1. Click on the new function "TWA", you can check the function definition.
+1. Select the new function "TWA" to check the function definition.
## Calling JavaScript UDA in ASA query
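As an illustration, a call to the TWA aggregate defined above might look like the following sketch. The input name, the `Toll` field, and the window size are assumptions for the example rather than part of the original walkthrough.

```sql
-- Call the UDA with the "uda." prefix over a grouped window (illustrative names).
SELECT
    TollId,
    uda.TWA(Toll) AS TollTimeWeightedAverage
INTO output
FROM input TIMESTAMP BY EntryTime
GROUP BY TollId, TumblingWindow(minute, 3)
```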
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
Title: Common query patterns in Azure Stream Analytics description: This article describes several common query patterns and designs that are useful in Azure Stream Analytics jobs. --++
This article outlines solutions to several common query patterns based on real-w
## Supported Data Formats
-Azure Stream Analytics supports processing events in CSV, JSON and Avro data formats.
+Azure Stream Analytics supports processing events in CSV, JSON, and Avro data formats.
-Both JSON and Avro may contain complex types such as nested objects (records) or arrays. For more information on working with these complex data types, refer to the [Parsing JSON and AVRO data](stream-analytics-parsing-json.md) article.
+Both JSON and Avro may contain complex types such as nested objects (records) or arrays. For more information on working with these complex data types, see the [Parsing JSON and AVRO data](stream-analytics-parsing-json.md) article.
## Send data to multiple outputs
HAVING
The **INTO** clause tells Stream Analytics which of the outputs to write the data to. The first **SELECT** defines a pass-through query that receives data from the input and sends it to the output named **ArchiveOutput**. The second query does some simple aggregation and filtering before sending the results to a downstream alerting system output called **AlertOutput**.
-Note that the **WITH** clause can be used to define multiple sub-query blocks. This option has the benefit of opening fewer readers to the input source.
+Note that the **WITH** clause can be used to define multiple subquery blocks. This option has the benefit of opening fewer readers to the input source.
**Query**:
GROUP BY
HAVING [Count] >= 3 ```
-For more information, refer to [**WITH** clause](/stream-analytics-query/with-azure-stream-analytics).
+For more information, see [**WITH** clause](/stream-analytics-query/with-azure-stream-analytics).
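As a rough sketch (not the article's original query), a **WITH** block feeding two outputs could look like the following; the input, output, and field names are assumed for illustration.

```sql
-- One subquery block read once from the input, then written to two different outputs.
WITH AllEvents AS (
    SELECT *
    FROM input TIMESTAMP BY Time
)

SELECT * INTO ArchiveOutput FROM AllEvents

SELECT Make, COUNT(*) AS [Count]
INTO AlertOutput
FROM AllEvents
GROUP BY Make, TumblingWindow(second, 10)
HAVING COUNT(*) >= 3
```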
## Simple pass-through query
Use the **LIKE** statement to check the **License_plate** field value. It should
## Calculation over past events
-The **LAG** function can be used to look at past events within a time window and compare them against the current event. For example, the current car make can be outputted if it is different from the last car that went through the toll.
+The **LAG** function can be used to look at past events within a time window and compare them against the current event. For example, the current car make can be output if it's different from the make of the last car that went through the toll.
**Input**:
WHERE
Use **LAG** to peek into the input stream one event back, retrieving the *Make* value and comparing it to the *Make* value of the current event and output the event.
-For more information, refer to [**LAG**](/stream-analytics-query/lag-azure-stream-analytics).
+For more information, see [**LAG**](/stream-analytics-query/lag-azure-stream-analytics).
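A minimal sketch of such a comparison, assuming an input stream with `Make` and `Time` fields (names are illustrative):

```sql
-- Output the event only when the make differs from the previous event's make.
SELECT
    Make,
    Time
INTO output
FROM input TIMESTAMP BY Time
WHERE LAG(Make, 1) OVER (LIMIT DURATION(minute, 1)) <> Make
```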
## Return the last event in a window
-As events are consumed by the system in real-time, there is no function that can determine if an event will be the last one to arrive for that window of time. To achieve this, the input stream needs to be joined with another where the time of an event is the maximum time for all events at that window.
+As events are consumed by the system in real time, there's no function that can determine if an event will be the last one to arrive for that window of time. To achieve this, the input stream needs to be joined with another where the time of an event is the maximum time for all events in that window.
**Input**:
FROM
AND Input.Time = LastInWindow.LastEventTime ```
-The first step on the query finds the maximum time stamp in 10-minute windows, that is the time stamp of the last event for that window. The second step joins the results of the first query with the original stream to find the event that match the last time stamps in each window.
+The first step of the query finds the maximum time stamp in 10-minute windows; that is, the time stamp of the last event for that window. The second step joins the results of the first query with the original stream to find the event that matches the last time stamp in each window.
-**DATEDIFF** is a date-specific function that compares and returns the time difference between two DateTime fields, for more information, refer to [date functions](/stream-analytics-query/date-and-time-functions-azure-stream-analytics).
+**DATEDIFF** is a date-specific function that compares and returns the time difference between two DateTime fields. For more information, see [date functions](/stream-analytics-query/date-and-time-functions-azure-stream-analytics).
-For more information on joining streams, refer to [**JOIN**](/stream-analytics-query/join-azure-stream-analytics).
+For more information on joining streams, see [**JOIN**](/stream-analytics-query/join-azure-stream-analytics).
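A sketch of the two-step approach described above, assuming `Make`, `License_plate`, and `Time` fields (the window size and names are illustrative):

```sql
-- Step 1: find the latest time stamp per 10-minute window.
-- Step 2: join back to the stream to pick the matching event.
WITH LastInWindow AS (
    SELECT MAX(Time) AS LastEventTime
    FROM input TIMESTAMP BY Time
    GROUP BY TumblingWindow(minute, 10)
)
SELECT
    input.License_plate,
    input.Make,
    input.Time
INTO output
FROM input TIMESTAMP BY Time
INNER JOIN LastInWindow
    ON DATEDIFF(minute, input, LastInWindow) BETWEEN 0 AND 10
    AND input.Time = LastInWindow.LastEventTime
```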
## Data aggregation over time
This aggregation groups the cars by *Make* and counts them every 10 seconds. The
TumblingWindow is a windowing function used to group events together. An aggregation can be applied over all grouped events. For more information, see [windowing functions](stream-analytics-window-functions.md).
-For more information on aggregation, refer to [aggregate functions](/stream-analytics-query/aggregate-functions-azure-stream-analytics).
+For more information on aggregation, see [aggregate functions](/stream-analytics-query/aggregate-functions-azure-stream-analytics).
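For example, a count per make over 10-second tumbling windows might look like this sketch (input and field names assumed):

```sql
-- Count cars per make every 10 seconds.
SELECT
    Make,
    COUNT(*) AS CarCount
INTO output
FROM input TIMESTAMP BY Time
GROUP BY Make, TumblingWindow(second, 10)
```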
## Periodically output values
GROUP BY
This query generates events every 5 seconds and outputs the last event that was received previously. The **HOPPINGWINDOW** duration determines how far back the query looks to find the latest event.
-For more information, refer to [Hopping window](/stream-analytics-query/hopping-window-azure-stream-analytics).
+For more information, see [Hopping window](/stream-analytics-query/hopping-window-azure-stream-analytics).
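A sketch of a periodic output using a hopping window; the 300-second lookback, 5-second hop, and field names are assumptions for illustration:

```sql
-- Emit a result every 5 seconds, looking back up to 300 seconds for the latest event time.
SELECT
    System.Timestamp() AS WindowEnd,
    MAX(Time) AS LastEventTime
INTO output
FROM input TIMESTAMP BY Time
GROUP BY HoppingWindow(second, 300, 5)
```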
## Correlate events in a stream
WHERE
The **LAG** function can look into the input stream one event back and retrieve the *Make* value, comparing that with the *Make* value of the current event. Once the condition is met, data from the previous event can be projected using **LAG** in the **SELECT** statement.
-For more information, refer to [LAG](/stream-analytics-query/lag-azure-stream-analytics).
+For more information, see [LAG](/stream-analytics-query/lag-azure-stream-analytics).
## Detect the duration between events
WHERE
Event = 'end' ```
-The **LAST** function can be used to retrieve the last event within a specific condition. In this example, the condition is an event of type Start, partitioning the search by **PARTITION BY** user and feature. This way, every user and feature is treated independently when searching for the Start event. **LIMIT DURATION** limits the search back in time to 1 hour between the End and Start events.
+The **LAST** function can be used to retrieve the last event within a specific condition. In this example, the condition is an event of type Start, partitioning the search by **PARTITION BY** user and feature. This way, every user and feature is treated independently when searching for the Start event. **LIMIT DURATION** limits the search back in time to 1 hour between the End and Start events.
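A sketch of that pattern, assuming events with `user`, `feature`, `Event`, and `Time` fields (names and values are illustrative):

```sql
-- For each 'end' event, find the matching 'start' event for the same user and feature
-- within the last hour, then compute the elapsed seconds.
SELECT
    [user],
    feature,
    DATEDIFF(second,
        LAST(Time) OVER (PARTITION BY [user], feature
                         LIMIT DURATION(hour, 1)
                         WHEN Event = 'start'),
        Time) AS Duration
INTO output
FROM input TIMESTAMP BY Time
WHERE Event = 'end'
```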
## Count unique values
GROUP BY
``` **COUNT(DISTINCT Make)** returns the count of distinct values in the **Make** column within a time window.
-For more information, refer to [**COUNT** aggregate function](/stream-analytics-query/count-azure-stream-analytics).
+For more information, see [**COUNT** aggregate function](/stream-analytics-query/count-azure-stream-analytics).
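For example, counting distinct makes per 10-second window might look like this (input and field names assumed):

```sql
-- Number of distinct car makes seen in each 10-second window.
SELECT
    COUNT(DISTINCT Make) AS DistinctMakeCount,
    System.Timestamp() AS WindowEnd
INTO output
FROM input TIMESTAMP BY Time
GROUP BY TumblingWindow(second, 10)
```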
## Retrieve the first event in a window
WHERE
IsFirst(minute, 10) OVER (PARTITION BY Make) = 1 ```
-For more information, refer to [**IsFirst**](/stream-analytics-query/isfirst-azure-stream-analytics).
+For more information, see [**IsFirst**](/stream-analytics-query/isfirst-azure-stream-analytics).
## Remove duplicate events in a window
GROUP BY DeviceId,TumblingWindow(minute, 5)
**COUNT(DISTINCT Time)** returns the number of distinct values in the Time column within a time window. The output of the first step can then be used to compute the average per device, by discarding duplicates.
-For more information, refer to [COUNT(DISTINCT Time)](/stream-analytics-query/count-azure-stream-analytics).
+For more information, see [COUNT(DISTINCT Time)](/stream-analytics-query/count-azure-stream-analytics).
## Specify logic for different cases/values (CASE statements)
FROM
The **CASE** expression compares an expression to a set of simple expressions to determine its result. In this example, vehicles of *Make1* are dispatched to lane 'A' while vehicles of any other make will be assigned lane 'B'.
-For more information, refer to [case expression](/stream-analytics-query/case-azure-stream-analytics).
+For more information, see [case expression](/stream-analytics-query/case-azure-stream-analytics).
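A minimal sketch of that lane assignment (make values and names are illustrative):

```sql
-- Assign lane 'A' to Make1 vehicles and lane 'B' to everything else.
SELECT
    Make,
    CASE
        WHEN Make = 'Make1' THEN 'A'
        ELSE 'B'
    END AS DispatchToLane
INTO output
FROM input TIMESTAMP BY Time
```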
## Data conversion
-Data can be cast in real-time using the **CAST** method. For example, car weight can be converted from type **nvarchar(max)** to type **bigint** and be used on a numeric calculation.
+Data can be cast in real time using the **CAST** method. For example, car weight can be converted from type **nvarchar(max)** to type **bigint** and be used in a numeric calculation.
**Input**:
FROM input
GROUP BY TUMBLINGWINDOW(second, 5), TollId ```
-The **TIMESTAMP OVER BY** clause looks at each device timeline independently using substreams. The output event for each *TollID* is generated as they are computed, meaning that the events are in order with respect to each *TollID* instead of being reordered as if all devices were on the same clock.
+The **TIMESTAMP BY OVER** clause looks at each device timeline independently using substreams. The output events for each *TollID* are generated as they're computed, meaning that the events are in order with respect to each *TollID* instead of being reordered as if all devices were on the same clock.
-For more information, refer to [TIMESTAMP BY OVER](/stream-analytics-query/timestamp-by-azure-stream-analytics#over-clause-interacts-with-event-ordering).
+For more information, see [TIMESTAMP BY OVER](/stream-analytics-query/timestamp-by-azure-stream-analytics#over-clause-interacts-with-event-ordering).
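A sketch of a substreamed query, assuming a `TollId` partition key and a `Time` field (names are illustrative):

```sql
-- Each TollId is treated as its own substream with its own clock.
SELECT
    TollId,
    COUNT(*) AS [Count]
INTO output
FROM input
    TIMESTAMP BY Time OVER TollId
GROUP BY TUMBLINGWINDOW(second, 5), TollId
```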
## Session Windows
GROUP BY
The **SELECT** projects the data relevant to the user interaction, together with the duration of the interaction. The data is grouped by user and a **SessionWindow** that closes if no interaction happens within 1 minute, with a maximum window size of 60 minutes.
-For more information on **SessionWindow**, refer to [Session Window](/stream-analytics-query/session-window-azure-stream-analytics) .
+For more information on SessionWindow, see [Session Window](/stream-analytics-query/session-window-azure-stream-analytics).
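A sketch of such a session query, assuming `user`, `feature`, and `Time` fields (the names and window sizes follow the description above but are illustrative):

```sql
-- Sessions close after 1 minute of inactivity, capped at 60 minutes.
SELECT
    [user],
    feature,
    DATEDIFF(second, MIN(Time), MAX(Time)) AS InteractionDurationInSeconds
INTO output
FROM input TIMESTAMP BY Time
GROUP BY [user], feature, SessionWindow(minute, 1, 60)
```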
## Language extensibility with User Defined Function in JavaScript and C#
-Azure Stream Analytics query language can be extended with custom functions written either in JavaScript or C# language. User Defined Functions (UDF) are custom/complex computations that cannot be easily expressed using the **SQL** language. These UDFs can be defined once and used multiple times within a query. For example, an UDF can be used to convert a hexadecimal *nvarchar(max)* value to an *bigint* value.
+Azure Stream Analytics query language can be extended with custom functions written in either JavaScript or C#. User-Defined Functions (UDF) are custom/complex computations that can't be easily expressed using the **SQL** language. These UDFs can be defined once and used multiple times within a query. For example, a UDF can be used to convert a hexadecimal *nvarchar(max)* value to a *bigint* value.
**Input**:
From
Input ```
-The User Defined Function will compute the *bigint* value from the HexValue on every event consumed.
+The User-Defined Function will compute the *bigint* value from the HexValue on every event consumed.
-For more information, refer to [JavaScript](./stream-analytics-javascript-user-defined-functions.md) and [C#](./stream-analytics-edge-csharp-udf.md).
+For more information, see [JavaScript](./stream-analytics-javascript-user-defined-functions.md) and [C#](./stream-analytics-edge-csharp-udf.md).
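A sketch of calling such a UDF from a query; `udf.hex2Int` is a hypothetical function name for the hexadecimal-to-*bigint* conversion described above:

```sql
-- Invoke the user-defined function with the "udf." prefix on every event.
SELECT
    Time,
    udf.hex2Int(HexValue) AS DecimalValue
INTO output
FROM input TIMESTAMP BY Time
```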
## Advanced pattern matching with MATCH_RECOGNIZE
MATCH_RECOGNIZE (
) AS patternMatch ```
-This query matches at least two consecutive failure events and generate an alarm when the conditions are met.
+This query matches at least two consecutive failure events and generates an alarm when the conditions are met.
**PATTERN** defines the regular expression to be used in the matching; in this case, at least two consecutive warnings after at least one successful operation. Success and Warning are defined using the Return_Code value, and once the condition is met, the **MEASURES** are projected with *ATM_id*, the first warning operation, and the first warning time.
-For more information, refer to [MATCH_RECOGNIZE](/stream-analytics-query/match-recognize-stream-analytics).
+For more information, see [MATCH_RECOGNIZE](/stream-analytics-query/match-recognize-stream-analytics).
## Geofencing and geospatial queries Azure Stream Analytics provides built-in geospatial functions that can be used to implement scenarios such as fleet management, ride sharing, connected cars, and asset tracking. Geospatial data can be ingested in either GeoJSON or WKT formats as part of event stream or reference data.
-For example, a company that is specialized in manufacturing machines for printing passports, lease their machines to governments and consulates. The location of those machines is heavily controlled as to avoid the misplacing and possible use for counterfeiting of passports. Each machine is fitted with a GPS tracker, that information is relayed back to an Azure Stream Analytics job.
+For example, a company that specializes in manufacturing machines for printing passports leases its machines to governments and consulates. The location of those machines is heavily controlled to avoid their misplacement and possible use for counterfeiting of passports. Each machine is fitted with a GPS tracker, and that information is relayed back to an Azure Stream Analytics job.
The manufacturer would like to keep track of the location of those machines and be alerted if one of them leaves an authorized area; this way, they can remotely disable the machine, alert authorities, and retrieve the equipment. **Input**:
JOIN
The query enables the manufacturer to monitor the machines' location automatically, getting alerts when a machine leaves the allowed geofence. The built-in geospatial function allows users to use GPS data within the query without third-party libraries.
-For more information, refer to the [Geofencing and geospatial aggregation scenarios with Azure Stream Analytics](geospatial-scenarios.md) article.
+For more information, see the [Geofencing and geospatial aggregation scenarios with Azure Stream Analytics](geospatial-scenarios.md) article.
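A rough sketch of such a geofence check, assuming a telemetry stream with coordinates and a reference input holding each machine's allowed polygon (all names are illustrative):

```sql
-- Alert when a machine's reported position falls outside its allowed geofence polygon.
SELECT
    machine.MachineId,
    machine.Latitude,
    machine.Longitude
INTO Alerts
FROM MachineTelemetry machine TIMESTAMP BY UpdateTime
JOIN GeofenceReference fence
    ON machine.MachineId = fence.MachineId
WHERE ST_WITHIN(CreatePoint(machine.Latitude, machine.Longitude), fence.Polygon) = 0
```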
## Get help
stream-analytics Stream Analytics Use Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-use-reference-data.md
Azure Stream Analytics automatically scans for refreshed reference data blobs at
> > An exception to this is when the job needs to re-process data back in time or when the job is first started. At start time the job is looking for the most recent blob produced before the job start time specified. This is done to ensure that there is a **non-empty** reference data set when the job starts. If one cannot be found, the job displays the following diagnostic: `Initializing input without a valid reference data blob for UTC time <start time>`.
+When a reference data set is refreshed, a diagnostic log will be generated: `Loaded new reference data from <blob path>`. Multiple reasons may require a job to reload a previous (past) reference data set, most often to reprocess past data. That same diagnostic log will be generated then. This doesn't imply that current stream data will use past reference data.
+ [Azure Data Factory](https://azure.microsoft.com/documentation/services/data-factory/) can be used to orchestrate the task of creating the updated blobs required by Stream Analytics to update reference data definitions. Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. Data Factory supports [connecting to a large number of cloud based and on-premises data stores](../data-factory/copy-activity-overview.md) and moving data easily on a regular schedule that you specify. For more information and step by step guidance on how to set up a Data Factory pipeline to generate reference data for Stream Analytics which refreshes on a pre-defined schedule, check out this [GitHub sample](https://github.com/Azure/Azure-DataFactory/tree/master/SamplesV1/ReferenceDataRefreshForASAJobs). ### Tips on refreshing blob reference data
There are two ways to update the reference data:
[stream.analytics.introduction]: stream-analytics-real-time-fraud-detection.md [stream.analytics.get.started]: ./stream-analytics-real-time-fraud-detection.md [stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference
-[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
+[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Title: Import and Export data between serverless Apache Spark pools and SQL pools
-description: This article provides information on how to use the custom connector for moving data between dedicated SQL pools and serverless Apache Spark pools.
--
+description: This article introduces the Synapse Dedicated SQL Pool Connector API for moving data between dedicated SQL pools and serverless Apache Spark pools.
++ Previously updated : 11/19/2020 ---
-# Introduction
-
-The Azure Synapse Apache Spark to Synapse SQL connector is designed to efficiently transfer data between serverless Apache Spark pools and dedicated SQL pools in Azure Synapse. The Azure Synapse Apache Spark to Synapse SQL connector works on dedicated SQL pools only, it doesn't work with serverless SQL pool.
-
-> [!WARNING]
-> The **sqlanalytics()** function name has been changed to **synapsesql()**. The sqlanalytics function will continue to work but will be deprecated. Please change any reference from **sqlanalytics()** to **synapsesql()** to prevent any disruption in the future.
-
-## Design
-
-Transferring data between Spark pools and SQL pools can be done using JDBC. However, given two distributed systems such as Spark and SQL pools, JDBC tends to be a bottleneck with serial data transfer.
-
-The Azure Synapse Apache Spark pool to Synapse SQL connector is a data source implementation for Apache Spark. It uses the Azure Data Lake Storage Gen2 and Polybase in dedicated SQL pools to efficiently transfer data between the Spark cluster and the Synapse dedicated SQL instance.
-
-![Connector Architecture](./media/synapse-spark-sqlpool-import-export/arch1.png)
-
-## Authentication in Azure Synapse Analytics
-
-Authentication between systems is made seamless in Azure Synapse Analytics. The Token Service connects with Azure Active Directory to obtain security tokens for use when accessing the storage account or the data warehouse server.
-
-For this reason, there's no need to create credentials or specify them in the connector API as long as Azure AD-Auth is configured at the storage account and the data warehouse server. If not, SQL Auth can be specified. Find more details in the [Usage](#usage) section.
-
-## Constraints
--- This connector works only in Scala.-- For pySpark, see details in the [Use Python](#use-pyspark-with-the-connector) section.-- This Connector does not support querying SQL Views.-
-## Prerequisites
--- Must be a member of **db_exporter** role in the database/SQL pool you want to transfer data to/from.-- Must be a member of Storage Blob Data Contributor role on the default storage account.-
-To create users, connect to the SQL pool database, and follow these examples:
-
-```sql
SQL User
-CREATE USER Mary FROM LOGIN Mary;
-Azure Active Directory User
-CREATE USER [mike@contoso.com] FROM EXTERNAL PROVIDER;
-```
-
-To assign a role:
-
-```sql
SQL User
-EXEC sp_addrolemember 'db_exporter', 'Mary';
-Azure Active Directory User
-EXEC sp_addrolemember 'db_exporter',[mike@contoso.com]
-```
-
-## Usage
-
-The import statements aren't required, they're pre-imported for the notebook experience.
-
-### Transfer data to or from a dedicated SQL pool attached within the workspace
-
-> [!NOTE]
-> **Imports not needed in notebook experience**
-
-```scala
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
-```
-
-#### Read API
-
-```scala
-val df = spark.read.synapsesql("<DBName>.<Schema>.<TableName>")
-```
-
-The above API will work for both Internal (Managed) as well as External Tables in the SQL pool.
-
-#### Write API
-
-```scala
-df.write.synapsesql("<DBName>.<Schema>.<TableName>", <TableType>)
-```
-
-The write API creates the table in the dedicated SQL pool and then invokes Polybase to load the data. The table must not exist in the dedicated SQL pool or an error will be returned stating that "There is already an object named..."
-
-TableType values
--- Constants.INTERNAL - Managed table in dedicated SQL pool-- Constants.EXTERNAL - External table in dedicated SQL pool-
-SQL pool-managed table
-
-```scala
-df.write.synapsesql("<DBName>.<Schema>.<TableName>", Constants.INTERNAL)
-```
-
-SQL pool external table
-
-To write to a dedicated SQL pool external table, an EXTERNAL DATA SOURCE and an EXTERNAL FILE FORMAT must exist on the dedicated SQL pool. For more information, read [creating an external data source](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [external file formats](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) in dedicated SQL pool. Below are examples for creating an external data source and external file formats in dedicated SQL pool.
-
-```sql
For an external table, you need to pre-create the data source and file format in dedicated SQL pool using SQL queries:
-CREATE EXTERNAL DATA SOURCE <DataSourceName>
-WITH
- ( LOCATION = 'abfss://...' ,
- TYPE = HADOOP
- ) ;
-
-CREATE EXTERNAL FILE FORMAT <FileFormatName>
-WITH (
- FORMAT_TYPE = PARQUET,
- DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
-);
-```
-
-An EXTERNAL CREDENTIAL object is not necessary when using Azure Active Directory pass-through authentication to the storage account. Ensure you are a member of the "Storage Blob Data Contributor" role on the storage account.
-
-```scala
-
-df.write.
- option(Constants.DATA_SOURCE, <DataSourceName>).
- option(Constants.FILE_FORMAT, <FileFormatName>).
- synapsesql("<DBName>.<Schema>.<TableName>", Constants.EXTERNAL)
-
-```
-
-### Transfer data to or from a dedicated SQL pool or database outside the workspace
-
-> [!NOTE]
-> Imports not needed in notebook experience
-
-```scala
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
-```
-
-#### Read API
-
-```scala
-val df = spark.read.
-option(Constants.SERVER, "samplews.database.windows.net").
-synapsesql("<DBName>.<Schema>.<TableName>")
-```
-
-#### Write API
-
-```scala
-df.write.
-option(Constants.SERVER, "samplews.database.windows.net").
-synapsesql("<DBName>.<Schema>.<TableName>", <TableType>)
-```
Last updated : 01/27/2022++
+
+# Azure Synapse Dedicated SQL Pool connector for Apache Spark
-### Use SQL Auth instead of Azure AD
+The Synapse Dedicated SQL Pool Connector is an API that efficiently moves data between [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. This connector is available in `Scala`.
-#### Read API
+It uses Azure Storage and [PolyBase](/sql/relational-databases/polybase/polybase-guide) to transfer data in parallel and at scale.
-Currently the connector doesn't support token-based auth to a dedicated SQL pool that is outside of the workspace. You'll need to use SQL Auth.
+## Authentication
-```scala
-val df = spark.read.
-option(Constants.SERVER, "samplews.database.windows.net").
-option(Constants.USER, <SQLServer Login UserName>).
-option(Constants.PASSWORD, <SQLServer Login Password>).
-synapsesql("<DBName>.<Schema>.<TableName>")
-```
+Authentication works automatically with the signed-in Azure Active Directory user after the following prerequisites are met.
-#### Write API
+* Add the user to [db_exporter role](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) using system-stored procedure [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
+* Add the user to [Storage Blob Data Contributor role](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) on the storage account.
-```scala
-df.write.
-option(Constants.SERVER, "samplews.database.windows.net").
-option(Constants.USER, <SQLServer Login UserName>).
-option(Constants.PASSWORD, <SQLServer Login Password>).
-synapsesql("<DBName>.<Schema>.<TableName>", <TableType>)
-```
+The connector also supports password-based [SQL authentication](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) after the following prerequisites are met.
+ * Add the user to [db_exporter role](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) using system-stored procedure [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
+ * Create an [external data source](/sql/t-sql/statements/create-external-data-source-transact-sql), whose [database scoped credential](/sql/t-sql/statements/create-database-scoped-credential-transact-sql) secret is the access key to an Azure Storage Account. The API requires the name of this external data source.
-### Use PySpark with the connector
+## API reference
-> [!NOTE]
-> This example is given with only the notebook experience kept in mind.
+See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.0/scaladocs/com/microsoft/spark/sqlanalytics/index.html).
-Assume you have a dataframe "pyspark_df" that you want to write into the DW.
+## Example usage
-Create a temp table using the dataframe in PySpark:
+* Create and show a `DataFrame` representing a database table in the dedicated SQL pool.
-```py
-pyspark_df.createOrReplaceTempView("pysparkdftemptable")
-```
+ ```scala
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
-Run a Scala cell in the PySpark notebook using magics:
+ val df = spark.read.
+ option(Constants.SERVER, "servername.database.windows.net").
+ synapsesql("databaseName.schemaName.tablename")
-```scala
-%%spark
-val scala_df = spark.sqlContext.sql ("select * from pysparkdftemptable")
+ df.show
+ ```
-scala_df.write.synapsesql("sqlpool.dbo.PySparkTable", Constants.INTERNAL)
-```
+* Save the content of a `DataFrame` to a database table in the dedicated SQL pool. The table type can be internal (i.e. managed) or external.
-Similarly, in the read scenario, read the data using Scala and write it into a temp table, and use Spark SQL in PySpark to query the temp table into a dataframe.
+ ```scala
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
-## Allow other users to use the Azure Synapse Apache Spark to Synapse SQL connector in your workspace
+ val df = spark.sql("select * from tmpview")
-You need to be Storage Blob Data Owner on the ADLS Gen2 storage account connected to the workspace to alter missing permissions for others. Ensure the user has access to the workspace and permissions to run notebooks.
+ df.write.
+ option(Constants.SERVER, "servername.database.windows.net").
+ synapsesql("databaseName.schemaName.tablename", Constants.INTERNAL)
+ ```
-### Option 1
+* Use the connector API with SQL authentication with option keys `Constants.USER` and `Constants.PASSWORD`. It also requires option key `Constants.DATA_SOURCE`, specifying an external data source.
-- Make the user a Storage Blob Data Contributor/Owner
+ ```scala
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
-### Option 2
+ val df = spark.read.
+ option(Constants.SERVER, "servername.database.windows.net").
+ option(Constants.USER, "username").
+ option(Constants.PASSWORD, "password").
+ option(Constants.DATA_SOURCE, "datasource").
+ synapsesql("databaseName.schemaName.tablename")
-- Specify the following ACLs on the folder structure:
+ df.show
+ ```
-| Folder | / | synapse | workspaces | \<workspacename> | sparkpools | \<sparkpoolname> | sparkpoolinstances |
-|--|--|--|--|--|--|--|--|
-| Access Permissions | --X | --X | --X | --X | --X | --X | -WX |
-| Default Permissions | | | | | | | |
+* We can use the `Scala` connector API to interact with content from a `DataFrame` in `PySpark` by using [DataFrame.createOrReplaceTempView](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.createOrReplaceTempView.html#pyspark.sql.DataFrame.createOrReplaceTempView) or [DataFrame.createOrReplaceGlobalTempView](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.createOrReplaceGlobalTempView.html#pyspark.sql.DataFrame.createOrReplaceGlobalTempView).
-- You should be able to ACL all folders from "synapse" and downward from Azure portal. To ACL the root "/" folder, please follow the instructions below.
+ ```py
+ %%pyspark
+ df.createOrReplaceTempView("tempview")
+ ```
-- Connect to the storage account connected with the workspace from Storage Explorer using Azure AD-- Select your Account and give the ADLS Gen2 URL and default file system for the workspace-- Once you can see the storage account listed, right-click on the listing workspace and select "Manage Access"-- Add the User to the / folder with "Execute" Access Permission. Select "Ok"
+ ```scala
+ %%spark
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import com.microsoft.spark.sqlanalytics.SqlAnalyticsConnector._
-> [!IMPORTANT]
-> Make sure you don't select "Default" if you don't intend to.
+ val df = spark.sqlContext.sql("select * from tempview")
+ df.write.
+ option(Constants.SERVER, "servername.database.windows.net").
+ synapsesql("databaseName.schemaName.tablename")
+ ```
## Next steps - [Create a dedicated SQL pool using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)-- [Create a new Apache Spark pool using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
+- [Create a new Apache Spark pool using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
+- [Create, develop, and maintain Synapse notebooks in Azure Synapse Analytics](../../synapse-analytics/spark/apache-spark-development-using-notebooks.md)
virtual-desktop Azure Advisor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-advisor.md
- Title: Integrate Azure Virtual Desktop with Azure Advisor - Azure
-description: How to use Azure Advisor with your Azure Virtual Desktop deployment.
-- Previously updated : 03/31/2021---
-# Use Azure Advisor with Azure Virtual Desktop
-
-Azure Advisor can help users resolve common issues on their own without having to file support cases. The recommendations reduce the need to submit help requests, saving you time and costs.
-
-This article will tell you how to set up Azure Advisor in your Azure Virtual Desktop deployment to help your users.
-
-## What is Azure Advisor?
-
-Azure Advisor analyzes your configurations and telemetry to offer personalized recommendations to solve common problems. With these recommendations, you can optimize your Azure resources for reliability, security, operational excellence, performance, and cost. Learn more at [the Azure Advisor website](https://azure.microsoft.com/services/advisor/).
-
-## How to start using Azure Advisor
-
-All you need to get started is an Azure account on the Azure portal. First, open the Azure portal at <https://portal.azure.com/#home>, then select **Advisor** under **Azure Services**, as shown in the following image. You can also enter "Azure Advisor" into the search bar in the Azure portal.
-
-> [!div class="mx-imgBorder"]
-> ![A screenshot of the Azure portal. The user is hovering their mouse cursor over the Azure Advisor link, causing a drop-down menu to appear.](media/azure-advisor.png)
-
-When you open Azure Advisor, you'll see five categories:
--- Cost-- Security-- Reliability-- Operational Excellence-- Performance-
-> [!div class="mx-imgBorder"]
-> ![A screenshot of Azure Advisor showing the five menus for each category. The five items displayed in their own boxes are Cost, Security, Reliability, Operational Excellence, and Performance.](media/advisor-categories.png)
-
-When you select a category, you'll go to its active recommendations page. On this page, you can view which recommendations Azure Advisor has for you, as shown in the following image.
-
-> [!div class="mx-imgBorder"]
-> ![A screenshot of the active recommendations list for Operational Excellence. The list shows seven recommendations with varying priority levels.](media/active-suggestions.png)
-
-## Additional tips for Azure Advisor
--- Make sure to check your recommendations frequently, at least more than once a week. Azure Advisor updates its active recommendations multiple times per day. Checking for new recommendations can prevent larger issues by helping you spot and solve smaller ones.--- Always try to solve the issues with the highest priority level in Azure Advisor. High priority issues are marked with red. Leaving high-priority recommendations unresolved can lead to problems down the line.--- If a recommendation seems less important, you can dismiss it or postpone it. To dismiss or postpone a recommendation, go to the **Action** column and change the item's state.--- Don't dismiss recommendations until you know why they're appearing and are sure it won't have a negative impact on you or your users. Always select **Learn more** to see what the issue is. If you resolve an issue by following the instructions in Azure Advisor, it will automatically disappear from the list. You're better off resolving issues than postponing them repeatedly.--- Whenever you come across an issue in Azure Virtual Desktop, always check Azure Advisor first. Azure Advisor will give you directions for how to solve the problem, or at least point you towards a resource that can help.-
-## Next steps
-
-To learn how to resolve recommendations, see [How to resolve Azure Advisor recommendations](azure-advisor-recommendations.md).
virtual-desktop Azure Monitor Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/azure-monitor-glossary.md
To learn more about Windows Event Logs, see [Windows Event records properties](.
- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-azure-monitor.md) for help and known issues.
-You can also set up Azure Advisor to help you figure out how to resolve or prevent common issues. Learn more at [Use Azure Advisor with Azure Virtual Desktop](azure-advisor.md).
+You can also set up Azure Advisor to help you figure out how to resolve or prevent common issues. Learn more at [Introduction to Azure Advisor](../advisor/advisor-overview.md).
If you need help or have any questions, check out our community resources:
virtual-desktop Drain Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/drain-mode.md
Update-AzWvdSessionHost -ResourceGroupName <resourceGroupName> -HostPoolName <ho
## Next steps
-If you want to learn more about the Azure portal for Azure Virtual Desktop, check out [our tutorials](create-host-pools-azure-marketplace.md). If you're already familiar with the basics, check out some of the other features you can use with the Azure portal, such as [MSIX app attach](app-attach-azure-portal.md) and [Azure Advisor](azure-advisor.md).
+If you want to learn more about the Azure portal for Azure Virtual Desktop, check out [our tutorials](create-host-pools-azure-marketplace.md). If you're already familiar with the basics, check out some of the other features you can use with the Azure portal, such as [MSIX app attach](app-attach-azure-portal.md) and [Azure Advisor](../advisor/advisor-overview.md).
If you're using the PowerShell method and want to see what else the module can do, check out [Set up the PowerShell module for Azure Virtual Desktop](powershell-module.md) and our [PowerShell reference](/powershell/module/az.desktopvirtualization/).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Here's what changed in August 2020:
- We fixed an issue in the Teams Desktop client (version 1.3.00.21759) where the client only showed the UTC time zone in the chat, channels, and calendar. The updated client now shows the remote session's time zone instead. -- Azure Advisor is now a part of Azure Virtual Desktop. When you access Azure Virtual Desktop through the Azure portal, you can see recommendations for optimizing your Azure Virtual Desktop environment. Learn more at [Azure Advisor](azure-advisor.md).
+- Azure Advisor is now a part of Azure Virtual Desktop. When you access Azure Virtual Desktop through the Azure portal, you can see recommendations for optimizing your Azure Virtual Desktop environment. Learn more at [Introduction to Azure Advisor](../advisor/advisor-overview.md).
- Azure CLI now supports Azure Virtual Desktop (`az desktopvirtualization`) to help you automate your Azure Virtual Desktop deployments. Check out [desktopvirtualization](/cli/azure/desktopvirtualization) for a list of extension commands.
virtual-machines Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/guest-configuration.md
# Overview of the guest configuration extension
-The Guest Configuration extension is a component Azure Policy that performs audit and configuration operations inside virtual machines.
+The Guest Configuration extension is a component of Azure Policy that performs audit and configuration operations inside virtual machines.
Policies such as security baseline definitions for [Linux](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) and [Windows](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc)
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
A: Yes, Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES is
*Q: Can I use Azure Hybrid Benefit on reserved instances for RHEL and SLES?*
-A: Yes, Azure Hybrid Benefit on reserved instance for RHEL and SLES is available to all users. You can [learn more about this benefit and how to use it here](#azure-hybrid-benefit-on-reserved-instances).
+A: AHB can be used with reserved instances for Pay-as-you-Go RHEL and SLES. It cannot be used with pre-paid annualized RHEL or SLES subscriptions purchased through Azure.
*Q: Can I use Azure Hybrid Benefit on a virtual machine deployed for SQL Server on RHEL images?*
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
You can manage your service principals with Azure CLI using the [az ad sp](/cli/
1. Create a new service principal. ```azurecli-interactive
- az ad sp create-for-rbac --name "ServicePrincipalName" --password "My-AAD-client-secret"
+ az ad sp create-for-rbac --name "ServicePrincipalName" --password "My-AAD-client-secret" --role Contributor
``` 3. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately.
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
You can manage your service principals with Azure CLI using the [az ad sp](/cli/
1. Create a new service principal. ```azurecli-interactive
- az ad sp create-for-rbac --name "ServicePrincipalName" --password "My-AAD-client-secret"
+ az ad sp create-for-rbac --name "ServicePrincipalName" --password "My-AAD-client-secret" --role Contributor
``` 3. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately.
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/overview.md
Azure offers a wide offering of RHEL images on Azure. These images are made avai
Azure offers a variety of RHEL pay-as-you-go images. These images come properly entitled for RHEL and are attached to a source of updates (Red Hat Update Infrastructure). These images charge a premium fee for the RHEL entitlement and updates. RHEL pay-as-you-go image variants include:
-* Standard RHEL.
-* RHEL for SAP.
-* RHEL for SAP with High Availability and Update Services.
+* RHEL
+* RHEL for SAP
+* RHEL for SAP with High Availability (HA) and Update Services
You might want to use the pay-as-you-go images if you don't want to worry about paying separately for the appropriate number of subscriptions.
virtual-machines Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md
vm-windows Previously updated : 02/26/2020 Last updated : 01/28/2022
We recommend that you use managed disks because they simplify the deployment and
## Utilizing Azure infrastructure high availability to achieve *higher availability* of SAP applications
-If you decide not to use functionalities such as WSFC or Pacemaker on Linux (currently supported only for SUSE Linux Enterprise Server [SLES] 12 and later), Azure VM restart is utilized. It protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
+If you decide not to use functionalities such as WSFC or Pacemaker on Linux (supported for SUSE Linux Enterprise Server [SLES] 12 and later, and Red Hat Enterprise Linux [RHEL] 7 and later), Azure VM restart is utilized. It protects SAP systems against planned and unplanned downtime of the Azure physical server infrastructure and overall underlying Azure platform.
For more information about this approach, see [Utilize Azure infrastructure VM restart to achieve higher availability of the SAP system][sap-higher-availability].
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-address-prefix.md
Resource|Scenario|Steps|
|||| |Virtual machine scale sets | You can use a public IP address prefix to generate instance-level IPs in a virtual machine scale set, though individual public IP resources won't be created. | Use a [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-with-public-ip-prefix) with instructions to use this prefix for public IP configuration as part of the scale set creation. (Note that the zonal properties of the prefix will be passed to the instance IPs, though they will not show in the output; see [Networking for Virtual Machine Scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine) for more information.) | | Standard load balancers | A public IP address prefix can be used to scale a load balancer by [using all IPs in the range for outbound connections](../../load-balancer/outbound-rules.md#scale). | To associate a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the load balancer, select the IP prefix as associated with the frontend of your load balancer. |
-| NAT Gateway | A public IP prefix can be used to scale a NAT gateway by using the public IPs in the prefix for outbound connections. | To associate a prefix to your NAT Gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the NAT Gateway, select the IP prefix as the Outbound IP. |
+| NAT Gateway | A public IP prefix can be used to scale a NAT gateway by using the public IPs in the prefix for outbound connections. | To associate a prefix to your NAT Gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. When creating the NAT Gateway, select the IP prefix as the Outbound IP. (Note that a NAT Gateway can have no more than 16 IPs in total, so a public IP prefix of /28 length is the maximum size that can be used.) |
| VPN Gateway (AZ SKU) or Application Gateway v2 | You can use a public IP from a prefix for your zone-redundant VPN or Application gateway v2. | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md) or [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), be sure to select the IP you previously gave from the prefix.| ## Limitations
virtual-network Public Ip Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-upgrade-cli.md
# Upgrade a public IP address using the Azure CLI
-Azure public IP addresses are created with a SKU, either basic or standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
+Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
-In this article, you'll learn how to upgrade a static basic SKU public IP address to standard SKU using the Azure CLI.
+In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU using the Azure CLI.
## Prerequisites
In this article, you'll learn how to upgrade a static basic SKU public IP addres
## Upgrade public IP address
-In this section, you'll use the Azure CLI and upgrade your static basic SKU public IP to the standard SKU.
+In this section, you'll use the Azure CLI and upgrade your static Basic SKU public IP to the Standard SKU.
+
+In order to upgrade a public IP, it must not be associated with any resource (see [this page](https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+
+>[!IMPORTANT]
+>Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview?toc=/azure/virtual-network/toc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
```azurecli-interactive az network public-ip update \
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP - Azure portal](./create-public-ip-portal.md)
virtual-network Public Ip Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-upgrade-portal.md
# Upgrade a public IP address using the Azure portal
-Azure public IP addresses are created with a SKU, either basic or standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
+Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
-In this article, you'll learn how to upgrade a static basic SKU public IP address to standard SKU in the Azure portal.
+In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU in the Azure portal.
## Prerequisites
In this article, you'll learn how to upgrade a static basic SKU public IP addres
## Upgrade public IP address
-In this section, you'll sign in to the Azure portal and upgrade your static basic SKU public IP to the standard sku.
+In this section, you'll sign in to the Azure portal and upgrade your static Basic SKU public IP to the Standard SKU.
+
+In order to upgrade a public IP, it must not be associated with any resource (see [this page](https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+
+>[!IMPORTANT]
+>Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview?toc=/azure/virtual-network/toc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this article, you upgrade a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP - Azure portal](./create-public-ip-portal.md)
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
# Upgrade a public IP address using Azure PowerShell
-Azure public IP addresses are created with a SKU, either basic or standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
+Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with.
-In this article, you'll learn how to upgrade a static basic SKU public IP address to standard SKU using Azure PowerShell.
+In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU using Azure PowerShell.
## Prerequisites
If you choose to install and use PowerShell locally, this article requires the A
## Upgrade public IP address
-In this section, you'll use the Azure CLI to upgrade your static basic SKU public IP to the standard SKU.
+In this section, you'll use Azure PowerShell to upgrade your static Basic SKU public IP to the Standard SKU.
+
+In order to upgrade a public IP, it must not be associated with any resource (see [this page](https://docs.microsoft.com/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+
+>[!IMPORTANT]
+>Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](https://docs.microsoft.com/azure/availability-zones/az-overview?toc=/azure/virtual-network/toc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
```azurepowershell-interactive ### Place the public IP address into a variable. ###
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP - Azure portal](./create-public-ip-portal.md)