Updates from: 01/24/2023 02:12:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md
Last updated 10/26/2022 -+
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
# Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access
-In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker). DAB enables single sign-on (SSO) and granular access control, helping Azure AD B2C protect on-premises legacy applications. With this solution, enterprises can transition from legacy to Azure AD B2C without rewriting applications.
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with [Datawiza Access Proxy (DAP)](https://www.datawiza.com/), which enables single sign-on (SSO) and granular access control, helping Azure AD B2C protect on-premises legacy applications. With this solution, enterprises can transition from legacy to Azure AD B2C without rewriting applications.
## Prerequisites
To get started, you'll need:
- Your applications can run on platforms such as virtual machine and bare metal
- An on-premises application to transition from a legacy identity system to Azure AD B2C
  - In this tutorial, DAB is deployed on the same server as the application
- - The application runs on localhost: 3001 and DAB proxies traffic to applications via localhost: 9772
+ - The application runs on localhost: 3001 and DAP proxies traffic to applications via localhost: 9772
- The application traffic reaches DAB first and then is proxied to the application
## Scenario description
Datawiza integration includes the following components:
- **Azure AD B2C**: The authorization server to verify user credentials
  - Authenticated users access on-premises applications using a local account stored in the Azure AD B2C directory
-- **Datawiza Access Broker (DAB)**: The service that passes identity to applications through HTTP headers
+- **Datawiza Access Proxy (DAP)**: The service that passes identity to applications through HTTP headers
- **Datawiza Cloud Management Console (DCMC)**: A management console for DAB. DCMC UI and RESTful APIs help manage DAB configurations and access control policies.
The following architecture diagram shows the implementation.
![Diagram of the architecture of an Azure AD B2C integration with Datawiza for secure access to hybrid applications.](./media/partner-datawiza/datawiza-architecture-diagram.png)
1. The user requests access to an on-premises application. DAB proxies the request to the application.
-2. DAB checks user authentication state. With no session token, or an invalid token, the user goes to Azure AD B2C for authentication.
-3. Azure AD B2C sends the user request to the endpoint specified during DAB registration in the Azure AD B2C tenant.
-4. The DAB evaluates access policies and calculates attribute values in HTTP headers forwarded to the application. The DAB might call to the identity provider (IdP) to retrieve information to set the header values. The DAB sets the header values and sends the request to the application.
+2. DAP checks user authentication state. With no session token, or an invalid token, the user goes to Azure AD B2C for authentication.
+3. Azure AD B2C sends the user request to the endpoint specified during DAP registration in the Azure AD B2C tenant.
+4. The DAP evaluates access policies and calculates attribute values in HTTP headers forwarded to the application. The DAP might call to the identity provider (IdP) to retrieve information to set the header values. The DAP sets the header values and sends the request to the application.
5. The user is authenticated with access to the application.
## Onboard with Datawiza
Go to docs.datawiza.com to:
## Run DAB with a header-based application
-You can use Docker or Kubernetes to run DAB. Use the Docker image for users to create a sample header-based application.
+You can use Docker or Kubernetes to run DAP. Use the Docker image to create a sample header-based application.
-Learn more: To configure DAB and SSO integration, see [Deploy Datawiza Access Proxy With Your App](https://docs.datawiza.com/step-by-step/step3.html)
+Learn more: To configure DAP and SSO integration, see [Deploy Datawiza Access Proxy With Your App](https://docs.datawiza.com/step-by-step/step3.html)
-A sample docker image `docker-compose.yml file` is provided. Sign in to the container registry to download DAB images and the header-based application.
+A sample `docker-compose.yml` file is provided. Sign in to the container registry to download DAP images and the header-based application.
1. [Deploy Datawiza Access Proxy With Your App](https://docs.datawiza.com/step-by-step/step3.html#important-step).
A sample docker image `docker-compose.yml file` is provided. Sign in to the cont
DAB gets user attributes from IdP and passes them to the application with header or cookie. After you configure user attributes, the green check sign appears for user attributes.
- ![Screenshot of passed user attributes.](./media/partner-datawiza/pass-user-attributes.png)
+ ![Screenshot of passed user attributes.](./media/partner-datawiza/pass-user-attributes-new.png)
Learn more: [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html) such as email address, firstname, and lastname to the header-based application.
## Test the flow
1. Navigate to the on-premises application URL.
-2. The DAB redirects to the page you configured in your user flow.
+2. The DAP redirects to the page you configured in your user flow.
3. From the list, select the IdP.
4. At the prompt, enter your credentials. If necessary, include an Azure AD Multi-Factor Authentication (MFA) token.
-5. You're redirected to Azure AD B2C, which forwards the application request to the DAB redirect URI.
+5. You're redirected to Azure AD B2C, which forwards the application request to the DAP redirect URI.
6. The DAB evaluates policies, calculates headers, and sends the user to the upstream application.
7. The requested application appears.
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
Previously updated : 12/9/2022 Last updated : 01/20/2023
Many e-commerce sites and web applications exposed to the internet are deployed
Generally, configurations include an authentication translation layer that externalizes the authentication from the web application. Reverse proxies provide the authenticated user context to the web applications, such as a header value in clear or digest form. The applications aren't using industry standard tokens such as Security Assertion Markup Language (SAML), OAuth, or Open ID Connect (OIDC). Instead, the proxy provides authentication context and maintains the session with the end-user agent such as browser or native application. As a service running as a man-in-the-middle, proxies provide significant session control. The proxy service is efficient and scalable, not a bottleneck for applications behind the proxy service. The following diagram shows a reverse-proxy implementation and communications flow.
- ![Reverse proxy implementation](./media/partner-ping/reverse-proxy.png)
+ ![Diagram of the reverse proxy implementation.](./media/partner-ping/reverse-proxy.png)
## Modernization
Proxies support the modern authentication protocols and use the redirect-based (
In Azure AD B2C, you define policies that drive user experiences and behaviors, also called user journeys. Each such policy exposes a protocol endpoint that can perform the authentication as an IdP. On the application side, there's no special handling required for certain policies. An application makes a standard authentication request to the protocol-specific authentication endpoint exposed by a policy. You can configure Azure AD B2C to share the same issuer across policies or unique issuer for each policy. Each application can point to policies by making a protocol-native authentication request, which drives user behaviors such as sign-in, sign-up, and profile edits. The diagram shows OIDC and SAML application workflows.
- ![O I D C and S A M L implementation](./media/partner-ping/azure-ad-identity-provider.png)
+ ![Diagram of the OIDC and SAML application workflows.](./media/partner-ping/azure-ad-identity-provider.png)
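For reference, each policy exposes its own metadata document that an application or proxy can read to discover the protocol endpoints. The PowerShell sketch below is illustrative only; the tenant and policy names are placeholder values, not values taken from this article.

```powershell
# Hedged example: each Azure AD B2C policy (user flow) publishes its own OIDC metadata document.
# "contoso" and "B2C_1_signupsignin" are placeholders - substitute your tenant and policy names.
$tenant = "contoso"
$policy = "B2C_1_signupsignin"
Invoke-RestMethod -Uri "https://$tenant.b2clogin.com/$tenant.onmicrosoft.com/$policy/v2.0/.well-known/openid-configuration"
```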
The scenario can be challenging for the legacy applications to redirect the user accurately. The access request to the applications might not include the user experience context. In most cases, the proxy layer, or an integrated agent on the web application, intercepts the access request.
You can deploy PingAccess as the reverse proxy. PingAccess intercepts a direct r
Configure PingAccess with OIDC, OAuth2, or SAML for authentication with an upstream authentication provider. You can configure an upstream IdP for this purpose on the PingAccess server. See the following diagram.
- ![PingAccess with O I D C implementation](./media/partner-ping/authorization-flow.png)
+ ![Diagram of an upstream IDP on a PingAccess server.](./media/partner-ping/authorization-flow.png)
In a typical Azure AD B2C deployment with policies exposing IdPs, there's a challenge. PingAccess is configured with one upstream IdP.
### PingFederate federation proxy
-You can configure PingFederate as an authentication provider, or a proxy. for upstream IdPs. See the following diagram.
+You can configure PingFederate as an authentication provider, or a proxy, for upstream IdPs. See the following diagram.
- ![PingFederate implementation](./media/partner-ping/pingfederate.png)
+ ![Diagram of PingFederate configured an authentication provider, or a proxy, for upstream IDPs.](./media/partner-ping/pingfederate.png)
Use this function to contextually, dynamically, or declaratively switch an inbound request to an Azure AD B2C policy. See the following diagram of protocol sequence flow.
- ![image shows the PingAccess and PingFederate workflow](./media/partner-ping/pingaccess-pingfederate-workflow.png)
+ ![Diagram of the protocol sequence flow for PingAccess, PingFederate, Azure AD B2C, and the applicaiton.](./media/partner-ping/pingaccess-pingfederate-workflow.png)
## Prerequisites
To get started, you'll need:
- An Azure subscription - If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)-- An [Azure AD B2C tenant](/tutorial-create-tenant.md) linked to your Azure subscription
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription
- PingAccess and PingFederate deployed in Docker containers or on Azure virtual machines (VMs)
## Connectivity and communication
Confirm the following connectivity and communication.
You can use basic user flows or advanced Identity Enterprise Framework (IEF) policies. PingAccess generates the metadata endpoint, based on the issuer value, by using the [WebFinger](https://tools.ietf.org/html/rfc7033) protocol for discovery convention. To follow this convention, update the Azure AD B2C issuer using user-flow policy properties.
- ![image shows the token settings](./media/partner-ping/token-setting.png)
+ ![Screenshot of the subject sub claim URL on the Token compatibility dialog.](./media/partner-ping/token-setting.png)
In the advanced policies, configuration includes the IssuanceClaimPattern metadata element to AuthorityWithTfp value in the [JWT token issuer technical profile](./jwt-issuer-technical-profile.md).
In the advanced policies, configuration includes the IssuanceClaimPattern metada
Use the instructions in the following sections to configure PingAccess and PingFederate. See the following diagram of the overall integration user flow.
- ![PingAccess and PingFederate integration](./media/partner-ping/pingaccess.png)
+ ![Diagram of the PingAccess and PingFederate integration user flow.](./media/partner-ping/pingaccess.png)
### Configure PingFederate as the token provider
Use the following instructions to create a PingAccess application for the target
#### Create a virtual host
>[!IMPORTANT]
->Create a virtual host for every application. For more information, see [What can I configure with PingAccess?]([https://docs.pingidentity.com/bundle/pingaccess-43/page/reference/pa_c_KeyConsiderations.html](https://docs.pingidentity.com/bundle/pingaccess-71/page/kkj1564006722708.html).
+>Create a virtual host for every application. For more information, see [What can I configure with PingAccess?](https://docs.pingidentity.com/bundle/pingaccess-43/page/reference/pa_c_KeyConsiderations.html).
To create a virtual host:
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
Title: "Quickstart: Set up sign in for a desktop app using Azure Active Director
description: In this Quickstart, run a sample WPF desktop application that uses Azure Active Directory B2C to provide account sign in. -+ Last updated 01/13/2022-+
active-directory-b2c Quickstart Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-single-page-app.md
Title: "Quickstart: Set up sign in for a single-page app (SPA)"
description: In this Quickstart, run a sample single-page application that uses Azure Active Directory B2C to provide account sign-in. -+ Last updated 01/13/2022-+
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
Title: Enable accidental deletions prevention in Application Provisioning in Azure Active Directory
-description: Enable accidental deletions prevention in Application Provisioning in Azure Active Directory.
+ Title: Enable accidental deletions prevention in the Azure AD provisioning service
+description: Enable accidental deletions prevention in the Azure Active Directory (Azure AD) provisioning service for applications and cross-tenant synchronization.
Previously updated : 10/06/2022 Last updated : 01/23/2023
+zone_pivot_groups: app-provisioning-cross-tenant-synchronization
# Enable accidental deletions prevention in the Azure AD provisioning service
The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users aren't disabled or deleted in an application unexpectedly.+
+> [!IMPORTANT]
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users aren't disabled or deleted in the target tenant unexpectedly.
The feature lets you specify a deletion threshold, above which an admin needs to explicitly choose to allow the deletions to be processed.
## Configure accidental deletion prevention+
To enable accidental deletion prevention:+
1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Enterprise applications** and then select your app.
+
+2. Select **Enterprise applications** and then select your application.
+ 3. Select **Provisioning** and then on the provisioning page select **Edit provisioning**.
-4. Under **Settings**, select the **Prevent accidental deletions** checkbox and specify a deletion
-threshold. Also, be sure the notification email address is completed. If the deletion threshold is met an email will be sent.
-5. Select **Save**, to save the changes.
+
+2. Select **Cross-tenant synchronization (Preview)** > **Configurations** and then select your configuration.
+
+3. Select **Provisioning**.
+
+4. Under **Settings**, select the **Prevent accidental deletions** check box and specify a deletion
+threshold.
+
+5. Ensure the **Notification Email** address is completed.
+
+ If the deletion threshold is met, an email will be sent.
+
+6. Select **Save** to save the changes.
When the deletion threshold is met, the job will go into quarantine and a notification email will be sent. The quarantined job can then be allowed or rejected. To learn more about quarantine behavior, see [Application provisioning in quarantine status](application-provisioning-quarantine-status.md).
## Recovering from an accidental deletion
-If you encounter an accidental deletion you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information**.
+If you encounter an accidental deletion, you'll see it on the provisioning status page. It will say **Provisioning has been quarantined. See quarantine details for more information**.
You can click either **Allow deletes** or **View provisioning logs**.
The **Allow deletes** action will delete the objects that triggered the accident
If you don't want to allow the deletions, you need to do the following:
- Investigate the source of the deletions. You can use the provisioning logs for details.
-- Prevent the deletion by assigning the user / group to the app again, restoring the user / group, or updating your provisioning configuration.
-- Once you've made the necessary changes to prevent the user / group from being deleted, restart provisioning. Please don't restart provisioning until you've made the necessary changes to prevent the users / groups from being deleted.
+- Prevent the deletion by assigning the user / group to the application (or configuration) again, restoring the user / group, or updating your provisioning configuration.
+- Once you've made the necessary changes to prevent the user / group from being deleted, restart provisioning. Don't restart provisioning until you've made the necessary changes to prevent the users / groups from being deleted.
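If you'd rather restart provisioning programmatically once the root cause is fixed, a hedged Microsoft Graph PowerShell sketch follows. The service principal ID, job ID, and the Synchronization.ReadWrite.All scope are assumptions; adjust the resetScope value to your situation.

```powershell
# Hedged sketch: restart a provisioning job (synchronizationJob: restart) after fixing the root cause.
# The IDs below are placeholders.
Connect-MgGraph -Scopes "Synchronization.ReadWrite.All"
$spId  = "<service-principal-object-id>"
$jobId = "<synchronization-job-id>"
$body  = @{ criteria = @{ resetScope = "Full" } } | ConvertTo-Json -Depth 5
Invoke-MgGraphRequest -Method POST `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/jobs/$jobId/restart" `
  -Body $body
```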
### Test deletion prevention
Let the provisioning job run (20 – 40 mins) and navigate back to the provision
## Common de-provisioning scenarios to test
- Delete a user / put them into the recycle bin.
- Block sign in for a user.
-- Unassign a user or group from the application.
-- Remove a user from a group that's providing them access to the app.
+- Unassign a user or group from the application (or configuration).
+- Remove a user from a group that's providing them access to the application (or configuration).
To learn more about de-provisioning scenarios, see [How Application Provisioning Works](how-provisioning-works.md#de-provisioning).
## Frequently Asked Questions
### What scenarios count toward the deletion threshold?
-When a user is set to be removed from the target application, it will be counted against the
+When a user is set to be removed from the target application (or target tenant), it will be counted against the
deletion threshold. Scenarios that could lead to a user being removed from the target
-application could include: unassigning the user from the application and soft / hard deleting a user in the directory. Groups
+application (or target tenant) could include: unassigning the user from the application (or configuration) and soft / hard deleting a user in the directory. Groups
evaluated for deletion count towards the deletion threshold. In addition to deletions, the same functionality also works for disables.
### What is the interval that the deletion threshold is evaluated on?
-It is evaluated each cycle. If the number of deletions doesn't exceed the threshold during a
+It's evaluated each cycle. If the number of deletions doesn't exceed the threshold during a
single cycle, the "circuit breaker" won't be triggered. If multiple cycles are needed to reach a steady state, the deletion threshold will be evaluated per cycle.
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Title: Use scoping filters in Azure Active Directory Application Provisioning
-description: Learn how to use scoping filters to prevent objects in apps that support automated user provisioning from being provisioned if an object doesn't satisfy your business requirements in Azure Active Directory Application Provisioning.
+ Title: Scoping users or groups to be provisioned with scoping filters in Azure Active Directory
+description: Learn how to use scoping filters to define attribute-based rules that determine which users or groups are provisioned in Azure Active Directory.
Previously updated : 06/15/2022 Last updated : 01/23/2023
+zone_pivot_groups: app-provisioning-cross-tenant-synchronization
-# Attribute-based application provisioning with scoping filters
-The objective of this article is to explain how to use scoping filters to define attribute-based rules that determine which users are provisioned to an application.
+# Scoping users or groups to be provisioned with scoping filters
+
+> [!IMPORTANT]
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article describes how to use scoping filters in the Azure Active Directory (Azure AD) provisioning service to define attribute-based rules that determine which users or groups are provisioned.
## Scoping filter use cases
-A scoping filter allows the Azure Active Directory (Azure AD) provisioning service to include or exclude any users who have an attribute that matches a specific value. For example, when provisioning users from Azure AD to a SaaS application used by a sales team, you can specify that only users with a "Department" attribute of "Sales" should be in scope for provisioning.
+You use scoping filters to prevent objects in applications that support automated user provisioning from being provisioned if an object doesn't satisfy your business requirements. A scoping filter allows you to include or exclude any users who have an attribute that matches a specific value. For example, when provisioning users from Azure AD to a SaaS application used by a sales team, you can specify that only users with a "Department" attribute of "Sales" should be in scope for provisioning.
Scoping filters can be used differently depending on the type of provisioning connector:
Scoping filters can be used differently depending on the type of provisioning co
* **Inbound provisioning from HCM applications to Azure AD and Active Directory**. When an [HCM application such as Workday](../saas-apps/workday-tutorial.md) is the source system, scoping filters are the primary method for determining which users should be provisioned from the HCM application to Active Directory or Azure AD.
-By default, Azure AD provisioning connectors do not have any attribute-based scoping filters configured.
+By default, Azure AD provisioning connectors don't have any attribute-based scoping filters configured.
+
+When Azure AD is the source system, [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) are the most common method for determining which users are in scope for provisioning. Reducing the number of users in scope improves performance, and synchronizing assigned users and groups instead of all users and groups is recommended.
+
+Scoping filters can be used optionally, in addition to scoping by assignment. A scoping filter allows the Azure AD provisioning service to include or exclude any users who have an attribute that matches a specific value. For example, when provisioning users from a sales team, you can specify that only users with a "Department" attribute of "Sales" should be in scope for provisioning.
## Scoping filter construction
According to this scoping filter, users must satisfy the following criteria to b
Scoping filters are configured as part of the attribute mappings for each Azure AD user provisioning connector. The following procedure assumes that you already set up automatic provisioning for [one of the supported applications](../saas-apps/tutorial-list.md) and are adding a scoping filter to it.
### Create a scoping filter
-1. In the [Azure portal](https://portal.azure.com), go to the **Azure Active Directory** > **Enterprise Applications** > **All applications** section.
-2. Select the application for which you have configured automatic provisioning: for example, "ServiceNow".
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to the **Azure Active Directory** > **Enterprise applications** > **All applications**.
+
+3. Select the application for which you have configured automatic provisioning: for example, "ServiceNow".
+
+2. Go to **Azure Active Directory** > **Cross-tenant Synchronization** > **Configurations**
+
+3. Select your configuration.
+
+4. Select the **Provisioning** tab.
-3. Select the **Provisioning** tab.
+5. In the **Mappings** section, select the mapping that you want to configure a scoping filter for: for example, "Synchronize Azure Active Directory Users to ServiceNow".
-4. In the **Mappings** section, select the mapping that you want to configure a scoping filter for: for example, "Synchronize Azure Active Directory Users to ServiceNow".
+5. In the **Mappings** section, select the mapping that you want to configure a scoping filter for: for example, "Provision Azure Active Directory Users".
-5. Select the **Source object scope** menu.
+6. Select the **Source object scope** menu.
-6. Select **Add scoping filter**.
+7. Select **Add scoping filter**.
-7. Define a clause by selecting a source **Attribute Name**, an **Operator**, and an **Attribute Value** to match against. The following operators are supported:
+8. Define a clause by selecting a source **Attribute Name**, an **Operator**, and an **Attribute Value** to match against. The following operators are supported:
a. **EQUALS**. Clause returns "true" if the evaluated attribute matches the input string value exactly (case sensitive).
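To review the scoping filters a connector already has outside the portal, the following hedged sketch reads the provisioning job's schema with the Microsoft Graph PowerShell SDK. The application display name, the required permission scopes, and the property paths shown are assumptions rather than values from this article.

```powershell
# Hedged sketch: inspect scoping filters defined on a provisioning job.
# "ServiceNow" is an example display name; replace it with your application's name.
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'ServiceNow'"
$jobs = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$($sp.Id)/synchronization/jobs"
$schema = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$($sp.Id)/synchronization/jobs/$($jobs.value[0].id)/schema"
# Scoping filters appear under each object mapping's scope element.
$schema.synchronizationRules | ConvertTo-Json -Depth 15
```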
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Title: Known issues for application provisioning in Azure Active Directory
-description: Learn about known issues when you work with automated application provisioning in Azure Active Directory.
+ Title: Known issues for provisioning in Azure Active Directory
+description: Learn about known issues when you work with automated application provisioning or cross-tenant synchronization in Azure Active Directory.
Previously updated : 10/20/2022 Last updated : 01/23/2023
+zone_pivot_groups: app-provisioning-cross-tenant-synchronization
-# Known issues for application provisioning in Azure Active Directory
-This article discusses known issues to be aware of when you work with app provisioning. To provide feedback about the application provisioning service on UserVoice, see [Azure Active Directory (Azure AD) application provision UserVoice](https://aka.ms/appprovisioningfeaturerequest). We watch UserVoice closely so that we can improve the service.
+# Known issues for provisioning in Azure Active Directory
+
+> [!IMPORTANT]
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article discusses known issues to be aware of when you work with app provisioning or cross-tenant synchronization. To provide feedback about the application provisioning service on UserVoice, see [Azure Active Directory (Azure AD) application provision UserVoice](https://aka.ms/appprovisioningfeaturerequest). We watch UserVoice closely so that we can improve the service.
> [!NOTE]
> This article isn't a comprehensive list of known issues. If you know of an issue that isn't listed, provide feedback at the bottom of the page.
+## Cross-tenant synchronization
+
+### Unsupported synchronization scenarios
+
+- Restoring a previously soft-deleted user in the target tenant
+- Synchronizing groups, devices, and contacts into another tenant
+- Synchronizing users across clouds
+- Synchronizing photos across tenants
+- Synchronizing contacts and converting contacts to B2B users
+
+### Provisioning users
+
+An external user from the source (home) tenant can't be provisioned into another tenant. Internal guest users from the source tenant can't be provisioned into another tenant. Only internal member users from the source tenant can be provisioned into the target tenant. For more information, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
+
+### Provisioning manager attributes
+
+Provisioning manager attributes isn't supported.
+
+### Universal people search
+
+It's possible for synchronized users to appear in the global address list (GAL) of the target tenant for people search scenarios, but it isn't enabled by default. In attribute mappings for a configuration, you must update the value for the **showInAddressList** attribute. Set the mapping type as constant with a default value of `True`. For any newly created B2B collaboration users, the showInAddressList attribute will be set to true and they'll appear in people search scenarios. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings).
+
+For existing B2B collaboration users, the showInAddressList attribute will be updated as long as the B2B collaboration user doesn't have a mailbox enabled in the target tenant. If the mailbox is enabled in the target tenant, use the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet to set the HiddenFromAddressListsEnabled property to a value of $false.
+
+`Set-MailUser [GuestUserUPN] -HiddenFromAddressListsEnabled:$false`
+
+Where [GuestUserUPN] is the calculated UserPrincipalName. Example:
+
+`Set-MailUser guestuser1_contoso.com#EXT#@fabricam.onmicrosoft.com -HiddenFromAddressListsEnabled:$false`
+
+For more information, see [About the Exchange Online PowerShell module](/powershell/exchange/exchange-online-powershell-v2).
+
+### Configuring synchronization from target tenant
+
+Configuring synchronization from the target tenant isn't supported. All configurations must be done in the source tenant. Note that the target administrator is able to turn off cross-tenant synchronization at any time.
+
+### Usage of Azure AD B2B collaboration for cross-tenant access
+
+- B2B users are unable to manage certain Microsoft 365 services in remote tenants (such as Exchange Online), as there's no directory picker.
+- Azure Virtual Desktop currently doesn't support B2B users.
+- B2B users with UserType Member aren't currently supported in Power BI. For more information, see [Distribute Power BI content to external guest users using Azure Active Directory B2B](/power-bi/guidance/whitepaper-azure-b2b-power-bi)
+- Converting a guest account into an Azure AD member account or converting an Azure AD member account into a guest isn't supported by Teams. For more information, see [Guest access in Microsoft Teams](/microsoftteams/guest-access).
## Authorization
#### Unable to save
The tenant URL, secret token, and notification email must be filled in to save. You can't provide only one of them.
#### Unable to change provisioning mode back to manual
Multivalue directory extensions can't be used in attribute mappings or scoping f
- Provisioning to B2C tenants isn't supported because of the size of the tenants.
- Not all provisioning apps are available in all clouds. For example, Atlassian isn't yet available in the Government cloud. We're working with app developers to onboard their apps to all clouds.
#### Automatic provisioning isn't available on my OIDC-based application
If you create an app registration, the corresponding service principal in enterprise apps won't be enabled for automatic user provisioning. You'll need to either request the app be added to the gallery, if intended for use by multiple organizations, or create a second non-gallery app for provisioning.
#### The provisioning interval is fixed
When a group is in scope and a member is out of scope, the group will be provisi
If a user and their manager are both in scope for provisioning, the service provisions the user and then updates the manager. If on day one the user is in scope and the manager is out of scope, we'll provision the user without the manager reference. When the manager comes into scope, the manager reference won't be updated until you restart provisioning and cause the service to reevaluate all the users again.
-#### Global reader
+#### Global Reader
-The global reader role is unable to read the provisioning configuration. Please create a custom role with the `microsoft.directory/applications/synchronization/standard/read` permission in order to read the provisioning configuration from the Azure Portal.
+The Global Reader role is unable to read the provisioning configuration. Create a custom role with the `microsoft.directory/applications/synchronization/standard/read` permission in order to read the provisioning configuration from the Azure portal.
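As a hedged illustration, a custom role like the following could be created with the Microsoft Graph PowerShell SDK. The display name and description are examples, and RoleManagement.ReadWrite.Directory is assumed as the permission needed to create role definitions.

```powershell
# Hedged sketch: a custom role limited to reading provisioning configuration.
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"
$role = @{
    displayName     = "Provisioning configuration reader"   # example name
    description     = "Can read application provisioning configuration"
    isEnabled       = $true
    rolePermissions = @(
        @{ allowedResourceActions = @("microsoft.directory/applications/synchronization/standard/read") }
    )
}
New-MgRoleManagementDirectoryRoleDefinition -BodyParameter $role
```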
#### Microsoft Azure Government Cloud
Credentials, including the secret token, notification email, and SSO certificate notification emails together have a 1KB limit in the Microsoft Azure Government Cloud.
## On-premises application provisioning
The following information is a current list of known limitations with the Azure AD ECMA Connector Host and on-premises application provisioning.
The following attributes and objects aren't supported:
  - Groups.
  - Complex anchors (for example, ObjectTypeName+UserName).
  - Binary attributes.
- - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview does not support password synchronization. Provisioning initial one-time passwords is supported. Please ensure that you are using the [Redact](./functions-for-customizing-application-data.md#redact) function to redact the passwords from the logs. In the SQL and LDAP connectors, the passwords are not exported on the initial call to the application, but rather a second call with set password.
+ - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview doesn't support password synchronization. Provisioning initial one-time passwords is supported. Ensure that you're using the [Redact](./functions-for-customizing-application-data.md#redact) function to redact the passwords from the logs. In the SQL and LDAP connectors, the passwords aren't exported on the initial call to the application, but rather a second call with set password.
#### SSL certificates
The Azure AD ECMA Connector Host currently requires either an SSL certificate to be trusted by Azure or the provisioning agent to be used. The certificate subject must match the host name the Azure AD ECMA Connector Host is installed on.
The following attributes and objects aren't supported:
The attributes that the target application supports are discovered and surfaced in the Azure portal in **Attribute Mappings**. Newly added attributes will continue to be discovered. If an attribute type has changed, for example, string to Boolean, and the attribute is part of the mappings, the type won't change automatically in the Azure portal. Customers will need to go into advanced settings in mappings and manually update the attribute type.
#### Provisioning agent
-- The agent does not currently support auto update for the on-prem application provisioning scenario. We are actively working to close this gap and ensure that auto update is enabled by default and required for all customers.
-- The same provisioning agent cannot be used for on-prem app provisioning and cloud sync / HR- driven provisioning.
+- The agent doesn't currently support auto update for the on-premises application provisioning scenario. We're actively working to close this gap and ensure that auto update is enabled by default and required for all customers.
+- The same provisioning agent can't be used for on-premises app provisioning and cloud sync / HR-driven provisioning.
#### ECMA Host
-The ECMA host does not support updating the password in the connectivity page of the wizard. Please create a new connector when changing the password.
+The ECMA host doesn't support updating the password in the connectivity page of the wizard. Create a new connector when changing the password.
## Next steps
[How provisioning works](how-provisioning-works.md)
active-directory On Premises Ldap Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-configure.md
Title: Azure AD Provisioning to LDAP directories (preview)
+ Title: Azure AD Provisioning to LDAP directories
description: This document describes how to configure Azure AD to provision users into an LDAP directory.
active-directory On Premises Ldap Connector Prepare Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-prepare-directory.md
Title: Preparing for Azure AD Provisioning to Active Directory Lightweight Directory Services (preview)
+ Title: Preparing for Azure AD Provisioning to Active Directory Lightweight Directory Services
description: This document describes how to configure Azure AD to provision users into Active Directory Lightweight Directory Services as an example of an LDAP directory.
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Previously updated : 07/06/2022 Last updated : 01/23/2023
+zone_pivot_groups: app-provisioning-cross-tenant-synchronization
# On-demand provisioning in Azure Active Directory+
+> [!IMPORTANT]
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Use on-demand provisioning to provision a user or group in seconds. Among other things, you can use this capability to: * Troubleshoot configuration issues quickly.
Use on-demand provisioning to provision a user or group in seconds. Among other
## How to use on-demand provisioning
-1. Sign in to the **Azure portal**.
-1. Go to **All services** > **Enterprise applications**.
-1. Select your application, and then go to the provisioning configuration page.
-1. Configure provisioning by providing your admin credentials.
-1. Select **Provision on demand**.
-1. Search for a user by first name, last name, display name, user principal name, or email address. Alternatively, you can search for a group and pick up to 5 users.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to **Azure Active Directory** > **Enterprise applications** > **All applications**.
+
+3. Select your application, and then go to the provisioning configuration page.
+
+2. Go to **Azure Active Directory** > **Cross-tenant Synchronization** > **Configurations**
+
+3. Select your configuration, and then go to the **Provisioning** configuration page.
+
+4. Configure provisioning by providing your admin credentials.
+
+5. Select **Provision on demand**.
+
+6. Search for a user by first name, last name, display name, user principal name, or email address. Alternatively, you can search for a group and pick up to 5 users.
> [!NOTE]
> For Cloud HR provisioning app (Workday/SuccessFactors to AD/Azure AD), the input value is different.
> For Workday scenario, please provide "WorkerID" or "WID" of the user in Workday.
> For SuccessFactors scenario, please provide "personIdExternal" of the user in SuccessFactors.
-1. Select **Provision** at the bottom of the page.
+7. Select **Provision** at the bottom of the page.
+ :::image type="content" source="media/provision-on-demand/on-demand-provision-user.png" alt-text="Screenshot that shows the Azure portal UI for provisioning a user on demand." lightbox="media/provision-on-demand/on-demand-provision-user.png":::
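On-demand provisioning can also be triggered through Microsoft Graph (synchronizationJob: provisionOnDemand). The sketch below is a hedged example; the service principal, job, rule, and user IDs are placeholders, and the Synchronization.ReadWrite.All scope is an assumption.

```powershell
# Hedged sketch: provision a single user on demand via Microsoft Graph.
Connect-MgGraph -Scopes "Synchronization.ReadWrite.All"
$spId  = "<service-principal-object-id>"
$jobId = "<synchronization-job-id>"          # from GET .../synchronization/jobs
$body = @{
    parameters = @(
        @{
            ruleId   = "<synchronization-rule-id>"           # from the job's schema
            subjects = @(
                @{ objectId = "<azure-ad-user-object-id>"; objectTypeName = "User" }
            )
        }
    )
} | ConvertTo-Json -Depth 10
Invoke-MgGraphRequest -Method POST `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/jobs/$jobId/provisionOnDemand" `
  -Body $body
```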
## Understand the provisioning steps
The on-demand provisioning process attempts to show the steps that the provision
### Step 1: Test connection
-The provisioning service attempts to authorize access to the target application by making a request for a "test user". The provisioning service expects a response that indicates that the service authorized to continue with the provisioning steps. This step is shown only when it fails. It's not shown during the on-demand provisioning experience when the step is successful.
+The provisioning service attempts to authorize access to the target system by making a request for a "test user". The provisioning service expects a response that indicates that the service is authorized to continue with the provisioning steps. This step is shown only when it fails. It's not shown during the on-demand provisioning experience when the step is successful.
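As a hedged illustration, the authorization check typically resembles a filtered SCIM query like the one below. The endpoint, bearer token, and user name are placeholders for your application's own values.

```powershell
# Hedged illustration of a filtered SCIM query similar to the service's "test user" request.
$scimEndpoint = "https://api.contoso.com/scim"                       # placeholder endpoint
$token        = "<secret token from the provisioning configuration>" # placeholder token
$filter       = [uri]::EscapeDataString('userName eq "testuser@contoso.com"')
Invoke-RestMethod -Method Get `
  -Uri "$scimEndpoint/Users?filter=$filter" `
  -Headers @{ Authorization = "Bearer $token" }
```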
#### Troubleshooting tips
-* Ensure that you've provided valid credentials, such as the secret token and tenant URL, to the target application. The required credentials vary by application. For detailed configuration tutorials, see the [tutorial list](../saas-apps/tutorial-list.md).
-* Make sure that the target application supports filtering on the matching attributes defined in the **Attribute mappings** pane. You might need to check the API documentation provided by the application developer to understand the supported filters.
+* Ensure that you've provided valid credentials, such as the secret token and tenant URL, to the target system. The required credentials vary by application. For detailed configuration tutorials, see the [tutorial list](../saas-apps/tutorial-list.md).
+* Make sure that the target system supports filtering on the matching attributes defined in the **Attribute mappings** pane. You might need to check the API documentation provided by the application developer to understand the supported filters.
* For System for Cross-domain Identity Management (SCIM) applications, you can use a tool like Postman. Such tools help you ensure that the application responds to authorization requests in the way that the Azure Active Directory (Azure AD) provisioning service expects. Have a look at an [example request](./use-scim-to-provision-users-and-groups.md#request-3).
### Step 2: Import user
The **View details** page shows the properties of the users that were matched in
#### Troubleshooting tips * The provisioning service might not be able to match a user in the source system uniquely with a user in the target. Resolve this problem by ensuring that the matching attribute is unique.
-* Make sure that the target application supports filtering on the attribute that's defined as the matching attribute.
+* Make sure that the target system supports filtering on the attribute that's defined as the matching attribute.
### Step 5: Perform action
Here's an example of what you might see after the successful on-demand provision
#### View details
-The **View details** section displays the attributes that were modified in the target application. This display represents the final output of the provisioning service activity and the attributes that were exported. If this step fails, the attributes displayed represent the attributes that the provisioning service attempted to modify.
+The **View details** section displays the attributes that were modified in the target system. This display represents the final output of the provisioning service activity and the attributes that were exported. If this step fails, the attributes displayed represent the attributes that the provisioning service attempted to modify.
#### Troubleshooting tips
* Failures for exporting changes can vary greatly. Check the [documentation for provisioning logs](../reports-monitoring/concept-provisioning-logs.md#error-codes) for common failures.
-* On-demand provisioning says the group or user can't be provisioned because they're not assigned to the application. Note that there is a replicate delay of up to a few minutes between when an object is assigned to an application and that assignment being honored by on-demand provisioning. You may need to wait a few minutes and try again.
+* On-demand provisioning says the group or user can't be provisioned because they're not assigned to the application. Note that there's a replication delay of up to a few minutes between when an object is assigned to an application and when that assignment is honored by on-demand provisioning. You may need to wait a few minutes and try again.
## Frequently asked questions
The **View details** section displays the attributes that were modified in the t
There are currently a few known limitations to on-demand provisioning. Post your [suggestions and feedback](https://aka.ms/appprovisioningfeaturerequest) so we can better determine what improvements to make next.
> [!NOTE]
> The following limitations are specific to the on-demand provisioning capability. For information about whether an application supports provisioning groups, deletions, or other capabilities, check the tutorial for that application.
* On-demand provisioning of groups supports updating up to 5 members at a time
+* Restoring a previously soft-deleted user in the target tenant with on-demand provisioning isn't supported. If you try to soft delete a user with on-demand provisioning and then restore the user, it can result in duplicate users.
* On-demand provisioning of roles isn't supported. * On-demand provisioning supports disabling users that have been unassigned from the application. However, it doesn't support disabling or deleting users that have been disabled or deleted from Azure AD. Those users won't appear when you search for a user.
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Previously updated : 10/20/2022 Last updated : 01/23/2023
By default, the Azure AD provisioning engine soft deletes or disables users that
This article describes how to use the Microsoft Graph API and the Microsoft Graph API explorer to set the flag ***SkipOutOfScopeDeletions*** that controls the processing of accounts that go out of scope.
* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope will be disabled in the target.
-* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope will not be disabled in the target. This flag is set at the *Provisioning App* level and can be configured using the Graph API.
+* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope won't be disabled in the target. This flag is set at the *Provisioning App* level and can be configured using the Graph API.
-Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox. Note that in order to successfully complete this procedure you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
+Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox and [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). Note that in order to successfully complete this procedure you must have first set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID)
1. Launch the [Azure portal](https://portal.azure.com), and navigate to the Properties section of your provisioning application. For example, if you want to export your *Workday to AD User Provisioning application* mapping, navigate to the Properties section of that app.
1. In the Properties section of your provisioning app, copy the GUID value associated with the *Object ID* field. This value is also called the **ServicePrincipalId** of your App and it will be used in Graph Explorer operations.
- ![Workday App Service Principal ID](./media/skip-out-of-scope-deletions/wd_export_01.png)
+ ![Screenshot of Workday App Service Principal ID.](./media/skip-out-of-scope-deletions/wd_export_01.png)
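If you prefer PowerShell to the portal, the following hedged sketch retrieves the same ServicePrincipalId with the Microsoft Graph PowerShell SDK. The display name is an example; use your own provisioning application's name.

```powershell
# Hedged alternative to the portal lookup in Step 1.
Connect-MgGraph -Scopes "Application.Read.All"
(Get-MgServicePrincipal -Filter "displayName eq 'Workday to AD User Provisioning'").Id
```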
## Step 2: Sign into Microsoft Graph Explorer
1. Launch [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)
1. Click on the "Sign-In with Microsoft" button and sign-in using Azure AD Global Admin or App Admin credentials.
- ![Graph Sign-in](./media/skip-out-of-scope-deletions/wd_export_02.png)
+ ![Screenshot of Microsoft Graph Explorer Sign-in.](./media/skip-out-of-scope-deletions/wd_export_02.png)
-1. Upon successful sign-in, you will see the user account details in the left-hand pane.
+1. Upon successful sign-in, you'll see the user account details in the left-hand pane.
## Step 3: Get existing app credentials and connectivity details
In the Microsoft Graph Explorer, run the following GET query replacing [serviceP
GET https://graph.microsoft.com/beta/servicePrincipals/[servicePrincipalId]/synchronization/secrets ```
- ![GET job query](./media/skip-out-of-scope-deletions/skip-03.png)
+ ![Screenshot of GET job query.](./media/skip-out-of-scope-deletions/skip-03.png)
Copy the Response into a text file. It will look like the JSON text shown below, with values highlighted in yellow specific to your deployment. Add the lines highlighted in green to the end and update the Workday connection password highlighted in blue.
- ![GET job response](./media/skip-out-of-scope-deletions/skip-04.png)
+ ![Screenshot of GET job response.](./media/skip-out-of-scope-deletions/skip-04.png)
-Here is the JSON block to add to the mapping.
+Here's the JSON block to add to the mapping.
```json
{
In the URL below replace [servicePrincipalId] with the **ServicePrincipalId** e
```
Copy the updated text from Step 3 into the "Request Body".
- ![PUT request](./media/skip-out-of-scope-deletions/skip-05.png)
+ ![Screenshot of PUT request.](./media/skip-out-of-scope-deletions/skip-05.png)
Click on "Run Query".
-You should get the output as "Success ΓÇô Status Code 204". If you receive an error you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by clicking on the *Modify permissions* tab in Graph Explorer.
+You should get the output as "Success – Status Code 204". If you receive an error, you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by clicking on the *Modify permissions* tab in Graph Explorer.
- ![PUT response](./media/skip-out-of-scope-deletions/skip-06.png)
+ ![Screenshot of PUT response.](./media/skip-out-of-scope-deletions/skip-06.png)
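The same GET/PUT flow can be scripted with the Microsoft Graph PowerShell SDK instead of Graph Explorer. This is a hedged sketch: the permission scopes are assumptions, the service principal ID is a placeholder, and the "True" value mirrors the flag described earlier in this article.

```powershell
# Hedged sketch of Steps 3-4 using Invoke-MgGraphRequest.
Connect-MgGraph -Scopes "Application.ReadWrite.All", "Synchronization.ReadWrite.All"
$spId    = "<servicePrincipalId>"
$secrets = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/secrets"
# Append the SkipOutOfScopeDeletions flag to the existing key/value collection.
$secrets.value += @{ key = "SkipOutOfScopeDeletions"; value = "True" }
Invoke-MgGraphRequest -Method PUT `
  -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId/synchronization/secrets" `
  -Body (@{ value = $secrets.value } | ConvertTo-Json -Depth 10)
```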
## Step 5: Verify that out of scope users don't get disabled
-You can test this flag results in expected behavior by updating your scoping rules to skip a specific user. In the example below, we are excluding the employee with ID 21173 (who was earlier in scope) by adding a new scoping rule:
+You can test this flag results in expected behavior by updating your scoping rules to skip a specific user. In the example below, we're excluding the employee with ID 21173 (who was earlier in scope) by adding a new scoping rule:
![Screenshot that shows the "Add Scoping Filter" section with an example user highlighted.](./media/skip-out-of-scope-deletions/skip-07.png)
In the next provisioning cycle, the Azure AD provisioning service will identify that the user 21173 has gone out of scope and if the SkipOutOfScopeDeletions property is enabled, then the synchronization rule for that user will display a message as shown below:
- ![Scoping example](./media/skip-out-of-scope-deletions/skip-08.png)
+ ![Screenshot of scoping example.](./media/skip-out-of-scope-deletions/skip-08.png)
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Title: Azure AD Provisioning to SQL applications (preview)
+ Title: Azure AD Provisioning to SQL applications
description: This tutorial describes how to provision users from Azure AD into a SQL database.
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps are confirmed to support this setting, this list isn't
- Microsoft Cortana - Microsoft Edge - Microsoft Excel
+- Microsoft Flow Mobile
- Microsoft Launcher - Microsoft Lists - Microsoft Office
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
Run the following cmdlets:
To identify unmanaged Azure AD accounts, run: -- `Connect-MgGraph --Scope User.Read.All`
+- `Connect-MgGraph -Scopes User.Read.All`
- `Get-MsIdUnmanagedExternalUser` To reset unmanaged Azure AD account redemption status, run: -- `Connect-MgGraph --Scope User.Readwrite.All`
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
- `Get-MsIdUnmanagedExternalUser | Reset-MsIdExternalUser` To delete unmanaged Azure AD accounts, run: -- `Connect-MgGraph --Scope User.Readwrite.All`
+- `Connect-MgGraph -Scopes User.ReadWrite.All`
- `Get-MsIdUnmanagedExternalUser | Remove-MgUser`
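Put together, a hedged end-to-end sketch looks like the following. It assumes the Microsoft.Graph and MSIdentityTools modules are installed and that you review the list before resetting or deleting anything.

```powershell
# Hedged end-to-end sketch for cleaning up unmanaged Azure AD accounts.
Install-Module Microsoft.Graph, MSIdentityTools -Scope CurrentUser
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Review unmanaged external users before acting on them.
$unmanaged = Get-MsIdUnmanagedExternalUser
$unmanaged | Select-Object DisplayName, Mail

# Reset the redemption status, or pipe to Remove-MgUser instead to delete the accounts.
$unmanaged | Reset-MsIdExternalUser
```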
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Previously updated : 08/05/2022 Last updated : 01/23/2023
The default cross-tenant access settings apply to all Azure AD organizations ext
- **Organizational settings**: No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization.
+- **Cross-tenant sync (preview)**: No users from other tenants are synchronized into your tenant with cross-tenant synchronization.
+ The behaviors described above apply to B2B collaboration with other Azure AD tenants in your same Microsoft Azure cloud. In cross-cloud scenarios, default settings work a little differently. See [Microsoft cloud settings](#microsoft-cloud-settings) later in this article.
## Organizational settings
You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization. Organizational settings take precedence over default settings.
-- For B2B collaboration with other Azure AD organizations, use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+- **B2B collaboration**: For B2B collaboration with other Azure AD organizations, use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
> [!TIP] > We recommend excluding external users from the [Identity Protection MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md), if you are going to [trust MFA for external users](authentication-conditional-access.md#mfa-for-azure-ad-external-users). When both policies are present, external users won't be able to satisfy the requirements for access. -- For B2B direct connect, use organizational settings to set up a mutual trust relationship with another Azure AD organization. Both your organization and the external organization need to mutually enable B2B direct connect by configuring inbound and outbound cross-tenant access settings.
+- **B2B direct connect**: For B2B direct connect, use organizational settings to set up a mutual trust relationship with another Azure AD organization. Both your organization and the external organization need to mutually enable B2B direct connect by configuring inbound and outbound cross-tenant access settings.
+
+- You can use **External collaboration settings** to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
+
+### Automatic redemption setting
+
+> [!IMPORTANT]
+> Automatic redemption is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API. For information about building your own onboarding experience, see [B2B collaboration invitation manager](external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration).
+
+For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md), [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md), and [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md).
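+
+A minimal sketch of the inbound side of this call, using a placeholder partner tenant ID, might look like the following; the partner tenant enables the outbound side in the same way:
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{partnerTenantId}
+Content-Type: application/json
+
+{
+  "automaticUserConsentSettings":
+  {
+    "inboundAllowed": true
+  }
+}
+```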
+
+### Cross-tenant synchronization setting
+
+> [!IMPORTANT]
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ -- You can use external collaboration settings to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
+To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md).
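+
+For example, a minimal sketch of enabling inbound user synchronization for a partner tenant (placeholder tenant ID) might look like this:
+
+```http
+PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/{partnerTenantId}/identitySynchronization
+Content-Type: application/json
+
+{
+  "userSyncInbound":
+  {
+    "isSyncAllowed": true
+  }
+}
+```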
## Microsoft cloud settings
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Previously updated : 06/30/2022 Last updated : 01/23/2023
With inbound settings, you select which external users and groups will be able t
![Screenshot showing trust settings.](media/cross-tenant-access-settings-b2b-collaboration/inbound-trust-settings.png)
+1. (This step applies to **Organizational settings** only.) Review the consent prompt option:
+
+ - **Suppress consent prompts for users from the other tenant when they access apps and resources in my tenant**: Select this checkbox if you want to automatically redeem invitations so users from the specified tenant don't have to accept the consent prompt when they're added to this tenant using B2B collaboration. This setting will only suppress the consent prompt if the specified tenant checks this setting for outbound access as well.
+
+ ![Screenshot that shows the inbound suppress consent prompt check box.](../media/external-identities/inbound-consent-prompt-setting.png)
+ 1. Select **Save**. ## Modify outbound access settings
With outbound settings, you select which of your users and groups will be able t
1. Select **Save**.
+### To change outbound trust settings
+
+(This section applies to **Organizational settings** only.)
+
+1. Select the **Trust settings** tab.
+
+1. Review the consent prompt option:
+
+ - **Suppress consent prompts for users from my tenant when they access apps and resources in the other tenant**: Select this checkbox if you want to automatically redeem invitations so users from this tenant don't have to accept the consent prompt when they're added to the specified tenant using B2B collaboration. This setting will only suppress the consent prompt if the specified tenant checks this setting for inbound access as well.
+
+ ![Screenshot that shows the outbound suppress consent prompt check box.](../media/external-identities/outbound-consent-prompt-setting.png)
+
+1. Select **Save**.
+ ## Remove an organization When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for that organization.
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
Previously updated : 08/05/2022 Last updated : 01/23/2023
With inbound settings, you select which external users and groups will be able t
- **Trust hybrid Azure AD joined devices**: Allows your Conditional Access policies to trust hybrid Azure AD joined device claims from an external organization when their users access your resources.
- ![Screenshot showing inbound trust settings](media/cross-tenant-access-settings-b2b-direct-connect/inbound-trust-settings.png)
+ ![Screenshot showing inbound trust settings.](media/cross-tenant-access-settings-b2b-direct-connect/inbound-trust-settings.png)
+
+1. (This step applies to **Organizational settings** only.) Review the consent prompt option:
+
+ - **Suppress consent prompts for users from the other tenant when they access apps and resources in my tenant**: Select this checkbox if you want to automatically redeem invitations so users from the specified tenant don't have to accept the consent prompt when they access resources in this tenant using B2B direct connect. This setting will only suppress the consent prompt if the specified tenant checks this setting for outbound access as well.
+
+ ![Screenshot that shows the inbound suppress consent prompt check box.](../media/external-identities/inbound-consent-prompt-setting.png)
1. Select **Save**.
With outbound settings, you select which of your users and groups will be able t
1. Select **Save**.
+### To change outbound trust settings
+
+(This section applies to **Organizational settings** only.)
+
+1. Select the **Trust settings** tab.
+
+1. Review the consent prompt option:
+
+ - **Suppress consent prompts for users from my tenant when they access apps and resources in the other tenant**: Select this checkbox if you want to automatically redeem invitations so users from this tenant don't have to accept the consent prompt when they access resources in the specified tenant using B2B direct connect. This setting will only suppress the consent prompt if the specified tenant checks this setting for inbound access as well.
+
+ ![Screenshot that shows the outbound suppress consent prompt check box.](../media/external-identities/outbound-consent-prompt-setting.png)
+
+1. Select **Save**.
+ ## Remove an organization When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for that organization.
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 12/16/2022 Last updated : 01/23/2023
Guest users can now sign in to your multi-tenant or Microsoft first-party apps t
![Screenshots showing common endpoints used for signing in.](media/redemption-experience/common-endpoint-flow-small.png)
-The user is then redirected to your tenanted endpoint, where they can either sign in with their email address or select an identity provider you've configured.
+The user is then redirected to your tenant-specific endpoint, where they can either sign in with their email address or select an identity provider you've configured.
## Redemption through a direct link
When a guest signs in to a resource in a partner organization for the first time
In your directory, the guest's **Invitation accepted** value changes to **Yes**. If an MSA was created, the guest's **Source** shows **Microsoft Account**. For more information about guest user account properties, see [Properties of an Azure AD B2B collaboration user](user-properties.md). If you see an error that requires admin consent while accessing an application, see [how to grant admin consent to apps](../develop/v2-admin-consent.md).
+### Automatic redemption setting
+
+> [!IMPORTANT]
+> Automatic redemption is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+You might want to automatically redeem invitations so users don't have to accept the consent prompt when they're added to another tenant for B2B collaboration. When configured, a notification email is sent to the B2B collaboration user that requires no action from the user. Users are sent the notification email directly and they don't need to access the tenant first before they receive the email. The following shows an example notification email if you automatically redeem invitations in both tenants.
++
+For information about how to automatically redeem invitations, see [cross-tenant access overview](cross-tenant-access-overview.md#automatic-redemption-setting) and [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md).
+ ## Next steps - [What is Azure AD B2B collaboration?](what-is-b2b.md)
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 01/09/2023 Last updated : 01/23/2023
The following table describes B2B collaboration users based on how they authenti
- **Internal guest**: Before Azure AD B2B collaboration was available, it was common to collaborate with distributors, suppliers, vendors, and others by setting up internal credentials for them and designating them as guests by setting the user object UserType to Guest. If you have internal guest users like these, you can invite them to use B2B collaboration instead so they can use their own credentials, allowing their external identity provider to manage authentication and their account lifecycle. - **Internal member**: These users are generally considered employees of your organization. The user authenticates internally via Azure AD, and the user object created in the resource Azure AD directory has a UserType of Member.
+The user type you choose has limitations for apps or services, including but not limited to the following:
+
+ > [!IMPORTANT] > The [email one-time passcode](one-time-passcode.md) feature is now turned on by default for all new tenants and for any existing tenants where you haven't explicitly turned it off. When this feature is turned off, the fallback authentication method is to prompt invitees to create a Microsoft account.
active-directory Active Directory Deployment Checklist P2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-checklist-p2.md
Previously updated : 12/07/2021 Last updated : 01/23/2023
Next, we add to the foundation laid in phase 1 by importing our users and enabli
| [Decide on device management strategy](../devices/overview.md) | Decide what your organization allows regarding devices. Registering vs joining, Bring Your Own Device vs company provided. | | | [Deploy Windows Hello for Business in your organization](/windows/security/identity-protection/hello-for-business/hello-manage-in-organization) | Prepare for passwordless authentication using Windows Hello | | | [Deploy passwordless authentication methods for your users](../authentication/concept-authentication-passwordless.md) | Provide your users with convenient passwordless authentication methods | Azure AD Premium P1 |
+| [Configure cross-tenant synchronization (preview)](../multi-tenant-organizations/cross-tenant-synchronization-configure.md) | For multi-tenant organization scenarios, enable users to collaborate across tenants. (Currently in preview.) | Azure AD Premium P1 |
## Phase 3: Manage applications
Phase 4 sees administrators enforcing least privilege principles for administrat
| Task | Detail | Required license | | - | | - |
-| [Enforce the use of Privileged Identity Management](../privileged-identity-management/pim-security-wizard.md) | Remove administrative roles from normal day-to-day user accounts. Make administrative users eligible to use their role after succeeding a multi-factor authentication check, providing a business justification, or requesting approval from approvers. | Azure AD Premium P2 |
+| [Enforce the use of Privileged Identity Management](../privileged-identity-management/pim-security-wizard.md) | Remove administrative roles from normal day-to-day user accounts. Make administrative users eligible to use their role after they pass a multi-factor authentication check, provide a business justification, or request approval from approvers. | Azure AD Premium P2 |
| [Complete an access review for Azure AD directory roles in PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) | Work with your security and leadership teams to create an access review policy to review administrative access based on your organization's policies. | Azure AD Premium P2 | | [Implement dynamic group membership policies](../enterprise-users/groups-dynamic-membership.md) | Use dynamic groups to automatically assign users to groups based on their attributes from HR (or your source of truth), such as department, title, region, and other attributes. | Azure AD Premium P1 | | [Implement group based application provisioning](../manage-apps/what-is-access-management.md) | Use group-based access management provisioning to automatically provision users for SaaS applications. | Azure AD Premium P1 |
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Title: Secure hybrid access with Datawiza
-description: Learn how to integrate Datawiza with Azure AD. See how to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
+ Title: Tutorial to configure Secure Hybrid Access with Azure Active Directory and Datawiza
+description: Learn to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
Previously updated : 05/19/2022 Last updated : 01/23/2023
# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza
-Datawiza's [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker) extends Azure AD to enable single sign-on (SSO) and provide granular access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP. By using this solution, enterprises can quickly transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can also use Datawiza as a no-code or low-code solution to integrate new applications to Azure AD. This approach enables enterprises to implement their Zero Trust strategy while saving engineering time and reducing costs.
+In this tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md). [Datawiza Access Proxy (DAP)](https://www.datawiza.com) extends Azure AD to enable single sign-on (SSO) and provide access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP. With this solution, enterprises can transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can use Datawiza as a no-code, or low-code, solution to integrate new applications to Azure AD. This approach enables enterprises to implement their Zero Trust strategy while saving engineering time and reducing costs.
-In this tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for [hybrid access](../devices/concept-azure-ad-join-hybrid.md).
+Learn more: [Zero Trust security](../../security/fundamentals/zero-trust.md)
## Datawiza with Azure AD Authentication Architecture Datawiza integration includes the following components: -- [Azure AD](../fundamentals/active-directory-whatis.md) - A cloud-based identity and access management service from Microsoft. Azure AD helps users sign in and access external and internal resources.
+* **[Azure AD](../fundamentals/active-directory-whatis.md)** - Identity and access management service that helps users sign in and access external and internal resources
+* **Datawiza Access Proxy (DAP)** - This service transparently passes identity information to applications through HTTP headers
+* **Datawiza Cloud Management Console (DCMC)** - UI and RESTful APIs for administrators to manage the DAP configuration and access control policies
-- Datawiza Access Broker (DAB) - The service that users sign on to. DAB transparently passes identity information to applications through HTTP headers.
+The following diagram illustrates the authentication architecture with Datawiza in a hybrid environment.
-- Datawiza Cloud Management Console (DCMC) - A centralized management console that manages DAB. DCMC provides UI and RESTful APIs for administrators to manage the DAB configuration and access control policies.
+ ![Architecture diagram of the authentication process for user access to an on-premises application.](./media/datawiza-with-azure-active-directory/datawiza-architecture-diagram.png)
-The following diagram describes the authentication architecture orchestrated by Datawiza in a hybrid environment.
-
-![Architecture diagram that shows the authentication process that gives a user access to an on-premises application.](./media/datawiza-with-azure-active-directory/datawiza-architecture-diagram.png)
-
-|Step| Description|
-|:-|:--|
-| 1. | The user makes a request to access the on-premises or cloud-hosted application. DAB proxies the request made by the user to the application.|
-| 2. | DAB checks the user's authentication state. If it doesn't receive a session token, or the supplied session token is invalid, it sends the user to Azure AD for authentication.|
-| 3. | Azure AD sends the user request to the endpoint specified during the DAB application's registration in the Azure AD tenant.|
-| 4. | DAB evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the application. During this step, DAB may call out to the identity provider to retrieve the information needed to set the header values correctly. DAB sets the header values and sends the request to the application. |
-| 5. | The user is authenticated and has access to the application.|
+1. The user requests access to the on-premises or cloud-hosted application. DAP proxies the request to the application.
+2. DAP checks user authentication state. If there's no session token, or the session token is invalid, DAP sends the user request to Azure AD for authentication.
+3. Azure AD sends the user request to the endpoint specified during DAP registration in the Azure AD tenant.
+4. DAP evaluates policies and attribute values to be included in HTTP headers forwarded to the application. DAP might call out to the identity provider to retrieve the information to set the header values correctly. DAP sets the header values and sends the request to the application.
+5. The user is authenticated and is granted access.
## Prerequisites To get started, you need: -- An Azure subscription. If you don\'t have a subscription, you can get a [trial account](https://azure.microsoft.com/free/).--- An [Azure AD tenant](../fundamentals/active-directory-access-create-new-tenant.md)
-that's linked to your Azure subscription.
--- [Docker](https://docs.docker.com/get-docker/) and [docker-compose](https://docs.docker.com/compose/install/), which are required to run DAB. Your applications can run on any platform, such as a virtual machine and bare metal.--- An on-premises or cloud-hosted application that you'll transition from a legacy identity system to Azure AD. In this example, DAB is deployed on the same server as the application. The application runs on localhost: 3001, and DAB proxies traffic to the application via localhost: 9772. The traffic to the application reaches DAB first and is then proxied to the application.
+* An Azure subscription
+ * If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+* An [Azure AD tenant](../fundamentals/active-directory-access-create-new-tenant.md) linked to the Azure subscription
+* [Docker](https://docs.docker.com/get-docker/) and [docker-compose](https://docs.docker.com/compose/install/) are required to run DAP
+  * Your applications can run on platforms such as a virtual machine (VM) or bare metal
+* An on-premises or cloud-hosted application to transition from a legacy identity system to Azure AD
+ * In this example, DAP is deployed on the same server as the application
+ * The application runs on localhost: 3001. DAP proxies traffic to the application via localhost: 9772
+ * The traffic to the application reaches DAP, and is proxied to the application
## Configure Datawiza Cloud Management Console 1. Sign in to [Datawiza Cloud Management Console](https://console.datawiza.com/) (DCMC).
+2. Create an application on DCMC and generate a key pair for the app: `PROVISIONING_KEY` and `PROVISIONING_SECRET`.
+3. To create the app and generate the key pair, follow the instructions in [Datawiza Cloud Management Console](https://docs.datawiza.com/step-by-step/step2.html).
+4. Register your application in Azure AD with [One Click Integration With Azure AD](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html).
-2. Create an application on DCMC and generate a key pair for the app. The key pair consists of a `PROVISIONING_KEY` and `PROVISIONING_SECRET`. To create the app and generate the key pair, follow the instructions in [Datawiza Cloud Management Console](https://docs.datawiza.com/step-by-step/step2.html).
+ ![Screenshot of the Automatic Generator feature on the Configure IdP dialog.](./media/datawiza-with-azure-active-directory/configure-idp.png)
-3. Register your application in Azure AD by using Datawiza's convenient [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html).
+5. To use a web application, manually populate form fields: **Tenant ID**, **Client ID**, and **Client Secret**.
-![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned on.](./media/datawiza-with-azure-active-directory/configure-idp.png)
+   Learn more: To create a web application and obtain these values, see the Datawiza [Microsoft Azure AD](https://docs.datawiza.com/idp/azure.html) documentation.
-To use an existing web application, you can manually populate the fields of the form. You'll need the tenant ID, client ID, and client secret. For more information about creating a web application and getting these values, see [Microsoft Azure AD in the Datawiza documentation](https://docs.datawiza.com/idp/azure.html).
+ ![Screenshot of the Configure IdP dialog with the Automatic Generator turned off.](./media/datawiza-with-azure-active-directory/use-form.png)
-![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned off.](./media/datawiza-with-azure-active-directory/use-form.png)
+6. Run DAP using either Docker or Kubernetes. The docker image is needed to create a sample header-based application.
-4. Run DAB using either Docker or Kubernetes. The docker image is needed to create a sample header-based application.
-
- - For Docker-specific instructions, see [Deploy Datawiza Access Broker With Your App](https://docs.datawiza.com/step-by-step/step3.html).
- - For Kubernetes-specific instructions, see [Deploy Datawiza Access Broker with a Web App using Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html).
-
- You can use the following sample docker image docker-compose.yml file:
+ - For Kubernetes, see [Deploy Datawiza Access Proxy with a Web App using Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html)
+ - For Docker, see [Deploy Datawiza Access Proxy With Your App](https://docs.datawiza.com/step-by-step/step3.html)
+   - You can use the following sample docker-compose.yml file:
```yaml
To use an existing web application, you can manually populate the fields of the
- "3001:3001" ```
-5. Sign in to the container registry and download the images of DAB and the header-based application by following the instructions in this [Important Step](https://docs.datawiza.com/step-by-step/step3.html#important-step).
-
-6. Run the following command:
-
- `docker-compose -f docker-compose.yml up`
+7. Sign in to the container registry.
+8. Download the DAP images and the header-based application in this [Important Step](https://docs.datawiza.com/step-by-step/step3.html#important-step).
+9. Run the following command: `docker-compose -f docker-compose.yml up`.
+10. The header-based application has SSO enabled with Azure AD.
+11. In a browser, go to `http://localhost:9772/`.
+12. An Azure AD sign-in page appears.
+13. Pass user attributes to the header-based application. DAP gets user attributes from Azure AD and passes attributes to the application via a header or cookie.
+14. To pass user attributes such as email address, first name, and last name to the header-based application, see [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html).
+15. To confirm configured user attributes, observe a green check mark next to each attribute.
- The header-based application should now have SSO enabled with Azure AD.
-
-7. In a browser, go to `http://localhost:9772/`. An Azure AD sign-in page appears.
-
-8. Pass user attributes to the header-based application. DAB gets user attributes from Azure AD and can pass these attributes to the application via a header or cookie. To pass user attributes such as an email address, a first name, and a last name to the header-based application, follow the instructions in [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html).
-
-9. Confirm you have successfully configured user attributes by observing a green check mark next to each attribute.
-
-![Screenshot that shows the Datawiza application home page. Green check marks are visible next to the host, email, firstname, and lastname attributes.](./media/datawiza-with-azure-active-directory/datawiza-application-home-page.png)
+ ![Screenshot of the home page with host, email, firstname, and lastname attributes.](./media/datawiza-with-azure-active-directory/datawiza-application-home-page.png)
## Test the flow
-1. Go to the application URL. DAB should redirect you to the Azure AD sign-in page.
-
-2. After successfully authenticating, you should be redirected to DAB.
-
-DAB evaluates policies, calculates headers, and sends you to the upstream application. Your requested application should appear.
+1. Go to the application URL.
+2. DAP redirects you to the Azure AD sign-in page.
+3. After authentication, you're redirected to DAP.
+4. DAP evaluates policies, calculates headers, and sends you to the application.
+5. The requested application appears.
## Next steps -- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)--- [Configure Azure AD Multi-Factor Authentication and SSO for Oracle JDE applications using DAB](datawiza-azure-ad-sso-oracle-jde.md)--- [Configure Azure AD Multi-Factor Authentication and SSO for Oracle PeopleSoft applications using DAB](datawiza-azure-ad-sso-oracle-peoplesoft.md)--- [Datawiza documentation](https://docs.datawiza.com)
+* [Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access](../../active-directory-b2c/partner-datawiza.md)
+* [Tutorial: Configure Datawiza to enable Azure AD MFA and SSO to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)
+* [Tutorial: Configure Datawiza to enable Azure AD MFA and SSO to Oracle PeopleSoft](datawiza-azure-ad-sso-oracle-peoplesoft.md)
+* See the Datawiza [User Guides](https://docs.datawiza.com)
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
+
+ Title: Configure cross-tenant synchronization using Microsoft Graph API (preview)
+description: Learn how to configure cross-tenant synchronization in Azure Active Directory using Microsoft Graph API.
+++++++ Last updated : 01/23/2023+++
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# Configure cross-tenant synchronization using Microsoft Graph API (preview)
+
+> [!IMPORTANT]
+> Cross-tenant synchronization is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article describes the key steps to configure cross-tenant synchronization using Microsoft Graph API. When configured, Azure AD automatically provisions and de-provisions B2B users in your target tenant. For detailed steps using the Azure portal, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
++
+## Prerequisites
+
+- A source [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license
+- A target [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license
+- An account in the source tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant provisioning
+- An account in the target tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure the cross-tenant synchronization policy
+
+## Step 1: Sign in to the target tenant and consent to permissions
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+These steps describe how to use Microsoft Graph Explorer (recommended), but you can also use Postman or another REST API client.
+
+1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge).
+
+1. Sign in to the target tenant.
+
+1. Select **Modify permissions**.
+
+1. Consent to the following required permissions:
+
+ - `Policy.Read.All`
+ - `Policy.ReadWrite.CrossTenantAccess`
+
+## Step 2: Enable user synchronization in the target tenant
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+1. Use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the target tenant and the source tenant.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners
+ Content-Type: application/json
+
+ {
+    "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a"
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-Type: application/json
+
+ {
+ "tenantId": "3d0f5dec-5d3d-455c-8016-e2af1ae4d31a",
+ "isServiceProvider": null,
+ "inboundTrust": null,
+ "b2bCollaborationOutbound": null,
+ "b2bCollaborationInbound": null,
+ "b2bDirectConnectOutbound": null,
+ "b2bDirectConnectInbound": null,
+ "tenantRestrictions": null,
+ "crossCloudMeetingConfiguration":
+ {
+ "inboundAllowed": null,
+ "outboundAllowed": null
+ },
+ "automaticUserConsentSettings":
+ {
+ "inboundAllowed": null,
+ "outboundAllowed": null
+ }
+ }
+ ```
+
+1. Use the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API to enable user synchronization in the target tenant.
+
+ **Request**
+
+ ```http
+ PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a/identitySynchronization
+ Content-type: application/json
+
+ {
+ "userSyncInbound":
+ {
+ "isSyncAllowed": true
+ }
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 3: Automatically redeem invitations in the target tenant
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+1. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for inbound access.
+
+ **Request**
+
+ ```http
+ PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/3d0f5dec-5d3d-455c-8016-e2af1ae4d31a
+ Content-Type: application/json
+
+ {
+ "inboundTrust": null,
+ "automaticUserConsentSettings":
+ {
+ "inboundAllowed": true
+ }
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 4: Automatically redeem invitations in the source tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Sign in to the source tenant.
+
+2. Use the [Create crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicy-post-partners?view=graph-rest-beta&preserve-view=true) API to create a new partner configuration in a cross-tenant access policy between the source tenant and the target tenant.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners
+ Content-Type: application/json
+
+ {
+    "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-Type: application/json
+
+ {
+ "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "isServiceProvider": null,
+ "inboundTrust": null,
+ "b2bCollaborationOutbound": null,
+ "b2bCollaborationInbound": null,
+ "b2bDirectConnectOutbound": null,
+ "b2bDirectConnectInbound": null,
+ "tenantRestrictions": null,
+ "crossCloudMeetingConfiguration":
+ {
+ "inboundAllowed": null,
+ "outboundAllowed": null
+ },
+ "automaticUserConsentSettings":
+ {
+ "inboundAllowed": null,
+ "outboundAllowed": null
+ }
+ }
+ ```
+
+3. Use the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API to automatically redeem invitations and suppress consent prompts for outbound access.
+
+ **Request**
+
+ ```http
+ PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/partners/376a1f89-b02f-4a85-8252-2974d1984d14
+ Content-Type: application/json
+
+ {
+ "automaticUserConsentSettings":
+ {
+ "outboundAllowed": true
+ }
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 5: Create a configuration application in the source tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. In the source tenant, use the [applicationTemplate: instantiate](/graph/api/applicationtemplate-instantiate?view=graph-rest-beta&preserve-view=true) API to add an instance of a configuration application from the Azure AD application gallery into your tenant.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/applicationTemplates/518e5f48-1fc8-4c48-9387-9fdf28b0dfe7/instantiate
+ Content-type: application/json
+
+ {
+ "displayName": "Fabrikam"
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-type: application/json
+
+ {
+ "application": {
+ "objectId": "{objectId}",
+ "appId": "{appId}",
+ "applicationTemplateId": "518e5f48-1fc8-4c48-9387-9fdf28b0dfe7",
+ "displayName": "Fabrikam",
+ "homepage": "{homepage}",
+ "identifierUris": [],
+ "publicClient": null,
+ "replyUrls": [],
+ "logoutUrl": null,
+ "samlMetadataUrl": null,
+ "errorUrl": null,
+ "groupMembershipClaims": null,
+ "availableToOtherTenants": false,
+ "requiredResourceAccess": []
+ },
+ "servicePrincipal": {
+ "objectId": "{objectId}",
+ "deletionTimestamp": null,
+ "accountEnabled": true,
+ "appId": "{appId}",
+ "appDisplayName": "Fabrikam",
+ "applicationTemplateId": "518e5f48-1fc8-4c48-9387-9fdf28b0dfe7",
+ "appOwnerTenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "appRoleAssignmentRequired": true,
+ "displayName": "Fabrikam",
+ "errorUrl": null,
+ "loginUrl": null,
+ "logoutUrl": null,
+ "homepage": "{homepage}",
+ "samlMetadataUrl": null,
+ "microsoftFirstParty": null,
+ "publisherName": "{tenantDisplayName}",
+ "preferredSingleSignOnMode": null,
+ "preferredTokenSigningKeyThumbprint": null,
+ "preferredTokenSigningKeyEndDateTime": null,
+ "replyUrls": [],
+ "servicePrincipalNames": [
+ "{appId}"
+ ],
+ "tags": [
+ "WindowsAzureActiveDirectoryIntegratedApp"
+ ],
+ "notificationEmailAddresses": [],
+ "samlSingleSignOnSettings": null,
+ "keyCredentials": [],
+ "passwordCredentials": []
+ }
+ }
+ ```
+
+1. Save the service principal object ID.
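+
+   If you didn't capture the object ID from the instantiate response, a sketch of looking it up by the display name used above ("Fabrikam" is the example name from this article) is:
+
+   ```http
+   GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq 'Fabrikam'&$select=id,displayName
+   ```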
+
+## Step 6: Test the connection to the target tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Get the service principal object ID from the previous step.
+
+ Be sure to use the service principal object ID instead of the application ID.
+
+2. In the source tenant, use the [synchronizationJob: validateCredentials](/graph/api/synchronization-synchronizationjob-validatecredentials?view=graph-rest-beta&preserve-view=true) API to test the connection to the target tenant and validate the credentials.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/validateCredentials
+ Content-Type: application/json
+
+ {
+ "useSavedCredentials": false,
+ "templateId": "Azure2Azure",
+ "credentials": [
+ {
+ "key": "CompanyId",
+ "value": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ },
+ {
+ "key": "AuthenticationType",
+ "value": "SyncPolicy"
+ }
+ ]
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 7: Assign a user to the configuration
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+For cross-tenant synchronization to work, at least one internal user must be assigned to the configuration.
+
+1. In the source tenant, use the [Grant an appRoleAssignment for a service principal](/graph/api/serviceprincipal-post-approleassignedto) API to assign an internal user to the configuration.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/appRoleAssignedTo
+ Content-type: application/json
+
+ {
+ "appRoleId": "{appRoleId}",
+ "resourceId": "{servicePrincipalId}",
+ "principalId": "{principalId}"
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-Type: application/json
+ {
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#servicePrincipals('{servicePrincipalId}')/appRoleAssignedTo/$entity",
+ "id": "{keyId}",
+ "deletedDateTime": null,
+ "appRoleId": "{appRoleId}",
+ "createdDateTime": "2022-11-27T22:23:48.6541804Z",
+ "principalDisplayName": "User1",
+ "principalId": "{principalId}",
+ "principalType": "User",
+ "resourceDisplayName": "Fabrikam",
+ "resourceId": "{servicePrincipalId}"
+ }
+ ```
+
+## Step 8: Create a provisioning job in the source tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+In the source tenant, to enable provisioning, create a provisioning job.
+
+1. Determine the [synchronization template](/graph/api/resources/synchronization-synchronizationtemplate?view=graph-rest-beta&preserve-view=true) to use, such as `Azure2Azure`.
+
+ A template has pre-configured synchronization settings.
+
+1. In the source tenant, use the [Create synchronizationJob](/graph/api/synchronization-synchronizationjob-post?view=graph-rest-beta&preserve-view=true) API to create a provisioning job based on a template.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs
+ Content-type: application/json
+
+ {
+ "templateId": "Azure2Azure"
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 201 Created
+ Content-type: application/json
+
+ {
+ "id": "{jobId}",
+ "templateId": "Azure2Azure",
+ "schedule": {
+ "expiration": null,
+ "interval": "PT40M",
+ "state": "Disabled"
+ },
+ "status": {
+ "countSuccessiveCompleteFailures": 0,
+ "escrowsPruned": false,
+ "code": "Paused",
+ "lastExecution": null,
+ "lastSuccessfulExecution": null,
+ "lastSuccessfulExecutionWithExports": null,
+ "quarantine": null,
+ "steadyStateFirstAchievedTime": "0001-01-01T00:00:00Z",
+ "steadyStateLastAchievedTime": "0001-01-01T00:00:00Z",
+ "troubleshootingUrl": null,
+ "progress": [],
+ "synchronizedEntryCountByType": []
+ },
+ "synchronizationJobSettings": [
+ {
+ "name": "AzureIngestionAttributeOptimization",
+ "value": "False"
+ },
+ {
+ "name": "LookaheadQueryEnabled",
+ "value": "False"
+ }
+ ]
+ }
+ ```
+
+## Step 9: Save your credentials
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Use the [synchronization: secrets](/graph/api/synchronization-synchronization-secrets?view=graph-rest-beta&preserve-view=true) API to save your credentials.
+
+ **Request**
+
+ ```http
+ PUT https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/secrets
+ Content-Type: application/json
+
+ {
+ "value": [
+ {
+ "key": "CompanyId",
+ "value": "376a1f89-b02f-4a85-8252-2974d1984d14"
+ },
+ {
+ "key": "AuthenticationType",
+ "value": "SyncPolicy"
+ },
+ {
+ "key": "SyncNotificationSettings",
+ "value": "{\"Enabled\":false,\"DeleteThresholdEnabled\":false,\"HumanResourcesLookaheadQueryEnabled\":false}"
+ },
+ {
+ "key": "SyncAll",
+ "value": "false"
+ }
+ ]
+ }
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 10: Test provision on demand
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+Now that you have a configuration, you can test on-demand provisioning with one of your users.
+
+1. Use the [synchronizationJob: provisionOnDemand](/graph/api/synchronization-synchronizationjob-provision-on-demand?view=graph-rest-beta&preserve-view=true) API to provision a test user on demand.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/provisionOnDemand
+ Content-Type: application/json
+
+ {
+ "parameters": [
+ {
+ "ruleId": "{ruleId}",
+ "subjects": [
+ {
+ "objectId": "{userObjectId}",
+ "objectTypeName": "User"
+ }
+ ]
+ }
+ ]
+ }
+ ```
+
+## Step 11: Start the provisioning job
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Now that the provisioning job is configured, use the [Start synchronizationJob](/graph/api/synchronization-synchronizationjob-start?view=graph-rest-beta&preserve-view=true) API to start the provisioning job.
+
+ **Request**
+
+ ```http
+ POST https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/start
+ ```
+
+
+ **Response**
+
+ ```http
+ HTTP/1.1 204 No Content
+ ```
+
+## Step 12: Monitor provisioning
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Now that the provisioning job is running, use the [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?view=graph-rest-beta&preserve-view=true) API to monitor the progress of the current provisioning cycle as well as statistics to date such as the number of users and groups that have been created in the target system.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}
+ ```
+
+ **Response**
+
+ ```http
+ HTTP/1.1 200 OK
+ Content-type: application/json
+
+ {
+ "id": "{jobId}",
+ "templateId": "Azure2Azure",
+ "schedule": {
+ "expiration": null,
+ "interval": "PT40M",
+ "state": "Active"
+ },
+ "status": {
+ "countSuccessiveCompleteFailures": 0,
+ "escrowsPruned": false,
+ "code": "NotRun",
+ "lastSuccessfulExecution": null,
+ "lastSuccessfulExecutionWithExports": null,
+ "quarantine": null,
+ "steadyStateFirstAchievedTime": "0001-01-01T00:00:00Z",
+ "steadyStateLastAchievedTime": "0001-01-01T00:00:00Z",
+ "troubleshootingUrl": "",
+ "lastExecution": {
+ "activityIdentifier": null,
+ "countEntitled": 0,
+ "countEntitledForProvisioning": 0,
+ "countEscrowed": 0,
+ "countEscrowedRaw": 0,
+ "countExported": 0,
+ "countExports": 0,
+ "countImported": 0,
+ "countImportedDeltas": 0,
+ "countImportedReferenceDeltas": 0,
+ "state": "Failed",
+ "timeBegan": "0001-01-01T00:00:00Z",
+ "timeEnded": "0001-01-01T00:00:00Z",
+ "error": {
+ "code": "None",
+ "message": "",
+ "tenantActionable": false
+ }
+ },
+ "progress": [],
+ "synchronizedEntryCountByType": []
+ },
+ "synchronizationJobSettings": [
+ {
+ "name": "AzureIngestionAttributeOptimization",
+ "value": "False"
+ },
+ {
+ "name": "LookaheadQueryEnabled",
+ "value": "False"
+ }
+ ]
+ }
+ ```
+
+1. In addition to monitoring the status of the provisioning job, use the [List provisioningObjectSummary](/graph/api/provisioningobjectsummary-list) API to retrieve the provisioning logs and get all the provisioning events that occur. For example, query for a particular user and determine if they were successfully provisioned.
+
+ **Request**
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/auditLogs/provisioning?$filter=((contains(tolower(servicePrincipal/id), '{servicePrincipalId}') or contains(tolower(servicePrincipal/displayName), '{servicePrincipalId}')) and activityDateTime gt 2022-12-10 and activityDateTime lt 2022-12-11)&$top=500&$orderby=activityDateTime desc
+ ```
+
+ **Response**
+
+ The response object shown here has been shortened for readability.
+
+ ```http
+ HTTP/1.1 200 OK
+ Content-type: application/json
+
+ {
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#auditLogs/provisioning",
+ "value": [
+ {
+ "id": "{id}",
+ "activityDateTime": "2022-12-11T00:40:37Z",
+ "tenantId": "376a1f89-b02f-4a85-8252-2974d1984d14",
+ "jobId": "{jobId}",
+ "cycleId": "{cycleId}",
+ "changeId": "{changeId}",
+ "provisioningAction": "create",
+ "durationInMilliseconds": 4375,
+ "servicePrincipal": {
+ "id": "{servicePrincipalId}",
+ "displayName": "Fabrikam"
+ },
+ "sourceSystem": {
+ "id": "{id}",
+ "displayName": "Azure Active Directory",
+ "details": {}
+ },
+ "targetSystem": {
+ "id": "{id}",
+ "displayName": "Azure Active Directory (target tenant)",
+ "details": {
+ "ApplicationId": "{applicationId}",
+ "ServicePrincipalId": "{servicePrincipalId}",
+ "ServicePrincipalDisplayName": "Fabrikam"
+ }
+ },
+ "initiatedBy": {
+ "id": "",
+ "displayName": "Azure AD Provisioning Service",
+ "initiatorType": "system"
+ },
+ "sourceIdentity": {
+ "id": "{sourceUserObjectId}",
+ "displayName": "User4",
+ "identityType": "User",
+ "details": {
+ "id": "{sourceUserObjectId}",
+ "odatatype": "User",
+ "DisplayName": "User4",
+ "UserPrincipalName": "user4@fabrikam.com"
+ }
+ },
+ "targetIdentity": {
+ "id": "{targetUserObjectId}",
+ "displayName": "",
+ "identityType": "User",
+ "details": {}
+ },
+ "provisioningStatusInfo": {
+ "status": "success",
+ "errorInformation": null
+ },
+ "provisioningSteps": [
+ {
+ "name": "EntryImportAdd",
+ "provisioningStepType": "import",
+ "status": "success",
+ "description": "Received User 'user4@fabrikam.com' change of type (Add) from Azure Active Directory",
+ "details": {
+ "objectId": "{sourceUserObjectId}",
+ "accountEnabled": "True",
+ "department": "Marketing",
+ "displayName": "User4",
+ "mailNickname": "user4",
+ "userPrincipalName": "user4@fabrikam.com",
+ "netId": "{netId}",
+ "showInAddressList": "",
+ "alternativeSecurityIds": "None",
+ "IsSoftDeleted": "False",
+ "appRoleAssignments": "msiam_access"
+ }
+ },
+ {
+ "name": "EntrySynchronizationScoping",
+ "provisioningStepType": "scoping",
+ "status": "success",
+ "description": "Determine if User in scope by evaluating against each scoping filter",
+ "details": {
+ "Active in the source system": "True",
+ "Assigned to the application": "True",
+ "User has the required role": "True",
+ "Scoping filter evaluation passed": "True",
+ "ScopeEvaluationResult": "{\"Marketing department filter.department EQUALS 'Marketing'\":true}"
+ }
+ },
+
+ ...
+
+ }
+ ]
+ }
+ ```
+
+## Troubleshooting tips
+
+#### Symptom - Insufficient privileges error
+
+When you try to perform an action, you receive an error message similar to the following:
+
+```
+code: Authorization_RequestDenied
+message: Insufficient privileges to complete the operation.
+```
+
+**Cause**
+
+Either the signed-in user doesn't have sufficient privileges, or you need to consent to one of the required permissions.
+
+**Solution**
+
+1. Make sure you're assigned the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role or another Azure AD role with sufficient privileges.
+
+2. In [Microsoft Graph Explorer tool](https://aka.ms/ge), make sure you consent to the required permissions:
+
+ - `Policy.Read.All`
+ - `Policy.ReadWrite.CrossTenantAccess`
+
+## Next steps
+
+- [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?view=graph-rest-beta&preserve-view=true)
+- [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](../app-provisioning/use-scim-to-provision-users-and-groups.md)
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
+
+ Title: Configure cross-tenant synchronization (preview)
+description: Learn how to configure cross-tenant synchronization in Azure Active Directory using the Azure portal.
+++++++ Last updated : 01/23/2023+++
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# Configure cross-tenant synchronization (preview)
+
+> [!IMPORTANT]
+> Cross-tenant synchronization is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+This article describes the steps to configure cross-tenant synchronization using the Azure portal. When configured, Azure AD automatically provisions and de-provisions B2B users in your target tenant. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Learning objectives
+
+By the end of this article, you'll be able to:
+
+- Create B2B users in your target tenant
+- Remove B2B users in your target tenant
+- Keep user attributes synchronized between your source and target tenants
+
+## Prerequisites
+
+- A source [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license
+- A target [Azure AD tenant](../develop/quickstart-create-new-tenant.md) with a Premium P1 or P2 license
+- An account in the source tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant provisioning
+- An account in the target tenant with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure the cross-tenant synchronization policy
+
+## Step 1: Plan your provisioning deployment
+
+1. Define how you would like to [structure the tenants in your organization](cross-tenant-synchronization-topology.md).
+
+1. Learn about [how the provisioning service works](../app-provisioning/how-provisioning-works.md).
+
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization).
+
+1. Determine what data to [map between tenants](../app-provisioning/customize-application-attributes.md).
+
+## Step 2: Enable user synchronization in the target tenant
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator in the target tenant.
+
+1. Select **Azure Active Directory** > **External Identities**.
+
+1. Select **Cross-tenant access settings**.
+
+1. On the **Organization settings** tab, select **Add organization**.
+
+1. Add the source tenant by typing the tenant ID or domain name and selecting **Add**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/access-settings-organization-add.png" alt-text="Screenshot that shows the Add organization pane to add the source tenant." lightbox="./media/cross-tenant-synchronization-configure/access-settings-organization-add.png":::
+
+1. Under **Inbound access** of the added organization, select **Inherited from default**.
+
+1. Select the **Cross-tenant sync (Preview)** tab.
+
+1. Check the **Allow users sync into this tenant** check box.
+
+ :::image type="content" source="../media/external-identities/access-settings-users-sync.png" alt-text="Screenshot that shows the Cross-tenant sync tab with the Allow users sync into this tenant check box." lightbox="../media/external-identities/access-settings-users-sync.png":::
+
+1. Select **Save**.
+
+## Step 3: Automatically redeem invitations in the target tenant
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+In this step, you automatically redeem invitations so users from the source tenant don't have to accept the consent prompt. This setting must be checked in both the source tenant (outbound) and target tenant (inbound). For more information, see [Automatic redemption setting](cross-tenant-synchronization-overview.md#automatic-redemption-setting).
+
+1. Select the **Trust settings** tab.
+
+1. Check the **Suppress consent prompts for users from the other tenant when they access apps and resources in my tenant** check box.
+
+ :::image type="content" source="../media/external-identities/inbound-consent-prompt-setting.png" alt-text="Screenshot that shows the inbound suppress consent prompt check box." lightbox="../media/external-identities/inbound-consent-prompt-setting.png":::
+
+1. Select **Save**.
+
+## Step 4: Automatically redeem invitations in the source tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+In this step, you automatically redeem invitations in the source tenant.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator in the source tenant.
+
+1. Select **Azure Active Directory** > **External Identities**.
+
+1. Select **Cross-tenant access settings**.
+
+1. On the **Organization settings** tab, select **Add organization**.
+
+1. Add the target tenant by typing the tenant ID or domain name and selecting **Add**.
+
+    :::image type="content" source="./media/cross-tenant-synchronization-configure/access-settings-organization-add.png" alt-text="Screenshot that shows the Add organization pane to add the target tenant." lightbox="./media/cross-tenant-synchronization-configure/access-settings-organization-add.png":::
+
+1. Under **Outbound access** for the target organization, select **Inherited from default**.
+
+1. Select the **Trust settings** tab.
+
+1. Check the **Suppress consent prompts for users from my tenant when they access apps and resources in the other tenant** check box.
+
+ :::image type="content" source="../media/external-identities/outbound-consent-prompt-setting.png" alt-text="Screenshot that shows the outbound suppress consent prompt check box." lightbox="../media/external-identities/outbound-consent-prompt-setting.png":::
+
+1. Select **Save**.
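+
+Both trust settings (inbound in the target tenant and outbound in the source tenant) can also be set programmatically through the Microsoft Graph beta API ([Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true)). The Python sketch below is a hedged example: the token acquisition, the `Policy.ReadWrite.CrossTenantAccess` permission, and the beta `automaticUserConsentSettings` property names are assumptions based on the current beta API and may change.
+
+```python
+import requests
+
+GRAPH_BETA = "https://graph.microsoft.com/beta"
+
+
+def suppress_consent_prompts(access_token: str, partner_tenant_id: str, *, inbound: bool = False, outbound: bool = False) -> None:
+    """Set automatic redemption (suppress consent prompts) for a partner tenant.
+
+    Call with inbound=True in the target tenant (partner = source tenant) and
+    with outbound=True in the source tenant (partner = target tenant).
+    """
+    settings = {}
+    if inbound:
+        settings["inboundAllowed"] = True
+    if outbound:
+        settings["outboundAllowed"] = True
+
+    response = requests.patch(
+        f"{GRAPH_BETA}/policies/crossTenantAccessPolicy/partners/{partner_tenant_id}",
+        headers={"Authorization": f"Bearer {access_token}", "Content-Type": "application/json"},
+        json={"automaticUserConsentSettings": settings},
+    )
+    response.raise_for_status()
+
+
+# In the target tenant, allow inbound automatic redemption for the source tenant:
+# suppress_consent_prompts(target_tenant_token, "<source tenant ID>", inbound=True)
+
+# In the source tenant, allow outbound automatic redemption for the target tenant:
+# suppress_consent_prompts(source_tenant_token, "<target tenant ID>", outbound=True)
+```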
+
+## Step 5: Create a configuration application in the source tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/azure-ad-overview.png" alt-text="Screenshot that shows the Azure Active Directory Overview page." lightbox="./media/cross-tenant-synchronization-configure/azure-ad-overview.png":::
+
+1. Select **Configurations**.
+
+1. At the top of the page, select **New configuration**.
+
+1. Provide a name for the configuration and select **Create**.
+
+ It can take up to 15 seconds for the configuration that you just created to appear in the list.
+
+1. Select **Refresh** to retrieve the latest list of configurations.
+
+## Step 6: Test the connection to the target tenant
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. In the configuration list, select your configuration.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/configuration-select.png" alt-text="Screenshot that shows the Cross-tenant synchronization Configurations page and a new configuration." lightbox="./media/cross-tenant-synchronization-configure/configuration-select.png":::
+
+1. Select **Get started**.
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+1. Under the **Admin Credentials** section, change the **Authentication Method** to **Cross Tenant Synchronization Policy**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-policy.png" alt-text="Screenshot that shows the Provisioning page with the Cross Tenant Synchronization Policy selected." lightbox="./media/cross-tenant-synchronization-configure/provisioning-policy.png":::
+
+1. In the **Tenant Id** box, enter the tenant ID of the target tenant.
+
+1. Select **Test Connection** to test the connection.
+
+ You should see a message that the supplied credentials are authorized to enable provisioning. If the test connection fails, see [Troubleshooting tips](#troubleshooting-tips) later in this article.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-test-connection-success.png" alt-text="Screenshot that shows a testing connection notification." lightbox="./media/cross-tenant-synchronization-configure/provisioning-test-connection-success.png":::
+
+1. Select **Save**.
+
+    The **Mappings** and **Settings** sections appear.
+
+1. Close the **Provisioning** page.
+
+## Step 7: Define who is in scope for provisioning
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+The Azure AD provisioning service allows you to define who will be provisioned in one or both of the following ways:
+
+- Based on assignment to the configuration
+- Based on attributes of the user
+
+Start small. Test with a small set of users before rolling out to everyone. When the scope for provisioning is set to assigned users and groups, you can control it by assigning one or two users to the configuration. You can further refine who is in scope for provisioning by creating attribute-based scoping filters, described in the [next step](#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters).
+
+1. Select **Provisioning** and expand the **Settings** section.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-settings-edit.png" alt-text="Screenshot of the Provisioning page that shows the Settings section with the Scope and Provisioning Status options." lightbox="./media/cross-tenant-synchronization-configure/provisioning-settings-edit.png":::
+
+1. In the **Scope** list, select whether to synchronize all users in the source tenant or only users assigned to the configuration.
+
+ It's recommended that you select **Sync only assigned users and groups** instead of **Sync all users and groups**. Reducing the number of users in scope improves performance.
+
+1. Select **Save**.
++
+1. On the configuration page, select **Users and groups**.
+
+ For cross-tenant synchronization to work, at least one internal user must be assigned to the configuration.
+
+1. Select **Add user/group**.
+
+1. On the **Add Assignment** page, under **Users and groups**, select **None Selected**.
+
+1. On the **Users and groups** pane, search for and select one or more internal users or groups you want to assign to the configuration.
+
+ If you select a group to assign to the configuration, only users that are direct members in the group will be in scope for provisioning. You can select a static group or a dynamic group. The assignment doesn't cascade to nested groups.
+
+1. Select **Select**.
+
+1. Select **Assign**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/users-and-groups.png" alt-text="Screenshot that shows the Users and groups page with a user assigned to the configuration." lightbox="./media/cross-tenant-synchronization-configure/users-and-groups.png":::
+
+ For more information, see [Assign users and groups to an application](../manage-apps/assign-user-or-group-access-portal.md).
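+
+Because the configuration is backed by a service principal in the source tenant, assignments can also be created programmatically with the standard Microsoft Graph app role assignment API. The sketch below is an illustration, not the documented procedure for this article: the service principal object ID, the `AppRoleAssignment.ReadWrite.All` permission, and the fallback to the default-access app role are assumptions you should verify for your configuration.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<access token with AppRoleAssignment.ReadWrite.All>"
+headers = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}
+
+SP_OBJECT_ID = "<object ID of the configuration's service principal>"  # assumed to be known
+USER_OBJECT_ID = "<object ID of the internal user to assign>"
+
+# Look up the app roles exposed by the configuration's service principal.
+sp = requests.get(f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}", headers=headers)
+sp.raise_for_status()
+app_roles = sp.json().get("appRoles", [])
+
+# Fall back to the "default access" role if the service principal defines no app roles.
+app_role_id = app_roles[0]["id"] if app_roles else "00000000-0000-0000-0000-000000000000"
+
+# Assign the user to the configuration (equivalent to Add user/group in the portal).
+assignment = {
+    "principalId": USER_OBJECT_ID,
+    "resourceId": SP_OBJECT_ID,
+    "appRoleId": app_role_id,
+}
+response = requests.post(
+    f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}/appRoleAssignedTo",
+    headers=headers,
+    json=assignment,
+)
+response.raise_for_status()
+print("Assignment created:", response.json().get("id"))
+```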
+
+## Step 8: (Optional) Define who is in scope for provisioning with scoping filters
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+Regardless of the value you selected for **Scope** in the previous step, you can further limit which users are synchronized by creating attribute-based scoping filters.
+
+1. Select **Provisioning** and expand the **Mappings** section.
+
+1. Select **Provision Azure Active Directory Users**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-mappings.png" alt-text="Screenshot that shows the Provisioning page with the Mappings section expanded." lightbox="./media/cross-tenant-synchronization-configure/provisioning-mappings.png":::
+
+1. Under **Source Object Scope**, select **All records**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-scope.png" alt-text="Screenshot that shows the Attribute Mapping page with the Source Object Scope." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-scope.png":::
+
+1. On the **Source Object Scope** page, select **Add scoping filter**.
+
+1. Add any scoping filters to define which users are in scope for provisioning.
+
+ To configure scoping filters, refer to the instructions provided in [Scoping users or groups to be provisioned with scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization).
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-scoping-filter.png" alt-text="Screenshot that shows the Add Scoping Filter page with sample filter." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-scoping-filter.png":::
+
+1. Select **Ok** and **Save** to save any changes.
+
+ If you added a filter, you'll see a message that saving your changes will result in all assigned users and groups being resynchronized. This may take a long time depending on the size of your directory.
+
+1. Select **Yes** and close the **Attribute Mapping** page.
+
+## Step 9: Review attribute mappings
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+Attribute mappings allow you to define how data should flow between the source tenant and target tenant. For information on how to customize the default attribute mappings, see [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](../app-provisioning/customize-application-attributes.md).
+
+1. Select **Provisioning** and expand the **Mappings** section.
+
+1. Select **Provision Azure Active Directory Users**.
+
+1. On the **Attribute Mapping** page, scroll down to review the user attributes that are synchronized between tenants in the **Attribute Mappings** section.
+
+ The attributes selected as **Matching** properties are used to match the user accounts between tenants and avoid creating duplicates.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping.png" alt-text="Screenshot of the Attribute Mapping page that shows the list of Azure Active Directory attributes." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping.png":::
+
+1. Select the **Member (userType)** attribute.
+
+1. Review the **Constant Value** setting for the **userType** attribute.
+
+    This setting defines the type of user that will be created in the target tenant and can be one of the values in the following table. By default, users will be created as external members (B2B collaboration users). For more information, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
+
+ | Constant Value | Description |
+    | --- | --- |
+    | **Member** | Default. Users will be created as external members (B2B collaboration users) in the target tenant. Users will be able to function as any internal member of the target tenant. |
+ | **Guest** | Users will be created as external guests (B2B collaboration users) in the target tenant. |
+
+    The user type you choose has the following limitations for apps and services (among others):
+
+ [!INCLUDE [user-type-workload-limitations-include](../includes/user-type-workload-limitations-include.md)]
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-member.png" alt-text="Screenshot of the Edit Attribute page that shows the Member attribute." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-member.png":::
+
+1. On the **Attribute Mapping** page, select the **showInAddressList** attribute.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-showinaddresslist.png" alt-text="Screenshot of the Edit Attribute page that shows the showInAddressList attribute." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-showinaddresslist.png":::
+
+ If you want the synchronized users to appear in the global address list of the target tenant for people search scenarios, you must set **Mapping type** to **Constant** and **Constant Value** to **True**.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-showinaddresslist-peoplesearch.png" alt-text="Screenshot of the Edit Attribute page that shows the showInAddressList attribute with setting for people search." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-showinaddresslist-peoplesearch.png":::
+
+1. If you want to define any transformations, on the **Attribute Mapping** page, select the attribute you want to transform, such as **displayName**.
+
+1. Set the **Mapping type** to **Expression**.
+
+1. In the **Expression** box, enter the transformation expression. For example, with the display name, you can do the following:
+
+ - Flip the first name and last name and add a comma in between.
+ - Add the domain name in parentheses at the end of the display name.
+
+ For examples, see [Reference for writing expressions for attribute mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json#examples).
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-displayname-expression.png" alt-text="Screenshot of the Edit Attribute page that shows the displayName attribute with the Expression box." lightbox="./media/cross-tenant-synchronization-configure/provisioning-attribute-mapping-displayname-expression.png":::
+
+## Step 10: Specify additional provisioning settings
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+1. Select **Provisioning** and expand the **Settings** section.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-settings-edit.png" alt-text="Screenshot of the Provisioning page that shows the Settings section with the Scope and Provisioning Status options." lightbox="./media/cross-tenant-synchronization-configure/provisioning-settings-edit.png":::
+
+1. Check the **Send an email notification when a failure occurs** check box.
+
+1. In the **Notification Email** box, enter the email address of a person or group who should receive provisioning error notifications.
+
+ Email notifications are sent within 24 hours of the job entering quarantine state. For custom alerts, see [Understand how provisioning integrates with Azure Monitor logs](../app-provisioning/application-provisioning-log-analytics.md).
+
+1. To prevent accidental deletion, select **Prevent accidental deletion** and specify a threshold value.
+
+ For more information, see [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization).
+
+1. Select **Save** to save any changes.
+
+## Step 11: Test provision on demand
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+Now that you have a configuration, you can test on-demand provisioning with one of your users.
+
+1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**.
+
+1. Select **Configurations** and then select your configuration.
+
+1. Select **Provision on demand**.
+
+1. In the **Select a user or group** box, search for and select one of your test users.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provision-on-demand.png" alt-text="Screenshot of the Provision on demand page that shows a test user selected." lightbox="./media/cross-tenant-synchronization-configure/provision-on-demand.png":::
+
+1. Select **Provision**.
+
+ After a few moments, the **Perform action** page appears with information about the provisioning of the test user in the target tenant.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provision-on-demand-provision.png" alt-text="Screenshot of the Perform action page that shows the test user and list of modified attributes." lightbox="./media/cross-tenant-synchronization-configure/provision-on-demand-provision.png":::
+
+    If the user isn't in scope, you'll see a page with information about why the test user was skipped.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provision-on-demand-provision-skipped.png" alt-text="Screenshot of the Determine if user is in scope page that shows information about why test user was skipped." lightbox="./media/cross-tenant-synchronization-configure/provision-on-demand-provision-skipped.png":::
+
+ On the **Provision on demand** page, you can view details about the provision and have the option to retry.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provision-on-demand-provision-details.png" alt-text="Screenshot of the Provision on demand page that shows details about the provision." lightbox="./media/cross-tenant-synchronization-configure/provision-on-demand-provision-details.png":::
+
+1. In the target tenant, verify that the test user was provisioned.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provision-on-demand-users-target.png" alt-text="Screenshot of the Users page of the target tenant that shows the test user provisioned." lightbox="./media/cross-tenant-synchronization-configure/provision-on-demand-users-target.png":::
+
+1. If all is working as expected, assign additional users to the configuration.
+
+ For more information, see [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization).
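+
+On-demand provisioning can also be triggered through the Microsoft Graph synchronization API. The following Python sketch is a rough, hedged equivalent of the portal steps above: it assumes the configuration's service principal object ID is known, that the job exposes a single synchronization rule, and that the token has the `Synchronization.ReadWrite.All` permission; the exact payload shape follows the provisionOnDemand API and may differ for your configuration.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<access token with Synchronization.ReadWrite.All>"
+headers = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}
+
+SP_OBJECT_ID = "<object ID of the configuration's service principal>"
+TEST_USER_OBJECT_ID = "<object ID of the test user to provision>"
+
+# Find the synchronization job that backs the configuration.
+jobs = requests.get(f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}/synchronization/jobs", headers=headers)
+jobs.raise_for_status()
+job_id = jobs.json()["value"][0]["id"]
+
+# The ruleId comes from the job's synchronization schema; a single rule is assumed here.
+schema = requests.get(
+    f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}/synchronization/jobs/{job_id}/schema",
+    headers=headers,
+)
+schema.raise_for_status()
+rule_id = schema.json()["synchronizationRules"][0]["id"]
+
+# Provision a single test user on demand (equivalent to Provision on demand in the portal).
+payload = {
+    "parameters": [
+        {
+            "ruleId": rule_id,
+            "subjects": [{"objectId": TEST_USER_OBJECT_ID, "objectTypeName": "User"}],
+        }
+    ]
+}
+response = requests.post(
+    f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}/synchronization/jobs/{job_id}/provisionOnDemand",
+    headers=headers,
+    json=payload,
+)
+response.raise_for_status()
+print(response.json())
+```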
+
+## Step 12: Start the provisioning job
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
+
+The provisioning job starts the initial synchronization cycle of all users defined in **Scope** of the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**.
+
+1. Select **Configurations** and then select your configuration.
+
+1. On the **Overview** page, review the provisioning details.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/configuration-overview-provisioning.png" alt-text="Screenshot of the Configurations Overview page that lists provisioning details." lightbox="./media/cross-tenant-synchronization-configure/configuration-overview-provisioning.png":::
+
+1. Select **Start provisioning** to start the provisioning job.
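+
+The provisioning job can also be started through the Microsoft Graph synchronization API. The sketch below assumes the configuration's service principal object ID and the synchronization job ID (from a prior call that lists the jobs) are already known, and that the token has the `Synchronization.ReadWrite.All` permission.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<access token with Synchronization.ReadWrite.All>"
+headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
+
+SP_OBJECT_ID = "<object ID of the configuration's service principal>"
+JOB_ID = "<synchronization job ID>"  # from GET .../synchronization/jobs
+
+# Equivalent to selecting Start provisioning in the portal: begins the initial cycle.
+response = requests.post(
+    f"{GRAPH}/servicePrincipals/{SP_OBJECT_ID}/synchronization/jobs/{JOB_ID}/start",
+    headers=headers,
+)
+response.raise_for_status()
+print("Provisioning job started:", response.status_code)
+```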
+
+## Step 13: Monitor provisioning
+
+![Icon for the source tenant.](./media/common/icon-tenant-source.png) ![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Source and target tenants**
+
+Once you've started a provisioning job, you can monitor the status.
+
+1. In the source tenant, on the **Overview** page, check the progress bar to see the status of the provisioning cycle and how close it is to completion. For more information, see [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md).
+
+ If provisioning seems to be in an unhealthy state, the configuration will go into quarantine. For more information, see [Application provisioning in quarantine status](../app-provisioning/application-provisioning-quarantine-status.md).
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-job-start.png" alt-text="Screenshot of the Configurations Overview page that shows the status of the provisioning cycle." lightbox="./media/cross-tenant-synchronization-configure/provisioning-job-start.png":::
+
+1. Select **Provisioning logs** to determine which users have been provisioned successfully or unsuccessfully. By default, the logs are filtered by the service principal ID of the configuration. For more information, see [Provisioning logs in Azure Active Directory](../reports-monitoring/concept-provisioning-logs.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json).
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/provisioning-logs.png" alt-text="Screenshot of the Provisioning logs page that lists the log entries and their status." lightbox="./media/cross-tenant-synchronization-configure/provisioning-logs.png":::
+
+1. Select **Audit logs** to view all logged events in Azure AD. For more information, see [Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md).
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/audit-logs-source.png" alt-text="Screenshot of the Audit logs page that lists the log entries and their status." lightbox="./media/cross-tenant-synchronization-configure/audit-logs-source.png":::
+
+ You can also view audit logs in the target tenant.
+
+1. In the target tenant, select **Users** > **Audit logs** to view logged events for user management.
+
+ :::image type="content" source="./media/cross-tenant-synchronization-configure/audit-logs-users-target.png" alt-text="Screenshot of the Audit logs page in the target tenant that lists the log entries for user management." lightbox="./media/cross-tenant-synchronization-configure/audit-logs-users-target.png":::
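+
+Provisioning logs can also be queried programmatically through the Microsoft Graph `auditLogs/provisioning` endpoint. In the sketch below, the filter on `jobId` and the properties printed are assumptions based on the provisioningObjectSummary resource; adjust them to the fields you actually need.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+ACCESS_TOKEN = "<access token with AuditLog.Read.All>"
+headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
+
+JOB_ID = "<synchronization job ID>"
+
+# Pull recent provisioning events for the configuration's job and summarize the outcomes.
+response = requests.get(
+    f"{GRAPH}/auditLogs/provisioning",
+    headers=headers,
+    params={"$filter": f"jobId eq '{JOB_ID}'", "$top": "50"},
+)
+response.raise_for_status()
+for event in response.json().get("value", []):
+    print(
+        event["activityDateTime"],
+        event["provisioningAction"],
+        event["provisioningStatusInfo"]["status"],
+        event.get("targetIdentity", {}).get("displayName"),
+    )
+```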
+
+## Step 14: Configure leave settings
+
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
+
+Even though users are provisioned in the target tenant, they might still be able to remove themselves from it. If users remove themselves and are still in scope, they'll be provisioned again during the next provisioning cycle. If you want to prevent users from removing themselves from your organization, configure the **External user leave settings**.
+
+1. In the target tenant, select **Azure Active Directory**.
+
+1. Select **External Identities** > **External collaboration settings**.
+
+1. Under **External user leave settings**, choose whether to allow external users to leave your organization themselves.
+
+This setting also applies to B2B collaboration and B2B direct connect, so if you set **External user leave settings** to **No**, B2B collaboration users and B2B direct connect users can't leave your organization themselves. For more information, see [Leave an organization as an external user](../external-identities/leave-the-organization.md#more-information-for-administrators).
+
+## Troubleshooting tips
+
+#### Symptom - Test connection fails with AzureDirectoryB2BManagementPolicyCheckFailure
+
+When you configure cross-tenant synchronization in the source tenant and test the connection, it fails with the following error message:
+
+```
+You appear to have entered invalid credentials. Please confirm you are using the correct information for an administrative account.
+Error code: AzureDirectoryB2BManagementPolicyCheckFailure
+Details: Policy permitting auto-redemption of invitations not configured.
+```
++
+**Cause**
+
+This error indicates the policy to automatically redeem invitations in both the source and target tenants wasn't set up.
+
+**Solution**
+
+Follow the steps in [Step 3: Automatically redeem invitations in the target tenant](#step-3-automatically-redeem-invitations-in-the-target-tenant) and [Step 4: Automatically redeem invitations in the source tenant](#step-4-automatically-redeem-invitations-in-the-source-tenant).
+
+#### Symptom - Suppress consent prompt check box is disabled
+
+When configuring cross-tenant synchronization, the suppress consent prompt check box is disabled.
++
+**Cause**
+
+Your tenant doesn't have an Azure AD Premium P1 or P2 license.
+
+**Solution**
+
+You must have Azure AD Premium P1 or P2 to configure trust settings.
+
+#### Symptom - Recently deleted user in the target tenant is not restored
+
+After you soft delete a synchronized user in the target tenant, the user isn't restored during the next synchronization cycle. If you try to soft delete a user with on-demand provisioning and then restore the user, it can result in duplicate users.
+
+**Cause**
+
+Restoring a previously soft-deleted user in the target tenant isn't supported.
+
+**Solution**
+
+Manually restore the soft-deleted user in the target tenant. For more information, see [Restore or remove a recently deleted user using Azure Active Directory](../fundamentals/active-directory-users-restore.md).
+
+## Next steps
+
+- [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md)
+- [Managing user account provisioning for enterprise apps in the Azure portal](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+- [What is single sign-on in Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Cross Tenant Synchronization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md
+
+ Title: What is cross-tenant synchronization in Azure Active Directory? (preview)
+description: Learn about cross-tenant synchronization in Azure Active Directory.
+Last updated: 01/23/2023
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# What is cross-tenant synchronization? (preview)
+
+> [!IMPORTANT]
+> Cross-tenant synchronization is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+*Cross-tenant synchronization* automates creating, updating, and deleting [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) users across tenants in an organization. It enables users to access applications and collaborate across tenants, while still allowing the organization to evolve.
+
+Here are the primary goals of cross-tenant synchronization:
+
+- Seamless collaboration for a multi-tenant organization
+- Automate lifecycle management of B2B collaboration users in a multi-tenant organization
+- Automatically remove B2B accounts when a user leaves the organization
+
+## Why use cross-tenant synchronization?
+
+Cross-tenant synchronization automates creating, updating, and deleting B2B collaboration users. Users created with cross-tenant synchronization are able to access both Microsoft applications (such as Teams and SharePoint) and non-Microsoft applications (such as [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), [Adobe](../saas-apps/adobe-identity-management-provisioning-tutorial.md), and many more), regardless of which tenant the apps are integrated with. These users continue to benefit from the security capabilities in Azure AD, such as [Azure AD Conditional Access](../conditional-access/overview.md) and [cross-tenant access settings](../external-identities/cross-tenant-access-overview.md), and can be governed through features such as [Azure AD entitlement management](../governance/entitlement-management-overview.md).
+
+The following diagram shows how you can use cross-tenant synchronization to enable users to access applications across tenants in your organization.
++
+## Who should use cross-tenant synchronization?
+
+- Organizations that own multiple Azure AD tenants and want to streamline intra-organization cross-tenant application access.
+- Cross-tenant synchronization is **not** currently suitable for use across organizational boundaries.
+
+## Benefits
+
+With cross-tenant synchronization, you can do the following:
+
+- Automatically create B2B collaboration users within your organization and provide them access to the applications they need, without creating and maintaining custom scripts.
+- Improve the user experience and ensure that users can access resources, without receiving an invitation email and having to accept a consent prompt in each tenant.
+- Automatically update users and remove them when they leave the organization.
+
+## Teams and Microsoft 365
+
+Users created by cross-tenant synchronization will have the same experience when accessing Microsoft Teams and other Microsoft 365 services as B2B collaboration users created through a manual invitation. The [userType](../external-identities/user-properties.md) property on the B2B user, whether guest or member, doesn't currently change the end user experience. Over time, the member userType will be used by the various Microsoft 365 services to provide differentiated end user experiences for users in a multi-tenant organization.
+
+## Properties
+
+When you configure cross-tenant synchronization, you define a trust relationship between a source tenant and a target tenant. Cross-tenant synchronization has the following properties:
+
+- Based on the Azure AD provisioning engine.
+- Is a push process from the source tenant, not a pull process from the target tenant.
+- Supports pushing only internal members from the source tenant. It doesn't support syncing external users from the source tenant.
+- Users in scope for synchronization are configured in the source tenant.
+- Attribute mapping is configured in the source tenant.
+- Extension attributes are supported.
+- Target tenant administrators can stop a synchronization at any time.
+
+The following table shows the parts of cross-tenant synchronization and in which tenant they're configured.
+
+| Tenant | Cross-tenant<br/>access settings | Automatic redemption | Sync settings<br/>configuration | Users in scope |
+| :-: | :-: | :-: | :-: | :-: |
+| ![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>Source tenant | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| ![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>Target tenant | :heavy_check_mark: | :heavy_check_mark: | | |
+
+## Cross-tenant synchronization setting
++
+To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
+
+## Automatic redemption setting
++
+To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
+
+#### How do users know what tenants they belong to?
+
+For cross-tenant synchronization, users don't receive an email or have to accept a consent prompt. If users want to see what tenants they belong to, they can open their [My Account](https://support.microsoft.com/account-billing/my-account-portal-for-work-or-school-accounts-eab41bfe-3b9e-441e-82be-1f6e568d65fd) page and select **Organizations**. In the Azure portal, users can open their [Azure portal settings](../../azure-portal/set-preferences.md), view their **Directories + subscriptions**, and switch directories.
+
+For more information, including privacy information, see [Leave an organization as an external user](../external-identities/leave-the-organization.md).
+
+## Get started
+
+Here are the basic steps to get started using cross-tenant synchronization.
+
+#### Step 1: Define how to structure the tenants in your organization
+
+Cross-tenant synchronization provides a flexible solution to enable collaboration, but every organization is different. For example, you might have a central tenant, satellite tenants, or sort of a mesh of tenants. Cross-tenant synchronization supports any of these topologies. For more information, see [Topologies for cross-tenant synchronization](cross-tenant-synchronization-topology.md).
++
+#### Step 2: Enable cross-tenant synchronization in the target tenants
+
+In the target tenant where users are created, navigate to the **Cross-tenant access settings** page. Here you enable cross-tenant synchronization and the B2B automatic redemption settings by selecting the respective check boxes. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
++
+#### Step 3: Enable cross-tenant synchronization in the source tenants
+
+In any source tenant, navigate to the **Cross-tenant access settings** page and enable the B2B automatic redemption feature. Next, you use the **Cross-tenant synchronization** page to set up a cross-tenant synchronization job and specify:
+
+- Which users you want to synchronize
+- What attributes you want to include
+- Any transformations
+
+For anyone that has used Azure AD to [provision identities into a SaaS application](../app-provisioning/user-provisioning.md), this experience will be familiar. Once you have synchronization configured, you can start testing with a few users and make sure they're created with all the attributes that you need. When testing is complete, you can quickly add additional users to synchronize and roll out across your organization. For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md).
++
+## License requirements
+
+Using this feature requires Azure AD Premium P1 licenses. Each user who is synchronized with cross-tenant synchronization must have a P1 license in their home/source tenant. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+
+## Frequently asked questions
+
+#### Clouds
+
+Which clouds can cross-tenant synchronization be used in?
+
+- Cross-tenant synchronization is supported within the commercial and Azure Government clouds.
+- Synchronization is only supported between two tenants in the same cloud.
+- Cross-cloud (such as public cloud to Azure Government) isn't currently supported.
+
+#### Synchronization frequency
+
+How often does cross-tenant synchronization run?
+
+- The sync interval is currently fixed to start at 40-minute intervals. Sync duration varies based on the number of in-scope users. The initial sync cycle is likely to take significantly longer than the following incremental sync cycles.
+
+#### Scope
+
+How do I control what is synchronized into the target tenant?
+
+- In the source tenant, you can control which users are provisioned with the configuration or attribute-based filters. You can also control what attributes on the user object are synchronized. For more information, see [Scoping users or groups to be provisioned with scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization).
+
+If a user is removed from the scope of sync in a source tenant, will cross-tenant synchronization soft delete them in the target?
+
+- Yes. If a user is removed from the scope of sync in a source tenant, cross-tenant synchronization will soft delete them in the target tenant.
+
+If the sync relationship is severed, are external users previously managed by cross-tenant synchronization deleted in the target tenant?
+
+- No. No changes are made to the external users previously managed by cross-tenant synchronization if the relationship is severed (for example, if the cross-tenant synchronization policy is deleted).
+
+#### Object types
+
+What object types can be synchronized?
+
+- Azure AD users can be synchronized between tenants. (Groups, devices, and contacts aren't currently supported.)
+
+What user types can be synchronized?
+
+- Internal members can be synchronized from source tenants. Internal guests can't be synchronized from source tenants.
+- Users can be synchronized to target tenants as external members (default) or external guests.
+- For more information about the UserType definitions, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
+
+I have existing B2B collaboration users. What will happen to them?
+
+- Cross-tenant synchronization will match the user and make any necessary updates to the user, such as update the display name. By default, the UserType won't be updated from guest to member, but you can configure this in the attribute mappings.
+
+#### Attributes
+
+What user attributes can be synchronized?
+
+- Cross-tenant synchronization will sync commonly used attributes on the user object in Azure AD, including (but not limited to) displayName, userPrincipalName, and directory extension attributes.
+
+What attributes can't be synchronized?
+
+- Attributes including (but not limited to) managers, photos, custom security attributes, and user attributes outside of the directory can't be synchronized by cross-tenant synchronization.
+
+Can I control where user attributes are sourced/managed?
+
+- Cross-tenant synchronization doesn't offer direct control over source of authority. The user and its attributes are deemed authoritative at the source tenant. There are parallel source of authority workstreams that will evolve source of authority controls for users down to the attribute level, and a user object at the source may ultimately reflect multiple underlying sources. For the tenant-to-tenant process, the source tenant's values are still treated as authoritative for the sync into the target tenant (even if pieces actually originate elsewhere). Currently, there's no support for reversing the sync process's source of authority.
+- Cross-tenant synchronization only supports source of authority at the object level. That means all attributes of a user must come from the same source, including credentials. It isn't possible to reverse the source of authority or federation direction of a synchronized object.
+
+What happens if attributes for a synced user are changed in the target tenant?
+
+- Cross-tenant synchronization doesn't query for changes in the target. If no changes are made to the synced user in the source tenant, then user attribute changes made in the target tenant will persist. However, if changes are made to the user in the source tenant, then during the next synchronization cycle, the user in the target tenant will be updated to match the user in the source tenant.
+
+Can the target tenant manually block sign-in for a specific home/source tenant user that is synced?
+
+- If no changes are made to the synced user in the source tenant, then the block sign-in setting in the target tenant will persist. If a change is detected for the user in the source tenant, cross-tenant synchronization will re-enable sign-in for that user in the target tenant.
+
+#### Structure
+
+Can I sync a mesh between multiple tenants?
+
+- Cross-tenant synchronization is configured as a single-direction peer-to-peer sync, meaning sync is configured between one source and one target tenant. Multiple instances of cross-tenant synchronization can be configured to sync from a single source to multiple targets and from multiple sources into a single target. But only one sync instance can exist between a source and a target.
+- Cross-tenant synchronization only synchronizes users that are internal to the home/source tenant, ensuring that you can't end up with a loop where a user is written back to the same tenant.
+- Multiple topologies are supported. For more information, see [Topologies for cross-tenant synchronization](cross-tenant-synchronization-topology.md).
+
+Can I use cross-tenant synchronization across organizations (outside my multi-tenant organization)?
+
+- For privacy reasons, cross-tenant synchronization is intended for use within an organization. We recommend using [entitlement management](../governance/entitlement-management-overview.md) for inviting B2B collaboration users across organizations.
+
+Can cross-tenant synchronization be used to migrate users from one tenant to another tenant?
+
+- No. Cross-tenant synchronization isn't a migration tool because the source tenant is required for synchronized users to authenticate. In addition, tenant migrations would require migrating user data such as SharePoint and OneDrive.
+
+#### B2B collaboration
+
+Does cross-tenant synchronization resolve any present [B2B collaboration](../external-identities/what-is-b2b.md) limitations?
+
+- Since cross-tenant synchronization is built on existing B2B collaboration technology, existing limitations apply. Examples include (but aren't limited to):
+
+ [!INCLUDE [user-type-workload-limitations-include](../includes/user-type-workload-limitations-include.md)]
+
+#### B2B direct connect
+
+How does cross-tenant synchronization relate to [B2B direct connect](../external-identities/b2b-direct-connect-overview.md)?
+
+- B2B direct connect is the underlying identity technology required for [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels).
+- We recommend B2B collaboration for all other cross-tenant application access scenarios, including both Microsoft and non-Microsoft applications.
+- B2B direct connect and cross-tenant synchronization are designed to co-exist, and you can enable them both for broad coverage of cross-tenant scenarios.
+
+We're trying to determine the extent to which we'll need to utilize cross-tenant synchronization in our multi-tenant organization. Do you plan to extend support for B2B direct connect beyond Teams Connect?
+
+- There's no plan to extend support for B2B direct connect beyond Teams Connect shared channels.
+
+#### Microsoft 365
+
+Does cross-tenant synchronization enhance any cross-tenant Microsoft 365 app access user experiences?
+
+- Cross-tenant synchronization utilizes a feature that improves the user experience by suppressing the first-time B2B consent prompt and redemption process in each tenant.
+- Synchronized users will have the same cross-tenant Microsoft 365 experiences available to any other B2B collaboration user.
+
+#### Teams
+
+Does cross-tenant synchronization enhance any current Teams experiences?
+
+- Synchronized users will have the same cross-tenant Microsoft 365 experiences available to any other B2B collaboration user.
+
+#### Integration
+
+What federation options are supported for users in the target tenant back to the source tenant?
+
+- For each internal user in the source tenant, cross-tenant synchronization creates a federated external user (commonly used in B2B) in the target. It supports syncing internal users. This includes internal users federated to other identity systems using domain federation (such as [Active Directory Federation Services](/windows-server/identity/ad-fs/ad-fs-overview)). It doesn't support syncing external users.
+
+Does cross-tenant synchronization use System for Cross-Domain Identity Management (SCIM)?
+
+- No. Currently, Azure AD supports a SCIM client, but not a SCIM server. For more information, see [SCIM synchronization with Azure Active Directory](../fundamentals/sync-scim.md).
++
+## Next steps
+
+- [Topologies for cross-tenant synchronization](cross-tenant-synchronization-topology.md)
+- [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md)
active-directory Cross Tenant Synchronization Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-topology.md
+
+ Title: Topologies for cross-tenant synchronization (preview)
+description: Learn about topologies for cross-tenant synchronization in Azure Active Directory.
+Last updated: 01/23/2023
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# Topologies for cross-tenant synchronization (preview)
+
+> [!IMPORTANT]
+> Cross-tenant synchronization is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Cross-tenant synchronization provides a flexible solution to enable collaboration, but every organization is different. Each cross-tenant synchronization configuration provides one-way synchronization between two Azure AD tenants, which enables configuration of the following topologies.
+
+## Single source with a single target
+
+The following example shows the simplest topology where users in a single tenant need access to applications in the parent tenant.
++
+## Single source with multiple targets
+
+The following example shows a central user hub tenant where users need access to applications in smaller resource tenants across your organization.
++
+## Multiple sources with a single target
+
+The following example shows recently acquired tenants where users in multiple tenants need access to applications in the parent tenant.
++
+## Mesh peer-to-peer
+
+Your organization might have a more complex structure that resembles a mesh. The following example shows a topology where users flow across tenants in their organization. This topology is often used to enable people search scenarios where every user needs to be in every tenant to provide a unified gallery.
++
+Cross-tenant synchronization is one-way. An internal member user can be synchronized into multiple tenants as an external user. When the topology shows a synchronization going in both directions, it's a distinct set of users in each direction, and each arrow is a separate configuration.
+
+## Next steps
+
+- [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md)
+- [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md
+
+ Title: What is a multi-tenant organization in Azure Active Directory?
+description: Learn about multi-tenant organizations in Azure Active Directory.
+Last updated: 01/23/2023
+#Customer intent: As a dev, devops, or it admin, I want to
++
+# What is a multi-tenant organization in Azure Active Directory?
+
+This article provides an overview of multi-tenant organizations.
+
+## What is a tenant?
+
+A *tenant* is an instance of Azure Active Directory (Azure AD) in which information about a single organization resides, including organizational objects such as users, groups, and devices, along with application registrations such as Microsoft 365 and third-party applications. A tenant also contains access and compliance policies for resources, such as applications registered in the directory. The primary functions served by a tenant include identity authentication and resource access management.
+
+From an Azure AD perspective, a tenant forms an identity and access management scope. For example, a tenant administrator makes an application available to some or all of the users in the tenant and enforces access policies on that application for users in that tenant. In addition, a tenant contains organizational branding data that drives end-user experiences, such as the organization's email domains and SharePoint URLs used by employees in that organization. From a Microsoft 365 perspective, a tenant forms the default collaboration and licensing boundary. For example, users in Microsoft Teams or Microsoft Outlook can easily find and collaborate with other users in their tenant, but don't have the ability to find or see users in other tenants.
+
+Tenants contain privileged organizational data and are securely isolated from other tenants. In addition, tenants can be configured to have data persisted and processed in a specific region or cloud, which enables organizations to use tenants as a mechanism to meet data residency and handling compliance requirements.
+
+## What is a multi-tenant organization?
+
+A *multi-tenant organization* is an organization that has more than one instance of Azure AD. Here are the primary reasons why an organization might have multiple tenants:
+
+- **Conglomerates:** Organizations with multiple subsidiaries or business units that operate independently.
+- **Mergers and acquisitions:** Organizations that merge or acquire companies.
+- **Divestiture activity:** In a divestiture, one organization splits off part of its business to form a new organization or sell it to an existing organization.
+- **Multiple clouds:** Organizations that have compliance or regulatory needs to exist in multiple cloud environments.
+- **Multiple geographical boundaries:** Organizations that operate in multiple geographic locations with various residency regulations.
+- **Test or staging tenants:** Organizations that need multiple tenants for testing or staging purposes before deploying more broadly to primary tenants.
+- **Department or employee-created tenants:** Organizations where departments or employees have created tenants for development, testing, or separate control.
+
+## Multi-tenant challenges
+
+Your organization may have recently acquired a new company, merged with another company, or restructured based on newly formed business units. If you have disparate identity management systems, it might be challenging for users in different tenants to access resources and collaborate.
+
+The following diagram shows how users in other tenants might not be able to access applications across tenants in your organization.
++
+As your organization evolves, your IT team must adapt to meet the changing needs. This often includes integrating with an existing tenant or forming a new one. Regardless of how the identity infrastructure is managed, it's critical that users have a seamless experience accessing resources and collaborating. Today, you may be using custom scripts or on-premises solutions to bring the tenants together to provide a seamless experience across tenants.
+
+## B2B collaboration
+
+To enable users across tenants to collaborate, you can use [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. Once the external user has redeemed their invitation or completed sign-up, they're represented in your tenant as a user object. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data.
+
+Here are the primary constraints with using B2B collaboration across multiple tenants:
+
+- Administrators must invite users using the B2B invitation process or build an onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration).
+- Administrators might have to synchronize users using custom scripts.
+- Depending on automatic redemption settings, users might need to accept a consent prompt and follow a redemption process in each tenant.
+- By default, users are of type external guest, which has different permissions than external member and might not be the desired user experience.
++
+## B2B direct connect
+
+To enable users across tenants to collaborate in [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels), you can use [Azure AD B2B direct connect](../external-identities/b2b-direct-connect-overview.md). B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration in Teams. When the trust is established, the B2B direct connect user has single sign-on access using credentials from their home tenant.
+
+Here's the primary constraint with using B2B direct connect across multiple tenants:
+
+- Currently, B2B direct connect works only with Teams Connect shared channels.
++
+## Cross-tenant synchronization (preview)
+
+> [!IMPORTANT]
+> Cross-tenant synchronization is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+If you want users to have a more seamless collaboration experience across tenants, you can use [cross-tenant synchronization](./cross-tenant-synchronization-overview.md). Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. Cross-tenant synchronization builds on the B2B collaboration functionality and utilizes existing B2B cross-tenant access settings. Users are represented in the target tenant as a B2B collaboration user object.
+
+Here are the primary benefits with using cross-tenant synchronization:
+
+- Automatically create B2B collaboration users within your organization and provide them access to the applications they need, without creating and maintaining custom scripts.
+- Improve the user experience and ensure that users can access resources, without receiving an invitation email and having to accept a consent prompt in each tenant.
+- Automatically update users and remove them when they leave the organization.
+
+Here are the primary constraints with using cross-tenant synchronization across multiple tenants:
+
+- Doesn't enhance the current Teams or Microsoft 365 experiences. Synchronized users will have the same cross-tenant Teams and Microsoft 365 experiences available to any other B2B collaboration user.
+- Doesn't synchronize groups, devices, or contacts.
++
+## Compare multi-tenant capabilities
+
+Depending on the needs of your organization, you can use any combination of cross-tenant synchronization, B2B collaboration, and B2B direct connect. The following table compares the capabilities of each feature. For more information about different external identity scenarios, see [Comparing External Identities feature sets](../external-identities/external-identities-overview.md#comparing-external-identities-feature-sets).
+
+| | Cross-tenant synchronization<br/>(internal) | B2B collaboration<br/>(Org-to-org external) | B2B direct connect<br/>(Org-to-org external) |
+| --- | --- | --- | --- |
+| **Purpose** | Users can seamlessly access apps/resources across the same organization, even if they're hosted in different tenants. | Users can access apps/resources hosted in external tenants, usually with limited guest privileges. Depending on automatic redemption settings, users might need to accept a consent prompt in each tenant. | Users can access Teams Connect shared channels hosted in external tenants. |
+| **Value** | Enables collaboration across organizational tenants. Administrators don't have to manually invite and synchronize users between tenants to ensure continuous access to apps/resources within the organization. | Enables external collaboration. More control and monitoring for administrators by managing the B2B collaboration users. Administrators can limit the access that these external users have to their apps/resources. | Enables external collaboration within Teams Connect shared channels only. More convenient for administrators because they don't have to manage B2B users. |
+| **Primary administrator workflow** | Configure the cross-tenant synchronization engine to synchronize users between multiple tenants as B2B collaboration users. | Add external users to the resource tenant by using the B2B invitation process or build your own onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). | Configure cross-tenant access to provide external users inbound access to the tenant using the credentials for their home tenant. |
+| **Trust level** | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | Low to mid trust. User objects can be tracked easily and managed with granular controls. | Mid trust. B2B direct connect users are less easy to track, mandating a certain level of trust with the external organization. |
+| **Effect on users** | Within the same organization, users are synchronized from their home tenant to the resource tenant as B2B collaboration users. | External users are added to a tenant as B2B collaboration users. | Users access the resource tenant using the credentials for their home tenant. User objects aren't created in the resource tenant. |
+| **User type** | B2B collaboration user<br/>- External member (default)<br/>- External guest | B2B collaboration user<br/>- External member<br/>- External guest (default) | B2B direct connect user<br/>- N/A |
+
+The following diagram shows how cross-tenant synchronization, B2B collaboration, and B2B direct connect could be used together.
++
+## Terminology
+
+To better understand multi-tenant organizations, refer to the following list of terms.
+
+| Term | Definition |
+| --- | --- |
+| tenant | An instance of Azure Active Directory (Azure AD). |
+| organization | The top level of a business hierarchy. |
+| multi-tenant organization | An organization that has more than one instance of Azure AD. |
+| cross-tenant synchronization | A one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. |
+| cross-tenant access settings | Settings to manage collaboration with external Azure AD organizations. |
+| organizational settings | Cross-tenant access settings for specific Azure AD organizations. |
+| configuration | An application and underlying service principal in Azure AD that includes the settings (such as target tenant, user scope, and attribute mappings) needed for cross-tenant synchronization. |
+| provisioning | The process of automatically creating or synchronizing objects across a boundary. |
+| automatic redemption | A B2B setting to automatically redeem invitations so newly created users don't receive an invitation email or have to accept a consent prompt when added to a target tenant. |
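+
+Automatic redemption is configured per partner organization in cross-tenant access settings. The following Python sketch shows one way an administrator could check whether automatic redemption of invitations is enabled for a partner tenant. It assumes the Microsoft Graph `crossTenantAccessPolicy/partners` endpoint and the `automaticUserConsentSettings` property names, plus placeholder tenant ID and access token values, so treat it as illustrative rather than authoritative.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+TOKEN = "<access-token>"                   # placeholder; acquire with your usual auth flow (for example, MSAL)
+PARTNER_TENANT_ID = "<partner-tenant-id>"  # placeholder partner (home) tenant ID
+
+# Read the partner-specific cross-tenant access configuration.
+resp = requests.get(
+    f"{GRAPH}/policies/crossTenantAccessPolicy/partners/{PARTNER_TENANT_ID}",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+)
+resp.raise_for_status()
+partner = resp.json()
+
+# automaticUserConsentSettings controls whether invited users skip the consent prompt.
+consent = partner.get("automaticUserConsentSettings", {})
+print("Inbound automatic redemption enabled: ", consent.get("inboundAllowed"))
+print("Outbound automatic redemption enabled:", consent.get("outboundAllowed"))
+```
+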
+
+## Next steps
+
+- [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md)
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
+
+ Title: Privileged Identity Management (PIM) for Groups (preview) - Azure Active Directory
+description: How to manage Azure AD Privileged Identity Management (PIM) for Groups.
+
+documentationcenter: ''
++
+ms.assetid:
+++
+ na
+ Last updated : 01/11/2023+++++
+#Customer intent: As a dev or IT admin, I want to manage group assignments in PIM, so that I can grant eligibility for elevation to a role assigned via group membership
++
+# Privileged Identity Management (PIM) for Groups (preview)
+
+With Azure Active Directory (Azure AD), part of Microsoft Entra, you can provide users with just-in-time group membership and just-in-time group ownership by using the Azure AD Privileged Identity Management for Groups feature. These groups can be used to govern access to a variety of scenarios, including Azure AD roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, and third-party applications.
+
+## What is PIM for Groups?
+
+PIM for Groups is part of Azure AD Privileged Identity Management. Alongside PIM for Azure AD Roles and PIM for Azure Resources, PIM for Groups enables users to activate the ownership or membership of an Azure AD security group or Microsoft 365 group. Groups can be used to govern access to a variety of scenarios, including Azure AD roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, and third-party applications.
+
+With PIM for Groups, you can use policies similar to the ones you use in PIM for Azure AD Roles and PIM for Azure Resources: you can require approval for membership or ownership activation, enforce multi-factor authentication (MFA), require justification, limit maximum activation time, and more. Each group in PIM for Groups has two policies: one for activation of membership and another for activation of ownership in the group. Until January 2023, the PIM for Groups feature was called "Privileged Access Groups".
+
+>[!Note]
+> For groups used for elevating into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from less-privileged administrators. For example, the Helpdesk Administrator has permission to reset an eligible user's password.
+
+## What are Azure AD role-assignable groups?
+
+With Azure Active Directory (Azure AD), part of Microsoft Entra, you can assign a cloud Azure AD security group or Microsoft 365 group to an Azure AD role. This is possible only with groups that are created as role-assignable.
+
+To learn more about Azure AD role-assignable groups, see [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md).
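+
+A role-assignable group can also be created programmatically. The minimal sketch below uses Microsoft Graph and the `isAssignableToRole` property; the group name and mail nickname are hypothetical, the token is assumed to be acquired separately, and the caller needs sufficient privileges (such as Privileged Role Administrator).
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+TOKEN = "<access-token>"  # placeholder; acquire via your usual auth flow (for example, MSAL)
+
+# isAssignableToRole must be set when the group is created; it can't be changed afterwards.
+group = {
+    "displayName": "Contoso Helpdesk Admins",   # hypothetical group name
+    "mailNickname": "contosoHelpdeskAdmins",    # hypothetical mail nickname
+    "mailEnabled": False,
+    "securityEnabled": True,
+    "isAssignableToRole": True,
+}
+
+resp = requests.post(
+    f"{GRAPH}/groups",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    json=group,
+)
+resp.raise_for_status()
+print("Created role-assignable group:", resp.json()["id"])
+```
+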
+
+Role-assignable groups benefit from extra protections compared to non-role-assignable groups:
+- For role-assignable groups, only the Global Administrator, Privileged Role Administrator, or the group Owner can manage the group. Also, no other users can change the credentials of the users who are (active) members of the group. This feature helps prevent an admin from elevating to a higher privileged role without going through a request and approval procedure.
+- For non-role-assignable groups, various Azure AD roles can manage the group, including Exchange Administrators, Groups Administrators, User Administrators, and others. Also, various Azure AD roles can change the credentials of the users who are (active) members of the group, including Authentication Administrators, Helpdesk Administrators, User Administrators, and others.
+
+To learn more about Azure AD built-in roles and their permissions, see [Azure AD built-in roles](../roles/permissions-reference.md).
+
+One Azure AD tenant can have up to 500 role-assignable groups. To learn more about Azure AD service limits and restrictions, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
+
+The Azure AD role-assignable group feature is not part of Azure AD Privileged Identity Management (Azure AD PIM). It requires an Azure AD Premium P1 or P2 license.
+
+## Relationship between role-assignable groups and PIM for Groups
+
+Groups can be role-assignable or non-role-assignable, and a group can be enabled or not enabled in PIM for Groups. These are independent properties of the group. Any Azure AD security group and any Microsoft 365 group (except dynamic groups and groups synchronized from an on-premises environment) can be enabled in PIM for Groups. The group does not have to be a role-assignable group to be enabled in PIM for Groups.
+
+If you want to assign an Azure AD role to a group, it has to be role-assignable. Even if you do not intend to assign an Azure AD role to the group, if the group provides access to sensitive resources, it is still recommended that you consider creating the group as role-assignable. This is because of the extra protections role-assignable groups have; see "What are Azure AD role-assignable groups?" earlier in this article.
+
+Until January 2023, every Privileged Access Group (the former name for the PIM for Groups feature) had to be a role-assignable group. This restriction has now been removed. Because of that, it is now possible to enable more than 500 groups per tenant in PIM, but only up to 500 groups can be role-assignable.
+
+## Making group of users eligible for Azure AD role
+
+There are two ways to make a group of users eligible for an Azure AD role:
+1. Make active assignments of users to the group, and then assign the group to a role as eligible for activation.
+2. Make an active assignment of a role to the group, and then assign users to be eligible for group membership.
+
+To provide a group of users with just-in-time access to Azure AD directory roles with permissions in SharePoint, Exchange, or the Security & Microsoft Purview compliance portal (for example, the Exchange Administrator role), be sure to make active assignments of users to the group, and then assign the group to a role as eligible for activation (Option #1 above). If you instead choose to make an active assignment of a role to the group and assign users to be eligible for group membership, it may take significant time to have all permissions of the role activated and ready to use.
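+
+Option #1 can also be performed through the Microsoft Graph PIM APIs. The sketch below makes a group eligible for an Azure AD role by using the `roleEligibilityScheduleRequests` endpoint; the role definition ID, group ID, and token are placeholders, so adapt it to your tenant before use.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+TOKEN = "<access-token>"  # placeholder; acquire via your usual auth flow
+
+# Make the group (already containing active members) eligible for an Azure AD role.
+request_body = {
+    "action": "adminAssign",
+    "justification": "Group is eligible to activate the role",  # example justification
+    "roleDefinitionId": "<role-definition-id>",                 # placeholder Azure AD role definition ID
+    "directoryScopeId": "/",                                    # tenant-wide scope
+    "principalId": "<group-object-id>",                         # placeholder group object ID
+    "scheduleInfo": {
+        "startDateTime": "2023-01-12T00:00:00Z",
+        "expiration": {"type": "noExpiration"},
+    },
+}
+
+resp = requests.post(
+    f"{GRAPH}/roleManagement/directory/roleEligibilityScheduleRequests",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    json=request_body,
+)
+resp.raise_for_status()
+print("Eligibility request status:", resp.json().get("status"))
+```
+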
+
+## Next steps
+
+- [Bring groups into Privileged Identity Management (preview)](groups-discover-groups.md)
+- [Assign eligibility for a group (preview) in Privileged Identity Management](groups-assign-member-owner.md)
+- [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md)
+- [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
active-directory Concept Privileged Access Versus Role Assignable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-privileged-access-versus-role-assignable.md
- Title: What's the difference between Privileged Access groups and role-assignable groups - Azure AD | Microsoft Docs
-description: Learn how to tell the difference between Privileged Access groups and role-assignable groups in Azure AD Privileged Identity Management (PIM).
------- Previously updated : 06/24/2022------
-# What's the difference between Privileged Access groups and role-assignable groups?
-
-Privileged Identity Management (PIM) supports the ability to enable privileged access on role-assignable groups. But because an available role-assignable group is a prerequisite for creating a privileged access group, this article explains the differences and how to take advantage of them.
-
-## What are Azure AD role-assignable groups?
-
-Azure Active Directory (Azure AD), part of Microsoft Entra, lets you assign a cloud Azure AD security group to an Azure AD role. A Global Administrator or Privileged Role Administrator must create a new security group and make the group role-assignable at creation time. Only the Global Administrator, Privileged Role Administrator, or the group Owner role assignments can change the membership of the group. Also, no other users can reset the password of the users who are members of the group. This feature helps prevent an admin from elevating to a higher privileged role without going through a request and approval procedure.
-
-## What are Privileged Access groups?
-
-Privileged Access groups enable users to elevate to the owner or member role of an Azure AD security group. This feature allows you to set up just-in-time workflows for not only Azure AD and Azure roles in batches, and also enables just-in-time scenarios for other use cases like Azure SQL, Azure Key Vault, Intune, or other application roles. For more information, see [Management capabilities for Privileged Access groups](groups-features.md).
-
->[!Note]
->For privileged access groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from less-privileged administrators. For example, the Helpdesk Administrator has permission to reset an eligible user's passwords.
-
-## When to use a role-assignable group
-
-You can set up just-in-time access to permissions and roles beyond Azure AD and Azure Resource. If you have other resources whose authorization can be connected to an Azure AD security group (for Azure Key Vault, Intune, Azure SQL, or other apps and services), you should enable privileged access on the group and assign users as eligible for membership in the group.
-
-If you want to assign a group to an Azure AD or Azure Resource role and require elevation through a PIM process, there's only one way to do it:
--- **Assign the group persistently to a role**. Then, in PIM, you can grant users eligible role assignments to the group. Each eligible user must activate their role assignment to become members of the group, and activation is subject to approval policies. This path requires a role-assignable group to be enabled in PIM as a privileged access group for the Azure AD role.-
-This method allows maximum granularity of permissions. Use this method to:
--- Assign a group to multiple Azure AD or Azure resource roles and have users activate once to get access to multiple roles.-- Maintain different activation policies for different sets of users to access an Azure AD or Azure resource role. For example, if you want some users to be approved before becoming a Global Administrator while allowing other users to be auto-approved, you could set up two privileged access groups, assign them both persistently (a "permanent" assignment in Privileged Identity Management) to the Global Administrator role and then use a different activation policy for the Member role for each group.-
-## Next steps
--- [Approve or deny requests for Azure AD roles](azure-ad-pim-approval-workflow.md)-- [Approve or deny requests for Azure resource roles](pim-resource-roles-approval-workflow.md)-- [Approve activation requests for privileged access group members and owners (preview)](groups-approval-workflow.md)
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
Title: Activate privileged access group roles in PIM - Azure AD | Microsoft Docs
-description: Learn how to activate your privileged access group roles in Azure AD Privileged Identity Management (PIM).
+ Title: Activate your group membership or ownership in Privileged Identity Management - Azure Active Directory
+description: Learn how to activate your group membership or ownership in Privileged Identity Management (PIM).
documentationcenter: ''
na Previously updated : 08/24/2022 Last updated : 01/12/2023
-# Activate my privileged access group roles in Privileged Identity Management
+# Activate your group membership or ownership in Privileged Identity Management
-Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to allow eligible role members for privileged access groups to schedule role activation for a specified date and time. They can also select an activation duration up to the maximum duration configured by administrators.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privileged Identity Management (PIM) to have just-in-time membership in a group or just-in-time ownership of a group.
-This article is for eligible members who want to activate their privileged access group role in Privileged Identity Management.
+This article is for eligible members or owners who want to activate their group membership or ownership in PIM.
## Activate a role
-When you need to take on a privileged access group role, you can request activation by using the **My roles** navigation option in Privileged Identity Management.
+When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM.
-1. [Sign in to Azure AD portal](https://aad.portal.azure.com) with Global Administrator or group Owner permissions.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Open [Privileged Identity Management](https://portal.azure.com/#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart).
+1. Select **Azure AD Privileged Identity Management -> My roles -> Groups (Preview)**.
+ >[!NOTE]
+ > You may also use this [short link](https://aka.ms/pim) to open the **My roles** page directly.
-1. Select **Privileged access groups (Preview)** and then select **Activate role** to open the **My roles** page for privileged access groups.
+1. Using the **Eligible assignments** blade, review the list of groups that you have eligible membership or ownership for.
- ![Privileged access roles page in PIM](./media/groups-activate-roles/groups-select-group.png)
+ :::image type="content" source="media/pim-for-groups/pim-group-6.png" alt-text="Screenshot of the list of groups that you have eligible membership or ownership for." lightbox="media/pim-for-groups/pim-group-6.png":::
-1. On the **My roles** page, select **Activate** on the row of the eligible assignment you want to activate.
+1. Select **Activate** for the eligible assignment you want to activate.
- ![Activate link on the eligible role assignment row](./media/groups-activate-roles/groups-activate-link.png)
+1. Depending on the group's setting, you may be asked to provide multi-factor authentication or another form of credential.
-1. If your role requires multi-factor authentication, select **Verify your identity before proceeding**. You only have to authenticate once per session.
+1. If necessary, specify a custom activation start time. The membership or ownership is activated only after the selected time.
- ![Verify my identity with MFA before role activation](./media/groups-activate-roles/groups-my-roles-mfa.png)
+1. Depending on the group's setting, justification for activation may be required. If required, provide it in the **Reason** box.
-1. Select **Verify my identity** and follow the instructions to provide additional security verification.
+ :::image type="content" source="media/pim-for-groups/pim-group-7.png" alt-text="Screenshot of where to provide a justification in the Reason box." lightbox="media/pim-for-groups/pim-group-7.png":::
- ![Screen to provide security verification such as a PIN code](./media/groups-activate-roles/groups-mfa-enter-code.png)
-
-1. If necessary, specify a custom activation start time. The member or owner is to be activated only after the selected time.
-
-1. In the **Reason** box, enter the reason for the activation request.
-
- ![Activate page with duration and justification](./media/groups-activate-roles/groups-activate-page.png)
-
-1. Select **Activate**.
+1. Select **Activate**.
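+
+If you prefer to script activation, the sketch below submits a self-activation request through the Microsoft Graph PIM for Groups API (in beta at the time of writing). The endpoint path, group ID, duration, and token are assumptions or placeholders, so verify them against the current Graph documentation before relying on this.
+
+```python
+import requests
+
+GRAPH_BETA = "https://graph.microsoft.com/beta"
+TOKEN = "<access-token>"  # placeholder; acquire via your usual auth flow
+
+# Self-activate eligible group membership for eight hours.
+activation = {
+    "accessId": "member",                     # use "owner" to activate ownership instead
+    "principalId": "<your-user-object-id>",   # placeholder
+    "groupId": "<group-object-id>",           # placeholder
+    "action": "selfActivate",
+    "justification": "Temporary access for incident investigation",  # example reason
+    "scheduleInfo": {
+        "startDateTime": "2023-01-12T09:00:00Z",
+        "expiration": {"type": "afterDuration", "duration": "PT8H"},
+    },
+}
+
+resp = requests.post(
+    f"{GRAPH_BETA}/identityGovernance/privilegedAccess/group/assignmentScheduleRequests",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    json=activation,
+)
+resp.raise_for_status()
+print("Activation request status:", resp.json().get("status"))
+```
+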
If the [role requires approval](pim-resource-roles-approval-workflow.md) to activate, an Azure notification appears in the upper right corner of your browser informing you the request is pending approval. ## View the status of your requests
-You can view the status of your pending requests to activate.
+You can view the status of your pending activation requests. This is especially important when your requests require approval from another person.
-1. Open Azure AD Privileged Identity Management.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Select **My requests** to see a list of your Azure AD role and privileged access group role requests.
+1. Select **Azure AD Privileged Identity Management -> My requests -> Groups (Preview)**.
-1. Scroll to the right, if needed, to view the **Request Status** column.
+1. Review the list of requests.
-## Cancel a pending request
+ :::image type="content" source="media/pim-for-groups/pim-group-8.png" alt-text="Screenshot of where to review the list of requests." lightbox="media/pim-for-groups/pim-group-8.png":::
-If you do not require activation of a role that requires approval, you can cancel a pending request at any time.
-1. Open Azure AD Privileged Identity Management.
+## Cancel a pending request
-1. Select **My requests**.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. For the role that you want to cancel, select the **Cancel** link.
+1. Select **Azure AD Privileged Identity Management -> My requests -> Groups (Preview)**.
- When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
+ :::image type="content" source="media/pim-for-groups/pim-group-8.png" alt-text="Screenshot of where to select the request you want to cancel." lightbox="media/pim-for-groups/pim-group-8.png":::
-## Deactivate a role assignment
+1. For the request that you want to cancel, select **Cancel**.
-When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
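+
+A pending request can also be canceled programmatically. The sketch below assumes the `cancel` action on the Microsoft Graph (beta) PIM for Groups `assignmentScheduleRequests` endpoint and a placeholder request ID; treat both as assumptions to verify.
+
+```python
+import requests
+
+GRAPH_BETA = "https://graph.microsoft.com/beta"
+TOKEN = "<access-token>"             # placeholder access token
+REQUEST_ID = "<pending-request-id>"  # placeholder; the ID of your pending activation request
+
+# Cancel the pending request; a new request must be submitted to activate later.
+resp = requests.post(
+    f"{GRAPH_BETA}/identityGovernance/privilegedAccess/group/assignmentScheduleRequests/{REQUEST_ID}/cancel",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+)
+resp.raise_for_status()
+print("Cancel returned HTTP", resp.status_code)
+```
+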
## Troubleshoot ### Permissions are not granted after activating a role
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
+When you activate a role in PIM, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
1. Sign out of the Azure portal and then sign back in.
-1. In Privileged Identity Management, verify that you are listed as the member of the role.
+1. In PIM, verify that you are listed as the member of the role.
## Next steps -- [Extend or renew privileged access group roles in Privileged Identity Management](groups-renew-extend.md)-- [Assign my privileged access group roles in Privileged Identity Management](groups-assign-member-owner.md)
+- [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
+
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
Title: Approve activation requests for group members and owners in Privileged Identity Management - Azure AD
-description: Learn how to approve or deny requests for role-assignable groups in Azure AD Privileged Identity Management (PIM).
+ Title: Approve activation requests for group members and owners (preview) - Azure Active Directory
+description: Learn how to approve activation requests for group members and owners (preview) in Azure AD Privileged Identity Management (PIM).
na Previously updated : 08/16/2022 Last updated : 01/12/2023 -+
-# Approve activation requests for privileged access group members and owners (preview)
+# Approve activation requests for group members and owners (preview)
-With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure privileged access group members and owners to require approval for activation, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group to reduce workload for the privileged role administrator. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure activation of group membership and ownership to require approval, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must submit a new request. The 24-hour approval time window is not configurable.
-Follow the steps in this article to approve or deny requests for Azure resource roles.
+Follow the steps in this article to approve or deny requests for group membership or ownership.
## View pending requests As a delegated approver, you'll receive an email notification when a group membership or ownership request is pending your approval. You can view pending requests in Privileged Identity Management.
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Open **Azure AD Privileged Identity Management**.
+1. Select **Azure AD Privileged Identity Management -> Approve requests -> Groups (Preview)**.
-1. Select **Approve requests**.
+1. In the **Requests for role activations** section, you'll see a list of requests pending your approval.
- ![Approve requests - Azure resources page showing request to review](./media/groups-approval-workflow/groups-select-request.png)
-
- In the **Requests for role activations** section, you'll see a list of requests pending your approval.
+ :::image type="content" source="media/pim-for-groups/pim-group-9.png" alt-text="Screenshot of requests for role activations." lightbox="media/pim-for-groups/pim-group-9.png":::
## Approve requests 1. Find and select the request that you want to approve and select **Approve**.
- ![Screenshot that shows the "Approve requests" page with the "Approve" and "Confirm" buttons highlighted.](./media/groups-approval-workflow/groups-confirm-approval.png)
- 1. In the **Justification** box, enter the business justification. 1. Select **Confirm**. An Azure notification is generated by your approval.
+ :::image type="content" source="media/pim-for-groups/pim-group-10.png" alt-text="Screenshot of an Azure notification that is generated by your approval." lightbox="media/pim-for-groups/pim-group-10.png":::
+ ## Deny requests 1. Find and select the request that you want to deny and select **Deny**.
- ![Approve requests - approve or deny pane with details and Justification box](./media/groups-approval-workflow/groups-confirm-denial.png)
- 1. In the **Justification** box, enter the business justification. 1. Select **Confirm**. An Azure notification is generated by the denial.
When you activate a role in Privileged Identity Management, the activation may n
## Next steps -- [Create an access review of Privileged Access Groups (preview)](../governance/create-access-review-privileged-access-groups.md)-- [Extend or renew group assignments in Privileged Identity Management](pim-resource-roles-renew-extend.md)-- [Email notifications in Privileged Identity Management](pim-email-notifications.md)-- [Approve or deny requests for group assignments in Privileged Identity Management](azure-ad-pim-approval-workflow.md)
+- [Configure PIM for Groups settings (preview)](groups-role-settings.md)
+
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
Title: Assign eligible owners and members for privileged access groups - Azure Active Directory
-description: Learn how to assign eligible owners or members of a role-assignable group in Azure AD Privileged Identity Management (PIM).
+ Title: Assign eligibility for a group (preview) in Privileged Identity Management - Azure Active Directory
+description: Learn how to assign eligibility for a group (preview) in Privileged Identity Management.
documentationcenter: ''
na Previously updated : 07/29/2022 Last updated : 01/12/2023 -+
-# Assign eligibility for a privileged access group (preview) in Privileged Identity Management
+# Assign eligibility for a group (preview) in Privileged Identity Management
-Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, can help you manage the eligibility and activation of assignments to privileged access groups in Azure AD. You can assign eligibility to members or owners of the group.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privileged Identity Management (PIM) to manage just-in-time membership in a group or just-in-time ownership of a group.
+
+When a membership or ownership is assigned, the assignment:
-When a role is assigned, the assignment:
- Can't be assigned for a duration of less than five minutes - Can't be removed within five minutes of it being assigned
When a role is assigned, the assignment:
## Assign an owner or member of a group
-Follow these steps to make a user eligible to be a member or owner of a privileged access group.
-
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com/) with a user in the [Global Administrator](../roles/permissions-reference.md#global-administrator) role, the Privileged Role Administrator role, or the group Owner role.
-
-1. Select **Groups** and then select the [role-assignable group](concept-privileged-access-versus-role-assignable.md) you want to manage. You can search or filter the list.
-
- ![find a role-assignable group to manage in PIM](./media/groups-assign-member-owner/groups-list-in-azure-ad.png)
+Follow these steps to make a user an eligible member or owner of a group. You will need to have the Global Administrator role, the Privileged Role Administrator role, or be an Owner of the group.
-1. Open the group and select **Privileged access (Preview)**.
+1. [Sign in to Azure AD](https://aad.portal.azure.com).
- ![Open the Privileged Identity Management experience](./media/groups-assign-member-owner/groups-discover-groups.png)
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)** and view groups that are already enabled for PIM for Groups.
-1. Select **Add assignments**.
+ :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
- ![New assignment pane](./media/groups-assign-member-owner/groups-add-assignment.png)
+1. Select the group you need to manage.
-1. Select the members or owners you want to make eligible for the privileged access group.
+1. Select **Assignments**.
- ![Screenshot that shows the "Add assignments" page with the "Select a member or group" pane open and the "Select" button highlighted.](./media/groups-assign-member-owner/add-assignments.png)
+1. Use the **Eligible assignments** and **Active assignments** blades to review existing membership or ownership assignments for the selected group.
-1. Select **Next** to set the membership or ownership duration.
+ :::image type="content" source="media/pim-for-groups/pim-group-3.png" alt-text="Screenshot of where to review existing membership or ownership assignments for selected group." lightbox="media/pim-for-groups/pim-group-3.png":::
- ![Select a member or group pane](./media/groups-assign-member-owner/assignment-duration.png)
+1. Select **Add assignments**.
-1. In the **Assignment type** list, select **Eligible** or **Active**. Privileged access groups provide two distinct assignment types:
+1. Under **Select role**, choose between **Member** and **Owner** to assign membership or ownership.
- - **Eligible** assignments require the member of the role to perform an action to use the role. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers.
+1. Select the members or owners you want to make eligible for the group.
- > [!Important]
- > For privileged access groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from another administrator with permission to reset an eligible user's passwords.
+ :::image type="content" source="media/pim-for-groups/pim-group-4.png" alt-text="Screenshot of where to select the members or owners you want to make eligible for the group." lightbox="media/pim-for-groups/pim-group-4.png":::
- - **Active** assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role at all times.
+1. Select **Next**.
-1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on your organization's settings, the check box might not appear or might not be editable. For more information, check out the [Configure privileged access group settings](groups-role-settings.md#assignment-duration) article.
+1. In the **Assignment type** list, select **Eligible** or **Active**. Privileged Identity Management provides two distinct assignment types:
+ - Eligible assignments require the member or owner to perform an activation to use the role. Activations may require performing multi-factor authentication (MFA), providing a business justification, or requesting approval from designated approvers.
+ > [!IMPORTANT]
+ > For groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from another administrator with permission to reset an eligible user's passwords.
+ - Active assignments don't require the member to perform an activation to use the role. Members or owners assigned as active have the privileges assigned to the role at all times.
-1. When finished, select **Assign**.
+1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure privileged access group settings (preview) in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
-1. To create the new role assignment, select **Add**. A notification of the status is displayed.
+ :::image type="content" source="media/pim-for-groups/pim-group-5.png" alt-text="Screenshot of where to configure the setting for add assignments." lightbox="media/pim-for-groups/pim-group-5.png":::
- ![New assignment - Notification](./media/groups-assign-member-owner/groups-assignment-notification.png)
+1. Select **Assign**.
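+
+Eligibility can also be assigned programmatically. The sketch below creates an eligible member assignment through the Microsoft Graph PIM for Groups API (in beta at the time of writing); the endpoint path, IDs, and token are assumptions or placeholders, so verify them before use.
+
+```python
+import requests
+
+GRAPH_BETA = "https://graph.microsoft.com/beta"
+TOKEN = "<access-token>"  # placeholder; acquire via your usual auth flow
+
+# Make a user eligible for membership in the group for one year.
+eligibility = {
+    "accessId": "member",                 # use "owner" for eligible ownership
+    "principalId": "<user-object-id>",    # placeholder
+    "groupId": "<group-object-id>",       # placeholder
+    "action": "adminAssign",
+    "justification": "Eligible assignment for project work",  # example justification
+    "scheduleInfo": {
+        "startDateTime": "2023-01-12T00:00:00Z",
+        "expiration": {"type": "afterDateTime", "endDateTime": "2024-01-12T00:00:00Z"},
+    },
+}
+
+resp = requests.post(
+    f"{GRAPH_BETA}/identityGovernance/privilegedAccess/group/eligibilityScheduleRequests",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    json=eligibility,
+)
+resp.raise_for_status()
+print("Eligibility request status:", resp.json().get("status"))
+```
+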
## Update or remove an existing role assignment
-Follow these steps to update or remove an existing role assignment.
+Follow these steps to update or remove an existing role assignment. You will need to have the Global Administrator role, the Privileged Role Administrator role, or be an Owner of the group.
-1. [Sign in to Azure AD](https://aad.portal.azure.com) with Global Administrator or group Owner permissions.
-1. Select **Groups** and then select the role-assignable group you want to manage. You can search or filter the list.
+1. [Sign in to Azure AD](https://aad.portal.azure.com) with appropriate role permissions.
- ![find a role-assignable group to manage in PIM](./media/groups-assign-member-owner/groups-list-in-azure-ad.png)
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)** and view groups that are already enabled for PIM for Groups.
-1. Open the group and select **Privileged access (Preview)**.
+ :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
- ![Open the Privileged Identity Management experience](./media/groups-assign-member-owner/groups-discover-groups.png)
+1. Select the group you need to manage.
-1. Select the role that you want to update or remove.
+1. Select **Assignments**.
-1. Find the role assignment on the **Eligible roles** or **Active roles** tabs.
+1. Use the **Eligible assignments** and **Active assignments** blades to review existing membership or ownership assignments for the selected group.
- ![Update or remove role assignment](./media/groups-assign-member-owner/groups-bring-under-management.png)
+ :::image type="content" source="media/pim-for-groups/pim-group-3.png" alt-text="Screenshot of where to review existing membership or ownership assignments for selected group." lightbox="media/pim-for-groups/pim-group-3.png":::
-1. Select **Update** or **Remove** to update or remove the role assignment.
-
- For information about extending a role assignment, see [Extend or renew Azure resource roles in Privileged Identity Management](pim-resource-roles-renew-extend.md).
+1. Select **Update** or **Remove** to update or remove the membership or ownership assignment.
## Next steps -- [Extend or renew Azure resource roles in Privileged Identity Management](pim-resource-roles-renew-extend.md)-- [Configure Azure resource role settings in Privileged Identity Management](pim-resource-roles-configure-role-settings.md)-- [Assign Azure AD roles in Privileged Identity Management](pim-how-to-add-role-to-user.md)
+- [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md)
+- [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
+
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Title: View audit report for privileged access group assignments in Privileged Identity Management (PIM) - Azure AD | Microsoft Docs
-description: View activity and audit history for privileged access group assignments in Azure AD Privileged Identity Management (PIM).
+ Title: Audit activity history for group assignments (preview) in Privileged Identity Management - Azure Active Directory
+description: View activity and audit activity history for group assignments (preview) in Azure AD Privileged Identity Management (PIM).
documentationcenter: '' editor: ''- Previously updated : 06/24/2022 Last updated : 01/12/2023
-# Audit activity history for privileged access group assignments (preview) in Privileged Identity Management
+# Audit activity history for group assignments (preview) in Privileged Identity Management
-With Privileged Identity Management (PIM), you can view activity, activations, and audit history for Azure privileged access group members and owners within your organization in Azure Active Directory (Azure AD), part of Microsoft Entra.
+With Privileged Identity Management (PIM), you can view activity, activations, and audit history for group membership or ownership changes done through PIM for Groups within your organization in Azure Active Directory (Azure AD), part of Microsoft Entra.
> [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here.
-Follow these steps to view the audit history for privileged access groups.
+Follow these steps to view the audit history for groups in Privileged Identity Management.
## View resource audit history
-**Resource audit** gives you a view of all activity associated with your privileged access groups.
+**Resource audit** gives you a view of all activity associated with groups in PIM.
-1. Open **Azure AD Privileged Identity Management**.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Select **Privileged access groups (Preview)**.
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)**.
-1. Select the privileged access group you want to view audit history for.
+1. Select the group you want to view audit history for.
-1. Under **Activity**, select **Resource audit**.
+1. Select **Resource audit**.
-1. Filter the history using a predefined date or custom range.
+ :::image type="content" source="media/pim-for-groups/pim-group-19.png" alt-text="Screenshot of where to select Resource audit." lightbox="media/pim-for-groups/pim-group-19.png":::
- ![Resource audit list with filters](media/groups-audit/groups-resource-audit.png)
+1. Filter the history using a predefined date or custom range.
## View my audit
-**My audit** enables you to view your personal role activity for a privileged access group.
+**My audit** enables you to view your personal role activity for groups in PIM.
-1. Open **Azure AD Privileged Identity Management**.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Select **Privileged access groups (Preview)**.
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)**.
-1. Select the privileged access group you want to view audit history for.
+1. Select the group you want to view audit history for.
-1. Under **Activity**, select **My audit**.
+1. Select **My audit**.
-1. Filter the history using a predefined date or custom range.
+ :::image type="content" source="media/pim-for-groups/pim-group-20.png" alt-text="Screenshot of where to select My audit." lightbox="media/pim-for-groups/pim-group-20.png":::
- ![Audit list for the current user](media/azure-pim-resource-rbac/my-audit-time.png)
+1. Filter the history using a predefined date or custom range.
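+
+Audit events can also be pulled from the Azure AD audit log with Microsoft Graph. The sketch below filters directory audit events to those logged by the PIM service; the `loggedByService eq 'PIM'` filter value is an assumption, so adjust it to the service name your tenant reports.
+
+```python
+import requests
+
+GRAPH = "https://graph.microsoft.com/v1.0"
+TOKEN = "<access-token>"  # placeholder; the caller needs AuditLog.Read.All
+
+# List recent audit events recorded by the PIM service.
+params = {"$filter": "loggedByService eq 'PIM'", "$top": "25"}
+resp = requests.get(
+    f"{GRAPH}/auditLogs/directoryAudits",
+    headers={"Authorization": f"Bearer {TOKEN}"},
+    params=params,
+)
+resp.raise_for_status()
+
+for event in resp.json().get("value", []):
+    print(event["activityDateTime"], event["activityDisplayName"])
+```
+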
## Next steps -- [Assign group membership and ownership in Privileged Identity Management](groups-assign-member-owner.md)-- [Approve or deny requests for privileged access groups in PIM](groups-approval-workflow.md)-- [View audit history for Azure AD roles in PIM](groups-audit.md)
+- [Assign eligibility for a group (preview) in Privileged Identity Management](groups-assign-member-owner.md)
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
Title: Identify a group to manage in Privileged Identity Management - Azure AD | Microsoft Docs
-description: Learn how to onboard role-assignable groups to manage as privileged access groups in Privileged Identity Management (PIM).
+ Title: Bring groups into Privileged Identity Management (preview) - Azure Active Directory
+description: Learn how to bring groups into Privileged Identity Management (preview).
documentationcenter: ''
na Previously updated : 06/24/2022 Last updated : 01/12/2023 -+
-# Bring privileged access groups (preview) into Privileged Identity Management
+# Bring groups into Privileged Identity Management (preview)
-In Azure Active Directory (Azure AD), part of Microsoft Entra, you can assign Azure AD built-in roles to cloud groups to simplify how you manage role assignments. To protect Azure AD roles and to secure access, you can now use Privileged Identity Management (PIM) to manage just-in-time access for members or owners of these groups. To manage an Azure AD role-assignable group as a privileged access group in Privileged Identity Management, you must bring it under management in PIM.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privileged Identity Management (PIM) to manage just-in-time membership in a group or just-in-time ownership of a group. Groups can be used to provide access to Azure AD roles, Azure roles, and various other scenarios. To manage an Azure AD group in PIM, you must bring it under management in PIM.
## Identify groups to manage
-You can create a role-assignable group in Azure AD as described in [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md). You must be in the group Owner role, Global Administrator role, or Privileged Role Administrator role to bring the group under management with Privileged Identity Management.
+Before you start, you need an Azure AD security group or Microsoft 365 group. To learn more about group management in Azure AD, see [Manage Azure Active Directory groups and group membership](../fundamentals/how-to-manage-groups.md).
-1. [Sign in to Azure AD](https://aad.portal.azure.com) with appropriate role permissions.
+Dynamic groups and groups synchronized from an on-premises environment cannot be managed in PIM for Groups.
-1. Select **Groups** and then select the role-assignable group you want to manage in PIM. You can search and filter the list.
+To bring a group under management in PIM, you must be a group Owner, or have the Global Administrator or Privileged Role Administrator role.
- ![find a role-assignable group to manage in PIM](./media/groups-discover-groups/groups-list-in-azure-ad.png)
-1. Open the group and select **Privileged access (Preview)**.
+1. [Sign in to Azure AD](https://aad.portal.azure.com).
- ![Open the Privileged Identity Management experience](./media/groups-discover-groups/groups-discover-groups.png)
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)** and view groups that are already enabled for PIM for Groups.
-1. Start managing assignments in PIM.
+ :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
- ![Manage assignments in Privileged Identity Management](./media/groups-discover-groups/groups-bring-under-management.png)
+1. Select **Discover groups** and select a group that you want to bring under management with PIM.
+
+ :::image type="content" source="media/pim-for-groups/pim-group-2.png" alt-text="Screenshot of where to select a group that you want to bring under management with PIM." lightbox="media/pim-for-groups/pim-group-2.png":::
+
+1. Select **Manage groups** and **OK**.
+1. Select **Groups (Preview)** to return to the list of groups enabled in PIM for Groups.
++
+> [!NOTE]
+> Alternatively, you can use the Groups blade to bring a group under Privileged Identity Management.
> [!NOTE]
-> Once a privileged access group is managed, it can't be taken out of management. This prevents another resource administrator from removing Privileged Identity Management settings.
+> Once a group is managed, it can't be taken out of management. This prevents another resource administrator from removing PIM settings.
> [!IMPORTANT]
-> If a privileged access group is deleted from Azure Active Directory, it may take up to 24 hours for the group to be removed from the Privileged access groups (Preview) blade.
+> If a group is deleted from Azure AD, it may take up to 24 hours for the group to be removed from the PIM for Groups blades.
## Next steps -- [Configure privileged access group assignments in Privileged Identity Management](pim-resource-roles-configure-role-settings.md)-- [Assign privileged access groups in Privileged Identity Management](pim-resource-roles-assign-roles.md)
+- [Assign eligibility for a group (preview) in Privileged Identity Management](groups-assign-member-owner.md)
+- [Activate your group membership or ownership in Privileged Identity Management](groups-activate-roles.md)
+- [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
active-directory Groups Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-features.md
- Title: Managing Privileged Access groups in Privileged Identity Management (PIM) | Microsoft Docs
-description: How to manage members and owners of privileged access groups in Privileged Identity Management (PIM)
------- Previously updated : 08/15/2022-----
-#Customer intent: As a dev or IT admin, I want to manage group assignments in PIM, so that I can grant eligibility for elevation to a role assigned via group membership
--
-# Management capabilities for Privileged Access groups (preview)
-
-In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign built-in roles in Azure Active Directory (Azure AD), part of Microsoft Entra, to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
-
-> [!IMPORTANT]
-> To provide a group of users with just-in-time access to Azure AD directory roles with permissions in SharePoint, Exchange, or Security & Compliance Center (for example, Exchange Administrator role), be sure to make active assignments of users to the group, and then assign the group to a role as eligible for activation. If instead you make active assignment of a role to a group and assign users to be eligible to group membership, it might take significant time to have all permissions of the role activated and ready to use.
-
-> [!NOTE]
-> For privileged access groups that are used to elevate into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval might create a security risk from administrators who have a lower level of permissions. For example, the Helpdesk Administrator has permissions to reset an eligible user's password.
-
-## Require different policies for each role assignable group
-
-Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different privileged access groups with their own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned role.
-
-## Activate multiple role assignments in a single request
-
-With the privileged access groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 0 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. You can create a role-assignable group called "Tier 0 Office Admins", and make it eligible for assignment to the four roles previously mentioned (or any Azure AD built-in roles) and enable it for Privileged Access in the group's Activity section. Once enabled for privileged access, you can assign your admins and owners to the group. When the admins elevate the group into the roles, your staff will have permissions from all four Azure AD roles.
-
-## Extend and renew group assignments
-
-After you set up your time-bound owner or member assignments, the first question you might ask is what happens if an assignment expires? In this new version, we provide two options for this scenario:
-- Extend – When a role assignment nears expiration, the user can use Privileged Identity Management to request an extension for the role assignment- Renew – When a role assignment has already expired, the user can use Privileged Identity Management to request a renewal for the role assignment-
-Both user-initiated actions require an approval from a Global administrator or Privileged role administrator. Admins will no longer need to be in the business of managing these expirations. They can just wait for the extension or renewal requests and approve them if the request is valid.
-
-## Next steps
--- [Assign a privileged access group owner or member](groups-assign-member-owner.md)-- [Approve or deny activation requests for privileged access group members and owners](groups-approval-workflow.md)
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
Title: Renew expired group owner or member assignments in Privileged Identity Management - Azure AD | Microsoft Docs
-description: Learn how to extend or renew role-assignable group assignments in Azure AD Privileged Identity Management (PIM).
+ Title: Extend or renew PIM for groups assignments (preview) - Azure Active Directory
+description: Learn how to extend or renew PIM for groups assignments (preview).
documentationcenter: ''
na Previously updated : 06/24/2022 Last updated : 01/12/2023
-# Extend or renew privileged access group assignments (preview) in Privileged Identity Management
+# Extend or renew PIM for groups assignments (preview)
-Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, provides controls to manage the access and assignment lifecycle for privileged access groups. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
+Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, provides controls to manage the access and assignment lifecycle for group membership and ownership. Administrators can assign start and end date-time properties for group membership and ownership. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
## Who can extend and renew
-Only administrators of the resource can extend or renew privileged access group assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired.
+Only Global Administrators, Privileged Role Administrators, or group owners can extend or renew group membership/ownership time-bound assignments. The affected user or group can request to extend assignments that are about to expire and request to renew assignments that are already expired.
## When notifications are sent
Administrators receive notifications when a user or group requests to extend or
## Extend group assignments
-The following steps outline the process for requesting, resolving, or administering an extension or renewal of a group assignment.
+The following steps outline the process for requesting, resolving, or administering an extension or renewal of a group membership or ownership assignment.
### Self-extend expiring assignments
-Users assigned to a privileged access group can extend expiring group assignments directly from the **Eligible** or **Active** tab on the **Assignments** page for the group. Users or groups can request to extend eligible and active assignments that expire in the next 14 days.
+Users assigned group membership or ownership can extend expiring group assignments directly from the **Eligible** or **Active** tab on the **Assignments** page for the group. Users or groups can request to extend eligible and active assignments that expire in the next 14 days.
-![My roles page listing eligible assgnments with an Action column](media/groups-renew-extend/self-extend-group-assignment.png)
When the assignment end date-time is within 14 days, the **Extend** command is available. To request an extension of a group assignment, select **Extend** to open the request form.
-![Extend group assignment pane with a Reason box and details](media/groups-renew-extend/extend-request-details-group-assignment.png)
>[!NOTE] >We recommend including the details of why the extension is necessary, and for how long the extension should be granted (if you have this information).
-In a matter of moments, administrators receive an email notification requesting that they review the extension request. If a request to extend has already been submitted, an Azure notification appears in the portal.
+Administrators receive an email notification requesting that they review the extension request. If a request to extend has already been submitted, an Azure notification appears in the portal.
To view the status of or cancel your request, open the **Pending requests** page for the group assignment.
-![Privileged access group assignments - Pending requests page showing the link to Cancel](media/groups-renew-extend/group-assignment-extend-cancel-request.png)
### Admin approved extension
When a user or group submits a request to extend a group assignment, administrat
In addition to following the link from the email, administrators can approve or deny requests by going to the Privileged Identity Management administration portal and selecting **Approve requests** in the left pane.
-![Privileged access group assignments - Approve requests page listing requests and links to approve or deny](media/groups-renew-extend/group-assignment-extend-admin-approve.png)
When an Administrator selects **Approve** or **Deny**, the details of the request are shown, along with a field to provide a business justification for the audit logs.
-![Approve group assignment request with requestor reason, assignment type, start time, end time, and reason](media/groups-renew-extend/group-assignment-extend-admin-approve-reason.png)
When approving a request to extend a group assignment, resource administrators can choose a new start date, end date, and assignment type. Changing assignment type might be necessary if the administrator wants to provide limited access to complete a specific task (one day, for example). In this example, the administrator can change the assignment from **Eligible** to **Active**. This means they can provide access to the requestor without requiring them to activate.
If a user assigned to a group doesn't request an extension for the group assignm
To extend a group assignment, browse to the assignment view in Privileged Identity Management. Find the assignment that requires an extension. Then select **Extend** in the action column.
-![Assignments page listing eligible group assignments with links to extend](media/groups-renew-extend/group-assignment-extend-admin-approve.png)
## Renew group assignments
While conceptually similar to the process for requesting an extension, the proce
Users who can no longer access resources can access up to 30 days of expired assignment history. To do this, they browse to **My Roles** in the left pane, and then select the **Expired assignments** tab.
-![My roles page - Expired assignments tab](media/groups-renew-extend/groups-renew-from-my-roles.png)
- The list of assignments shown defaults to **Eligible assignments**. Use the drop-down menu to toggle between Eligible and Active assignments. To request renewal for any of the group assignments in the list, select the **Renew** action. Then provide a reason for the request. It's helpful to provide a duration in addition to any additional context or a business justification that can help the resource administrator decide to approve or deny.
-![Renew group assignment pane showing Reason box](media/groups-renew-extend/groups-renew-request-form.png)
- After the request has been submitted, resource administrators are notified of a pending request to renew a group assignment. ### Admin approves
When approving a request to renew a group assignment, resource administrators mu
## Next steps -- [Approve or deny requests for privileged access group assignments in Privileged Identity Management](groups-approval-workflow.md)-- [Configure privileged access group settings in Privileged Identity Management](groups-role-settings.md)
+- [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
+- [Configure PIM for Groups settings (preview)](groups-role-settings.md)
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
Title: Configure privileged access groups settings in PIM - Azure Active Directory | Microsoft Docs
-description: Learn how to configure role-assignable groups settings in Azure AD Privileged Identity Management (PIM).
+ Title: Configure PIM for Groups settings (preview) - Azure Active Directory
+description: Learn how to configure PIM for Groups settings (preview).
documentationcenter: ''
na Previously updated : 06/24/2022 Last updated : 01/12/2023
-# Configure privileged access group settings (preview) in Privileged Identity Management
+# Configure PIM for Groups settings (preview)
-Role settings are the default settings that are applied to group owner and group member privileged access assignments in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. Use the following steps to set up the approval workflow to specify who can approve or deny requests to elevate privilege.
+In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership/ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and so on. Use the following steps to configure role settings – that is, to set up the approval workflow and specify who can approve or deny requests to elevate privilege.
-## Open role settings
+You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership/ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member/owner) for the same group follow the same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner).
-Follow these steps to open the settings for an Azure privileged access group role.
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user in the [Global Administrator](../roles/permissions-reference.md#global-administrator) role, the Privileged Role Administrator role, or the group Owner role.
+## Update role settings
-1. Open **Azure AD Privileged Identity Management**.
+Follow these steps to open the settings for a group role.
-1. Select **Privileged access (Preview)**.
- >[!NOTE]
- > Approver doesn't have to be member of the group, owner of the group or have Azure AD role assigned.
+1. [Sign in to Azure AD portal](https://aad.portal.azure.com).
-1. Select the group that you want to manage.
+1. Select **Azure AD Privileged Identity Management -> Groups (Preview)**.
- ![Privileged access groups filtered by a group name](./media/groups-role-settings/group-select.png)
+1. Select the group that you want to configure role settings for.
1. Select **Settings**.
- ![Settings page listing group settings for the selected group](./media/groups-role-settings/group-settings-select-role.png)
+1. Select the role you need to configure role settings for – **Member** or **Owner**.
-1. Select the Owner or Member role whose settings you want to view or change. You can view the current settings for the role in the **Role setting details** page.
+ :::image type="content" source="media/pim-for-groups/pim-group-17.png" alt-text="Screenshot of where to select the role you need to configure role settings for." lightbox="media/pim-for-groups/pim-group-17.png":::
- ![Role setting details page listing several assignment and activation settings](./media/groups-role-settings/group-role-setting-details.png)
+1. Review current role settings.
-1. Select **Edit** to open the **Edit role setting** page. The **Activation** tab allows you to change the role activation settings, including whether to allow permanent eligible and active assignments.
+1. Select **Edit** to update role settings.
- ![Edit role settings page with Activation tab open](./media/groups-role-settings/role-settings-activation-tab.png)
+ :::image type="content" source="media/pim-for-groups/pim-group-18.png" alt-text="Screenshot of where to select Edit to update role settings." lightbox="media/pim-for-groups/pim-group-18.png":::
-1. Select the **Assignment** tab to open the assignment settings tab. These settings control the Privileged Identity Management assignment settings for this role.
+1. Once finished, select **Update**.
- ![Role Assignment tab in role settings page](./media/groups-role-settings/role-settings-assignment-tab.png)
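The same role settings can also be inspected outside the portal. The following sketch is illustrative only; it assumes the Microsoft Graph `roleManagementPolicies` endpoint accepts the `Group` scope used by PIM for Groups (beta at the time of this preview) and uses placeholder values you supply.

```python
import requests

GRAPH_TOKEN = "<access-token>"   # placeholder
GROUP_ID = "<group-object-id>"   # placeholder

# Assumption: role settings for PIM for Groups surface as role management
# policies scoped to the group, one policy per role (member/owner).
params = {
    "$filter": f"scopeId eq '{GROUP_ID}' and scopeType eq 'Group'",
    "$expand": "rules",
}
resp = requests.get(
    "https://graph.microsoft.com/beta/policies/roleManagementPolicies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    params=params,
)
resp.raise_for_status()

for policy in resp.json().get("value", []):
    print(policy["id"])
    for rule in policy.get("rules", []):
        # Rules cover the expiration, enablement (MFA/justification),
        # approval, and notification settings described in this article.
        print("  ", rule.get("id"))
```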
+## Role settings
-1. Use the **Notification** tab or the **Next: Activation** button at the bottom of the page to get to the notification setting tab for this role. These settings control all the email notifications related to this role.
+### Activation maximum duration
- ![Role Notifications tab in role settings page](./media/groups-role-settings/role-settings-notification-tab.png)
+Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
-1. Select the **Update** button at any time to update the role settings.
+### Require multi-factor authentication (MFA) on activation
-In the **Notifications** tab on the role settings page, Privileged Identity Management enables granular control over who receives notifications and which notifications they receive.
+You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
-- **Turning off an email**<br>You can turn off specific emails by clearing the default recipient check box and deleting any other recipients. -- **Limit emails to specified email addresses**<br>You can turn off emails sent to default recipients by clearing the default recipient check box. You can then add other email addresses as recipients. If you want to add more than one email address, separate them using a semicolon (;).-- **Send emails to both default recipients and more recipients**<br>You can send emails to both default recipient and another recipient by selecting the default recipient checkbox and adding email addresses for other recipients.-- **Critical emails only**<br>For each type of email, you can select the check box to receive critical emails only. What this means is that Privileged Identity Management will continue to send emails to the specified recipients only when the email requires an immediate action. For example, emails asking users to extend their role assignment will not be triggered while an emails requiring admins to approve an extension request will be triggered.
+Users may not be prompted for multi-factor authentication if they authenticated with strong credentials or provided multi-factor authentication earlier in the session.
+
+For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
+
+### Require justification on activation
+
+You can require that users enter a business justification when they activate the eligible assignment.
-## Assignment duration
+### Require ticket information on activation
+
+You can require that users enter a support ticket when they activate the eligible assignment. This is an information-only field, and correlation with information in any ticketing system is not enforced.
+
+### Require approval to activate
+
+You can require approval for activation of an eligible assignment. The approver doesn't have to be a group member or owner. When using this option, you must select at least one approver (we recommend selecting at least two approvers); there are no default approvers.
+
+To learn more about approvals, see [Approve activation requests for privileged access group members and owners (preview)](groups-approval-workflow.md).
+
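As an illustration of how justification, ticket information, and approval come together at activation time, the hedged sketch below submits a self-activation request through Microsoft Graph. The endpoint and property names follow the PIM for Groups beta schema as an assumption; the token, IDs, and values are placeholders you supply.

```python
import datetime
import requests

GRAPH_TOKEN = "<access-token>"        # placeholder
GROUP_ID = "<group-object-id>"        # placeholder
USER_ID = "<your-user-object-id>"     # placeholder

body = {
    "accessId": "member",
    "principalId": USER_ID,
    "groupId": GROUP_ID,
    "action": "selfActivate",
    # Justification and ticket info are only required if the role settings
    # described above enforce them.
    "justification": "Investigating incident 1234",
    "ticketInfo": {"ticketNumber": "1234", "ticketSystem": "ServiceNow"},
    "scheduleInfo": {
        "startDateTime": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
        # Duration must fit within the activation maximum duration setting.
        "expiration": {"type": "afterDuration", "duration": "PT4H"},
    },
}

resp = requests.post(
    "https://graph.microsoft.com/beta/identityGovernance/privilegedAccess/group/assignmentScheduleRequests",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=body,
)
resp.raise_for_status()
# When approval is required, the request stays pending until an approver acts.
print(resp.json().get("status"))
```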
+### Assignment duration
You can choose from two assignment duration options for each assignment type (eligible and active) when you configure settings for a role. These options become the default maximum duration when a user is assigned to the role in Privileged Identity Management.
And, you can choose one of these **active** assignment duration options:
> [!NOTE] > All assignments that have a specified end date can be renewed by resource administrators. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
-## Require multifactor authentication
-
-Privileged Identity Management provides optional enforcement of Azure AD Multi-Factor Authentication for two distinct scenarios.
-
-### Require multifactor authentication on active assignment
-
-This option requires admins must complete multifactor authentication before creating an active (as opposed to eligible) role assignment. Privileged Identity Management can't enforce multifactor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
-
-To require multifactor authentication when creating an active role assignment, select the **Require Multi-Factor Authentication on active assignment** check box.
+### Require multi-factor authentication on active assignment
-### Require multifactor authentication on activation
+You can require that an administrator or group owner provides multi-factor authentication when they create an active (as opposed to eligible) assignment. Privileged Identity Management can't enforce multi-factor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
+Users may not be prompted for multi-factor authentication if they authenticated with strong credentials or provided multi-factor authentication earlier in the session.
-You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multifactor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
+### Require justification on active assignment
-To require multifactor authentication before activation, check the **Require Multi-Factor Authentication on activation** box.
+You can require that users enter a business justification when they create an active (as opposed to eligible) assignment.
-For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
-
-## Activation maximum duration
-
-Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
-
-## Require justification
-
-You can require that users enter a business justification when they activate. To require justification, check the **Require justification on active assignment** box or the **Require justification on activation** box.
-
-## Require approval to activate
-
-If you want to require approval to activate a role, follow these steps.
-
-1. Check the **Require approval to activate** check box.
-
-1. Select **Select approvers** to open the **Select a member or group** page.
-
- ![Select a user or group pane to select approvers](./media/groups-role-settings/group-settings-select-approvers.png)
-
-1. Select at least one user or group and then click **Select**. You can add any combination of users and groups. You must select at least one approver. There are no default approvers.
-
- Your selections will appear in the list of selected approvers.
+In the **Notifications** tab on the role settings page, Privileged Identity Management enables granular control over who receives notifications and which notifications they receive.
-1. Once you have specified your all your role settings, select **Update** to save your changes.
+- **Turning off an email**<br>You can turn off specific emails by clearing the default recipient check box and deleting any other recipients.
+- **Limit emails to specified email addresses**<br>You can turn off emails sent to default recipients by clearing the default recipient check box. You can then add other email addresses as recipients. If you want to add more than one email address, separate them using a semicolon (;).
+- **Send emails to both default recipients and more recipients**<br>You can send emails to both the default recipients and additional recipients by selecting the default recipient check box and adding email addresses for other recipients.
+- **Critical emails only**<br>For each type of email, you can select the check box to receive critical emails only. What this means is that Privileged Identity Management will continue to send emails to the specified recipients only when the email requires an immediate action. For example, emails asking users to extend their role assignment will not be triggered while an email requiring admins to approve an extension request will be triggered.
## Next steps -- [Assign privileged access group membership or ownership in PIM](groups-assign-member-owner.md)
+- [Assign eligibility for a group (preview) in Privileged Identity Management](groups-assign-member-owner.md)
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
Follow these tasks to prepare PIM to manage privileged access groups.
It may be the case that an individual has five or six eligible assignments to Azure AD roles through PIM. They will have to activate each role individually, which can reduce productivity. Worse still, they can also have tens or hundreds of Azure resources assigned to them, which aggravates the problem.
-In this case, you should use privileged access groups. Create a privileged access group and grant it permanent active access to multiple roles. See [privileged access groups management capabilities](groups-features.md).
+In this case, you should use privileged access groups. Create a privileged access group and grant it permanent active access to multiple roles. See [Privileged Identity Management (PIM) for Groups (preview)](concept-pim-for-groups.md).
To manage an Azure AD role-assignable group as a privileged access group, you must [bring it under management in PIM](groups-discover-groups.md).
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 11/04/2022 Last updated : 01/23/2023 - # Provisioning logs in Azure Active Directory
The **Identity** filter enables you to specify the name or the identity that you
You can search by the name or ID of the object. The ID varies by scenario. - If you're provisioning an object *from Azure AD to Salesforce*, the **source ID** is the object ID of the user in Azure AD. The **target ID** is the ID of the user at Salesforce. - If you're provisioning *from Workday to Azure AD*, the **source ID** is the Workday worker employee ID. The **target ID** is the ID of the user in Azure AD.
+- If you're provisioning users for [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md), the **source ID** is the ID of the user in the source tenant. The **target ID** is the ID of the user in the target tenant.
> [!NOTE] > The name of the user might not always be present in the **Identity** column. There will always be one ID.
In addition to the filters of the default view, you can set the following filter
- **Target System**: You can specify where the identity is getting provisioned to. For example, when you're provisioning an object from Azure AD to ServiceNow, the target system is ServiceNow. -- **Application**: You can show only records of applications with a display name that contains a specific string.
+- **Application**: You can show only records of applications with a display name or object ID that contains a specific string. For [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md), use the object ID of the configuration and not the application ID.
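The same filters are available when pulling provisioning logs programmatically. The following sketch assumes the Microsoft Graph `auditLogs/provisioning` endpoint and the filter property names shown; treat them as illustrative and confirm against the provisioningObjectSummary reference. The token and IDs are placeholders.

```python
import requests

GRAPH_TOKEN = "<access-token>"     # placeholder
TARGET_ID = "<target-object-id>"   # placeholder, e.g. the user's ID in the target system

# Assumption: the identity filter maps to the targetIdentity/id property path.
params = {
    "$filter": f"targetIdentity/id eq '{TARGET_ID}'",
    "$top": "20",
}
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    params=params,
)
resp.raise_for_status()

for event in resp.json().get("value", []):
    print(
        event.get("activityDateTime"),
        event.get("sourceIdentity", {}).get("id"),
        "->",
        event.get("targetIdentity", {}).get("id"),
    )
```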
## Analyze the provisioning logs
Use the following table to better understand how to resolve errors that you find
|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|
+## Error codes for cross-tenant synchronization
+
+Use the following table to better understand how to resolve errors that you find in the provisioning logs for [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). For any error codes that are missing, provide feedback by using the link at the bottom of this page.
+
+> [!div class="mx-tableFixed"]
+> | Error code | Cause | Solution |
+> | | | |
+> | AzureActiveDirectoryCannotUpdateObjectsOriginatedInExternalService | The synchronization engine could not update one or more user properties in the target tenant.<br/><br/>The operation failed in Microsoft Graph API because of Source of Authority (SOA) enforcement. Currently, the following properties show up in the list:<br/>`Mail`<br/>`showInAddressList` | In some cases (for example when `showInAddressList` property is part of the user update), the synchronization engine might automatically retry the (user) update without the offending property. Otherwise, you will need to update the property directly in the target tenant. |
+> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). |
+> | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
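To spot these errors in bulk rather than reading one record at a time, a sketch like the one below pulls only failed provisioning events and prints the error code so it can be matched against the table above. The filter expression and the `provisioningStatusInfo`/`errorInformation` property names are assumptions based on the provisioningObjectSummary schema; the token is a placeholder.

```python
import requests

GRAPH_TOKEN = "<access-token>"  # placeholder

# Assumption: failed events can be filtered on provisioningStatusInfo/status.
params = {"$filter": "provisioningStatusInfo/status eq 'failure'", "$top": "50"}
resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    params=params,
)
resp.raise_for_status()

for event in resp.json().get("value", []):
    error = event.get("provisioningStatusInfo", {}).get("errorInformation") or {}
    # Match errorCode against the error-code tables in this article.
    print(event.get("activityDateTime"), error.get("errorCode"), "-", error.get("reason"))
```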
+ ## Next steps * [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md)
active-directory Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/best-practices.md
Microsoft recommends that you keep two break glass accounts that are permanently
## 6. Use groups for Azure AD role assignments and delegate the role assignment
-If you have an external governance system that takes advantage of groups, then you should consider assigning roles to Azure AD groups, instead of individual users. You can also manage role-assignable groups in PIM to ensure that there are no standing owners or members in these privileged groups. For more information, see [Management capabilities for privileged access Azure AD groups](../privileged-identity-management/groups-features.md).
+If you have an external governance system that takes advantage of groups, then you should consider assigning roles to Azure AD groups, instead of individual users. You can also manage role-assignable groups in PIM to ensure that there are no standing owners or members in these privileged groups. For more information, see [Privileged Identity Management (PIM) for Groups (preview)](../privileged-identity-management/concept-pim-for-groups.md).
You can assign an owner to role-assignable groups. That owner decides who is added to or removed from the group, so indirectly, decides who gets the role assignment. In this way, a Global Administrator or Privileged Role Administrator can delegate role management on a per-role basis by using groups. For more information, see [Use Azure AD groups to manage role assignments](groups-concept.md).
You can assign an owner to role-assignable groups. That owner decides who is add
It may be the case that an individual has five or six eligible assignments to Azure AD roles through PIM. They will have to activate each role individually, which can reduce productivity. Worse still, they can also have tens or hundreds of Azure resources assigned to them, which aggravates the problem.
-In this case, you should use [privileged access groups](../privileged-identity-management/groups-features.md). Create a privileged access group and grant it permanent access to multiple roles (Azure AD and/or Azure). Make that user an eligible member or owner of this group. With just one activation, they will have access to all the linked resources.
+In this case, you should use [Privileged Identity Management (PIM) for Groups (preview)](../privileged-identity-management/concept-pim-for-groups.md). Create a privileged access group and grant it permanent access to multiple roles (Azure AD and/or Azure). Make that user an eligible member or owner of this group. With just one activation, they will have access to all the linked resources.
![Privileged access group diagram showing activating multiple roles at once](./media/best-practices/privileged-access-group.png)
active-directory 15Five Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/15five-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in 15Five and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [15Five](https://www.15five.com/pricing/). For important details on what this service does, how it works, and frequently asked questions, see Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory.
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Capabilities supported > [!div class="checklist"] > * Create users in 15Five
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md).
active-directory 4Me Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/4me-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in 4m
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Airstack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airstack-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Ai
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Bitabiz Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bitabiz-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Bi
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Blink Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blink-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Bl
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Brivo Onair Identity Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brivo-onair-identity-connector-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Br
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Comeet Recruiting Software Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/comeet-recruiting-software-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Co
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Druva Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/druva-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Dr
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Dynamic Signal Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dynamic-signal-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Dy
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Federated Directory Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/federated-directory-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Fe
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Figma Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/figma-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Fi
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Flock Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flock-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Fl
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Fuze Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuze-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Fuze and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [Fuze](https://www.fuze.com/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Capabilities supported > [!div class="checklist"] > * Create users in Fuze
active-directory Ideo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideo-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in IDEO and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to IDEO. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Capabilities supported > [!div class="checklist"] > * Create users in IDEO
active-directory Infor Cloudsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in In
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Ipass Smartconnect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ipass-smartconnect-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in iP
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Keeper Password Manager Digitalvault Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keeper-password-manager-digitalvault-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Ke
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Looop Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/looop-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Lo
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Me
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Mindtickle Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindtickle-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Mi
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Miro Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/miro-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Mi
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Mypolicies Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mypolicies-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in my
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Netskope Administrator Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-administrator-console-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Ne
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Officespace Software Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/officespace-software-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Of
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Priority Matrix Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/priority-matrix-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Pr
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Promapp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/promapp-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Pr
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Proxyclick Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proxyclick-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Pr
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Rfpio Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rfpio-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in RF
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Robin Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/robin-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Ro
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Sap Cloud Platform Identity Authentication Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in SA
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Signagelive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signagelive-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Si
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Smartfile Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartfile-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Sm
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Smartsheet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Sm
> * Keep user attributes synchronized between Azure AD and Smartsheet > * Single sign-on to Smartsheet (recommended)
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites:
active-directory Soloinsight Cloudgate Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soloinsight-cloudgate-sso-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in So
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Spaceiq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spaceiq-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Sp
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Storegate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/storegate-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in St
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Surveymonkey Enterprise Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/surveymonkey-enterprise-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure SurveyMonkey Enterprise for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to SurveyMonkey Enterprise.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 50c400a2-8dd9-41ba-b11d-b1516b9d2967
+++
+ms.devlang: na
+ Last updated : 01/19/2023+++
+# Tutorial: Configure SurveyMonkey Enterprise for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both SurveyMonkey Enterprise and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [SurveyMonkey Enterprise](https://www.surveymonkey.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in SurveyMonkey Enterprise.
+> * Remove users in SurveyMonkey Enterprise when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and SurveyMonkey Enterprise.
+> * [Single sign-on](surveymonkey-enterprise-tutorial.md) to SurveyMonkey Enterprise (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in SurveyMonkey Enterprise with Admin or Primary Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and SurveyMonkey Enterprise](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure SurveyMonkey Enterprise to support provisioning with Azure AD
+
+### Set Up SCIM Provisioning
+Only the Primary Admin can set up SCIM provisioning for your organization. To make sure SCIM is a good fit for your IdP, the Primary Admin should check in with their SurveyMonkey Customer Success Manager (CSM) and their organization's IT department.
+
+Once the team is aligned, the Primary Admin can:
+
+1. Go to [**Settings**](https://www.surveymonkey.com/team/settings/).
+1. Select **User provisioning with SCIM**.
+1. Copy the SCIM endpoint link and provide it to your IT partner.
+1. Select **Generate token**. Treat this unique token as you would your Primary Admin password and only give it to your IT partner.
+
+Your organization's IT partner will use the SCIM endpoint link and access token during setup of the IdP. They will also need to adjust the default mapping for your team's needs.
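
For illustration only, here's a minimal sketch of how an IT partner might smoke-test the SCIM endpoint and token with a generic SCIM 2.0 request. The endpoint URL below is a placeholder, not SurveyMonkey's documented value; substitute the link and token copied from **Settings**.

```console
# Placeholder values -- copy the real SCIM endpoint link and access token
# from Settings > User provisioning with SCIM (these values are assumptions)
SCIM_ENDPOINT="https://example.surveymonkey.com/scim/v2"
SCIM_TOKEN="<access-token>"

# Generic SCIM 2.0 request: list one user to confirm the endpoint and token work
curl -s -H "Authorization: Bearer $SCIM_TOKEN" \
     -H "Accept: application/scim+json" \
     "$SCIM_ENDPOINT/Users?startIndex=1&count=1"
```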
+
+### Revoke SCIM Provisioning
+If you need to disconnect SurveyMonkey from your IdP so the systems no longer sync, the Primary Admin can revoke SCIM provisioning. As long as SSO is enabled, there's no impact on users who have already been synced.
+
+To revoke the SCIM provisioning:
+
+1. Go to [**Settings**](https://www.surveymonkey.com/team/settings/).
+1. Select **User provisioning with SCIM**.
+1. Next to the access token, select **Revoke**.
+
+## Step 3. Add SurveyMonkey Enterprise from the Azure AD application gallery
+
+Add SurveyMonkey Enterprise from the Azure AD application gallery to start managing provisioning to SurveyMonkey Enterprise. If you have previously set up SurveyMonkey Enterprise for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to SurveyMonkey Enterprise
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in SurveyMonkey Enterprise based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for SurveyMonkey Enterprise in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **SurveyMonkey Enterprise**.
+
+ ![Screenshot of the SurveyMonkey Enterprise link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your SurveyMonkey Enterprise Tenant URL and corresponding Secret Token. Select **Test Connection** to ensure Azure AD can connect to SurveyMonkey Enterprise.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SurveyMonkey Enterprise**.
+
+1. Review the user attributes that are synchronized from Azure AD to SurveyMonkey Enterprise in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SurveyMonkey Enterprise for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the SurveyMonkey Enterprise API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by SurveyMonkey Enterprise|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||&check;
+ |active|Boolean|||
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |externalId|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for SurveyMonkey Enterprise, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to SurveyMonkey Enterprise by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Templafy Openid Connect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-openid-connect-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Te
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Templafy Saml 2 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-saml-2-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Te
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Theorgwiki Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/theorgwiki-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Th
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Visitly Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visitly-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps you perform in Visitl
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD user provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for preview features, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Workgrid Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workgrid-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Wo
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Workteam Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workteam-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Wo
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Wrike Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wrike-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps you perform in Wrike
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD user provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for preview features, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
active-directory Zenya Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Zenya and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [Zenya](https://www.infoland.nl/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). Before you attempt to use this tutorial, be sure that you know and meet all requirements. If you have questions, contact Infoland.
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- ## Capabilities supported > * Create users in Zenya > * Remove/disable users in Zenya when they do not require access anymore
active-directory Zscaler Private Access Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-private-access-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Zs
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
spec:
To create the issuer, use the `kubectl apply` command. ```console
-kubectl apply -f cluster-issuer.yaml
+kubectl apply -f cluster-issuer.yaml --namespace ingress-basic
``` ## Update your ingress routes
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
Title: Use Azure AD in Azure Kubernetes Service
description: Learn how to use Azure AD in Azure Kubernetes Service (AKS) Previously updated : 10/20/2021 Last updated : 01/23/2023 # AKS-managed Azure Active Directory integration
-AKS-managed Azure AD integration simplifies the Azure AD integration process. Previously, users were required to create a client and server app, and required the Azure AD tenant to grant Directory Read permissions. In the new version, the AKS resource provider manages the client and server apps for you.
+AKS-managed Azure Active Directory (Azure AD) integration simplifies the Azure AD integration process. Previously, you were required to create a client and server app, and the Azure AD tenant had to grant Directory Read permissions. Now, the AKS resource provider manages the client and server apps for you.
## Azure AD authentication overview Cluster administrators can configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][open-id-connect].
-Learn more about the Azure AD integration flow on the [Azure Active Directory integration concepts documentation](concepts-identity.md#azure-ad-integration).
+Learn more about the Azure AD integration flow in the [Azure AD documentation](concepts-identity.md#azure-ad-integration).
-## Limitations
+## Limitations
-* AKS-managed Azure AD integration can't be disabled
-* Changing a AKS-managed Azure AD integrated cluster to legacy AAD is not supported
-* Clusters without Kubernetes RBAC enabled aren't supported for AKS-managed Azure AD integration
+* AKS-managed Azure AD integration can't be disabled.
+* Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD is not supported.
+* Clusters without Kubernetes RBAC enabled aren't supported with AKS-managed Azure AD integration.
## Prerequisites
-* The Azure CLI version 2.29.0 or later
-* Kubectl with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [kubelogin](https://github.com/Azure/kubelogin)
-* If you are using [helm](https://github.com/helm/helm), minimum version of helm 3.3.
+Before getting started, make sure you have the following prerequisites:
-> [!Important]
-> You must use Kubectl with a minimum version of 1.18.1 or kubelogin. The difference between the minor versions of Kubernetes and kubectl should not be more than 1 version. If you don't use the correct version, you will notice authentication issues.
+* Azure CLI version 2.29.0 or later.
+* `kubectl`, with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin).
+* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
-To install kubectl and kubelogin, use the following commands:
+> [!IMPORTANT]
+> You must use `kubectl` with a minimum version of 1.18.1 or `kubelogin`. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than 1 version. You'll experience authentication issues if you don't use the correct version.
+
+Use the following commands to install kubectl and kubelogin:
```azurecli-interactive sudo az aks install-cli
Use [these instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl/
## Before you begin
-For your cluster, you need an Azure AD group. This group will be registered as an admin group on the cluster to grant cluster admin permissions. You can use an existing Azure AD group, or create a new one. Record the object ID of your Azure AD group.
+You need an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant cluster admin permissions. You can use an existing Azure AD group or create a new one. Make sure to record the object ID of your Azure AD group.
```azurecli-interactive # List existing groups in the directory az ad group list --filter "displayname eq '<group-name>'" -o table ```
-To create a new Azure AD group for your cluster administrators, use the following command:
+Use the following command to create a new Azure AD group for your cluster administrators:
```azurecli-interactive # Create an Azure AD group
az ad group create --display-name myAKSAdminGroup --mail-nickname myAKSAdminGrou
## Create an AKS cluster with Azure AD enabled
-Create an AKS cluster by using the following CLI commands.
-
-Create an Azure resource group:
+1. Create an Azure resource group.
```azurecli-interactive # Create an Azure resource group az group create --name myResourceGroup --location centralus ```
-Create an AKS cluster, and enable administration access for your Azure AD group
+1. Create an AKS cluster and enable administration access for your Azure AD group.
```azurecli-interactive # Create an AKS-managed Azure AD cluster az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>] ```
-A successful creation of an AKS-managed Azure AD cluster has the following section in the response body
+A successful creation of an AKS-managed Azure AD cluster has the following section in the response body:
+ ```output "AADProfile": { "adminGroupObjectIds": [
A successful creation of an AKS-managed Azure AD cluster has the following secti
} ```
-Once the cluster is created, you can start accessing it.
- ## Access an Azure AD enabled cluster Before you access the cluster using an Azure AD defined group, you'll need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role.
-Get the user credentials to access the cluster:
-
+1. Get the user credentials to access the cluster.
+ ```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myManagedCluster ```
-Follow the instructions to sign in.
-Use the kubectl get nodes command to view nodes in the cluster:
+1. Follow the instructions to sign in.
+
+1. Use the `kubectl get nodes` command to view nodes in the cluster.
```azurecli-interactive kubectl get nodes
aks-nodepool1-15306047-0 Ready agent 102m v1.15.10
aks-nodepool1-15306047-1 Ready agent 102m v1.15.10 aks-nodepool1-15306047-2 Ready agent 102m v1.15.10 ```
-Configure [Azure role-based access control (Azure RBAC)](./azure-ad-rbac.md) to configure additional security groups for your clusters.
+
+1. Configure [Azure role-based access control (Azure RBAC)](./azure-ad-rbac.md) to configure other security groups for your clusters.
## Troubleshooting access issues with Azure AD
-> [!Important]
-> The steps described below are bypassing the normal Azure AD group authentication. Use them only in an emergency.
+> [!IMPORTANT]
+> The steps described in this section bypass the normal Azure AD group authentication. Use them only in an emergency.
If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly.
-To do these steps, you'll need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
+To do these steps, you need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin ```
-## Enable AKS-managed Azure AD Integration on your existing cluster
+## Enable AKS-managed Azure AD integration on your existing cluster
-You can enable AKS-managed Azure AD Integration on your existing Kubernetes RBAC enabled cluster. Ensure to set your admin group to keep access on your cluster.
+You can enable AKS-managed Azure AD integration on your existing Kubernetes RBAC-enabled cluster. Make sure to set your admin group to keep access to your cluster.
```azurecli-interactive az aks update -g MyResourceGroup -n MyManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>] ```
-A successful activation of an AKS-managed Azure AD cluster has the following section in the response body
+A successful activation of an AKS-managed Azure AD cluster has the following section in the response body:
```output "AADProfile": {
A successful activation of an AKS-managed Azure AD cluster has the following sec
Download user credentials again to access your cluster by following the steps [here][access-cluster].
-## Upgrading to AKS-managed Azure AD Integration
+## Upgrading to AKS-managed Azure AD integration
-If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD Integration.
+If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration by running the following command:
```azurecli-interactive az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>] ```
-A successful migration of an AKS-managed Azure AD cluster has the following section in the response body
+A successful migration of an AKS-managed Azure AD cluster has the following section in the response body:
```output "AADProfile": {
A successful migration of an AKS-managed Azure AD cluster has the following sect
} ```
-Update kubeconfig in order to access the cluster, follow the steps [here][access-cluster].
+To access the cluster, follow the steps [here][access-cluster] to update your kubeconfig.
## Non-interactive sign in with kubelogin
-There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with kubectl. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to access the cluster with non-interactive service principal sign-in.
+There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential.
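
As a rough sketch (not a snippet from this article), a service principal sign-in with `kubelogin` in a pipeline might look like the following; the client ID and secret are placeholders you'd replace with your own service principal credential.

```azurecli-interactive
# Download a kubeconfig, then convert it to use service principal (spn) login
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
kubelogin convert-kubeconfig -l spn

# kubelogin reads the service principal credential from these environment variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>

# Subsequent kubectl calls authenticate non-interactively
kubectl get nodes
```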
## Disable local accounts
-When deploying an AKS Cluster, local accounts are enabled by default. Even when enabling RBAC or Azure Active Directory integration, `--admin` access still exists, essentially as a non-auditable backdoor option. With this in mind, AKS offers users the ability to disable local accounts via a flag, `disable-local-accounts`. A field, `properties.disableLocalAccounts`, has also been added to the managed cluster API to indicate whether the feature has been enabled on the cluster.
-
-> [!NOTE]
-> On clusters with Azure AD integration enabled, users belonging to a group specified by `aad-admin-group-object-ids` will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to true, obtaining both user and admin credentials will fail.
+When you deploy an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, `--admin` access still exists as a non-auditable backdoor option. You can disable local accounts using the `disable-local-accounts` parameter. The `properties.disableLocalAccounts` field has been added to the managed cluster API to indicate whether the feature is enabled on the cluster.
> [!NOTE]
-> After disabling local accounts users on an already existing AKS cluster where users might have used local account/s, admin must [rotate the cluster certificates](certificate-rotation.md), in order to revoke the certificates those users might have access to. If this is a new cluster then no action is required.
+>
+> * On clusters with Azure AD integration enabled, users assigned to an Azure AD administrators group specified by `aad-admin-group-object-ids` can still gain access using non-administrator credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to `true`, any attempt to authenticate with user or admin credentials will fail.
+>
+> * After disabling local user accounts on an existing AKS cluster where users might have authenticated with local accounts, the administrator must [rotate the cluster certificates](certificate-rotation.md) to revoke certificates they might have had access to. If this is a new cluster, no action is required.
### Create a new cluster without local accounts
-To create a new AKS cluster without any local accounts, use the [az aks create][az-aks-create] command with the `disable-local-accounts` flag:
+To create a new AKS cluster without any local accounts, use the [`az aks create`][az-aks-create] command with the `disable-local-accounts` flag.
```azurecli-interactive az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts ```
-In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
+In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to `true`.
```output "properties": {
Attempting to get admin credentials will fail with an error message indicating t
```azurecli-interactive az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
-Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.
+Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
``` ### Disable local accounts on an existing cluster
-To disable local accounts on an existing AKS cluster, use the [az aks update][az-aks-update] command with the `disable-local-accounts` flag:
+To disable local accounts on an existing AKS cluster, use the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
```azurecli-interactive az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts ```
-In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
+In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to `true`.
```output "properties": {
Attempting to get admin credentials will fail with an error message indicating t
```azurecli-interactive az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
-Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.
+Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
``` ### Re-enable local accounts on an existing cluster
-AKS also offers the ability to re-enable local accounts on an existing cluster with the `enable-local` flag:
+AKS supports re-enabling disabled local accounts on an existing cluster with the `enable-local` parameter.
```azurecli-interactive az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --enable-local ```
-In the output, confirm local accounts have been re-enabled by checking the field `properties.disableLocalAccounts` is set to false:
+In the output, confirm local accounts have been re-enabled by checking the field `properties.disableLocalAccounts` is set to `false`.
```output "properties": {
When integrating Azure AD with your AKS cluster, you can also use [Conditional A
> [!NOTE] > Azure AD Conditional Access is an Azure AD Premium capability.
-To create an example Conditional Access policy to use with AKS, complete the following steps:
+Complete the following steps to create an example Conditional Access policy to use with AKS:
-1. At the top of the Azure portal, search for and select Azure Active Directory.
-1. In the menu for Azure Active Directory on the left-hand side, select *Enterprise applications*.
-1. In the menu for Enterprise applications on the left-hand side, select *Conditional Access*.
-1. In the menu for Conditional Access on the left-hand side, select *Policies* then *New policy*.
+1. In the Azure portal, navigate to the **Azure Active Directory** page.
+2. From the left-hand pane, select **Enterprise applications**.
+3. On the **Enterprise applications** page, from the left-hand pane select **Conditional Access**.
+4. On the **Conditional Access** page, from the left-hand pane select **Policies** and then select **New policy**.
:::image type="content" source="./media/managed-aad/conditional-access-new-policy.png" alt-text="Adding a Conditional Access policy":::
-1. Enter a name for the policy such as *aks-policy*.
-1. Select *Users and groups*, then under *Include* select *Select users and groups*. Choose the users and groups where you want to apply the policy. For this example, choose the same Azure AD group that has administration access to your cluster.
+5. Enter a name for the policy, for example **aks-policy**.
+6. Under **Assignments** select **Users and groups**. Choose your users and groups you want to apply the policy to. In this example, choose the same Azure AD group that has administrator access to your cluster.
:::image type="content" source="./media/managed-aad/conditional-access-users-groups.png" alt-text="Selecting users or groups to apply the Conditional Access policy":::
-1. Select *Cloud apps or actions*, then under *Include* select *Select apps*. Search for *Azure Kubernetes Service* and select *Azure Kubernetes Service AAD Server*.
+7. Under **Cloud apps or actions > Include**, select **Select apps**. Search for **Azure Kubernetes Service** and then select **Azure Kubernetes Service AAD Server**.
:::image type="content" source="./media/managed-aad/conditional-access-apps.png" alt-text="Selecting Azure Kubernetes Service AD Server for applying the Conditional Access policy":::
-1. Under *Access controls*, select *Grant*. Select *Grant access* then *Require device to be marked as compliant*.
+8. Under **Access controls > Grant**, select **Grant access**, **Require device to be marked as compliant**, and select **Select**.
:::image type="content" source="./media/managed-aad/conditional-access-grant-compliant.png" alt-text="Selecting to only allow compliant devices for the Conditional Access policy":::
-1. Under *Enable policy*, select *On* then *Create*.
+9. Confirm your settings and set **Enable policy** to **On**.
:::image type="content" source="./media/managed-aad/conditional-access-enable-policy.png" alt-text="Enabling the Conditional Access policy":::
+10. Select **Create** to create and enable your policy.
-Get the user credentials to access the cluster, for example:
+After creating the Conditional Access policy, perform the following steps to verify it takes effect when you access the cluster.
-```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
-```
+11. To get the user credentials to access the cluster, run the following command:
-Follow the instructions to sign in.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
-Use the `kubectl get nodes` command to view nodes in the cluster:
+12. Follow the instructions to sign in.
-```azurecli-interactive
-kubectl get nodes
-```
+13. View nodes in the cluster with the `kubectl get nodes` command:
-Follow the instructions to sign in again. Notice there is an error message stating you are successfully logged in, but your admin requires the device requesting access to be managed by your Azure AD to access the resource.
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
-In the Azure portal, navigate to Azure Active Directory, select *Enterprise applications* then under *Activity* select *Sign-ins*. Notice an entry at the top with a *Status* of *Failed* and a *Conditional Access* of *Success*. Select the entry then select *Conditional Access* in *Details*. Notice your Conditional Access policy is listed.
+14. In the Azure portal, navigate to **Azure Active Directory**. From the left-hand pane select **Enterprise applications**, and then under **Activity** select **Sign-ins**.
+15. Notice that near the top of the results there's an event with a status of **Failed** and a **Conditional Access** status of **Success**. Select the event, and then select the **Conditional Access** tab. Notice your Conditional Access policy is listed.
+ :::image type="content" source="./media/managed-aad/conditional-access-sign-in-activity.png" alt-text="Screenshot that shows failed sign-in entry due to Conditional Access policy.":::
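
As an optional command-line check (a sketch, assuming the signed-in account can read sign-in logs), you can also query the most recent sign-in events through Microsoft Graph; failed entries should reference the Conditional Access policy.

```azurecli-interactive
# Show the five most recent sign-in events from the audit logs
az rest --method get \
  --url 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=5'
```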
## Configure just-in-time cluster access with Azure AD and AKS
Another option for cluster access control is to use Privileged Identity Manageme
To integrate just-in-time access requests with an AKS cluster using AKS-managed Azure AD integration, complete the following steps:
-1. At the top of the Azure portal, search for and select Azure Active Directory.
-1. Take note of the Tenant ID, referred to for the rest of these instructions as `<tenant-id>`
+1. In the Azure portal, navigate to **Azure Active Directory**.
+1. Select **Properties** and scroll down to the **Tenant ID** field. Note this value, as it's referenced in a later step as `<tenant-id>`.
:::image type="content" source="./media/managed-aad/jit-get-tenant-id.png" alt-text="In a web browser, the Azure portal screen for Azure Active Directory is shown with the tenant's ID highlighted.":::
-1. In the menu for Azure Active Directory on the left-hand side, under *Manage* select *Groups* then *New Group*.
+2. From the left-hand pane, under **Manage**, select **Groups** and then select **New group**.
:::image type="content" source="./media/managed-aad/jit-create-new-group.png" alt-text="Shows the Azure portal Active Directory groups screen with the 'New Group' option highlighted.":::
-1. Make sure a Group Type of *Security* is selected and enter a group name, such as *myJITGroup*. Under *Azure AD Roles can be assigned to this group (Preview)*, select *Yes*. Finally, select *Create*.
+3. Verify the group type **Security** is selected and specify a group name, such as **myJITGroup**. Under the option **Azure AD roles can be assigned to this group (Preview)**, select **Yes** and then select **Create**.
:::image type="content" source="./media/managed-aad/jit-new-group-created.png" alt-text="Shows the Azure portal's new group creation screen.":::
-1. You will be brought back to the *Groups* page. Select your newly created group and take note of the Object ID, referred to for the rest of these instructions as `<object-id>`.
+4. On the **Groups** page, select the group you just created and note the Object ID. This will be referenced in a later step as `<object-id>`.
:::image type="content" source="./media/managed-aad/jit-get-object-id.png" alt-text="Shows the Azure portal screen for the just-created group, highlighting the Object Id":::
-1. Deploy an AKS cluster with AKS-managed Azure AD integration by using the `<tenant-id>` and `<object-id>` values from earlier:
+5. Create the AKS cluster with AKS-managed Azure AD integration using the `az aks create` command with the `--aad-admin-group-object-ids` and `--aad-tenant-id` parameters, and include the values noted in the earlier steps.
```azurecli-interactive az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <object-id> --aad-tenant-id <tenant-id> ```
-1. Back in the Azure portal, in the menu for *Activity* on the left-hand side, select *Privileged Access (Preview)* and select *Enable Privileged Access*.
+6. In the Azure portal, select **Activity** from the left-hand pane. Select **Privileged Access (Preview)** and then select **Enable Privileged Access**.
:::image type="content" source="./media/managed-aad/jit-enabling-priv-access.png" alt-text="The Azure portal's Privileged access (Preview) page is shown, with 'Enable privileged access' highlighted":::
-1. Select *Add Assignments* to begin granting access.
+7. To grant access, select **Add assignments**.
:::image type="content" source="./media/managed-aad/jit-add-active-assignment.png" alt-text="The Azure portal's Privileged access (Preview) screen after enabling is shown. The option to 'Add assignments' is highlighted.":::
-1. Select a role of *member*, and select the users and groups to whom you wish to grant cluster access. These assignments can be modified at any time by a group admin. When you're ready to move on, select *Next*.
+8. For **Select role**, choose **Member**, and then select the users and groups you want to grant cluster access. These assignments can be modified at any time by a group administrator. When you're ready, select **Next**.
:::image type="content" source="./media/managed-aad/jit-adding-assignment.png" alt-text="The Azure portal's Add assignments Membership screen is shown, with a sample user selected to be added as a member. The option 'Next' is highlighted.":::
-1. Choose an assignment type of *Active*, the desired duration, and provide a justification. When you're ready to proceed, select *Assign*. For more on assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
+9. Under **Assignment type**, select **Active** and then specify the desired duration. Provide a justification and then select **Assign**. For more information about assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
:::image type="content" source="./media/managed-aad/jit-set-active-assignment-details.png" alt-text="The Azure portal's Add assignments Setting screen is shown. An assignment type of 'Active' is selected and a sample justification has been given. The option 'Assign' is highlighted."::: Once the assignments have been made, verify just-in-time access is working by accessing the cluster. For example:
Use the `kubectl get nodes` command to view nodes in the cluster:
kubectl get nodes ```
-Note the authentication requirement and follow the steps to authenticate. If successful, you should see output similar to the following:
+Note the authentication requirement and follow the steps to authenticate. If successful, you should see output similar to the following example:
```output To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
aks-nodepool1-61156405-vmss000000 Ready agent 6m36s v1.18.14
aks-nodepool1-61156405-vmss000001 Ready agent 6m42s v1.18.14 aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14 ```+ ### Apply Just-in-Time access at the namespace level 1. Integrate your AKS cluster with [Azure RBAC](manage-azure-rbac.md).
aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14
```azurecli-interactive az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name> ```
-3. Associate the group you just configured at the namespace level with PIM to complete the configuration.
+
+1. Associate the group you configured at the namespace level with PIM to complete the configuration.
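
The `$AKS_ID` and `<AAD-ENTITY-ID>` values used in the role assignment above aren't defined in this excerpt. A minimal sketch of retrieving them, assuming the resource group, cluster, and group names used earlier in this article:

```azurecli-interactive
# Cluster resource ID, used as the scope of the role assignment
AKS_ID=$(az aks show --resource-group myResourceGroup --name myManagedCluster --query id -o tsv)

# Object ID of the Azure AD group (the property is 'id' on current Azure CLI versions)
AAD_ENTITY_ID=$(az ad group show --group myJITGroup --query id -o tsv)
```

These values then plug directly into the `az role assignment create` command shown above.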
### Troubleshooting
-If `kubectl get nodes` returns an error similar to the following:
+If `kubectl get nodes` returns an error similar to the following error:
```output Error from server (Forbidden): nodes is forbidden: User "aaaa11111-11aa-aa11-a1a1-111111aaaaa" cannot list resource "nodes" in API group "" at the cluster scope
Make sure the admin of the security group has given your account an *Active* ass
## Next steps
-* Learn about [Azure RBAC integration for Kubernetes Authorization][azure-rbac-integration]
+* Learn about [Azure RBAC integration for Kubernetes Authorization][azure-rbac-integration].
* Learn about [Azure AD integration with Kubernetes RBAC][azure-ad-rbac]. * Use [kubelogin](https://github.com/Azure/kubelogin) to access features for Azure authentication that aren't available in kubectl. * Learn more about [AKS and Kubernetes identity concepts][aks-concepts-identity].
-* Use [Azure Resource Manager (ARM) templates ][aks-arm-template] to create AKS-managed Azure AD enabled clusters.
+* Use [Azure Resource Manager (ARM) templates][aks-arm-template] to create AKS-managed Azure AD enabled clusters.
<!-- LINKS - external --> [kubernetes-webhook]:https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 | | 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023 | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023
-| 1.26 | Dec 2022 | Jan 2023 | Feb 2023 | Feb 2024
+| 1.26 | Dec 2022 | Feb 2023 | Mar 2023 | Mar 2024
| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024 > [!NOTE]
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
An Azure Kubernetes Service (AKS) cluster will periodically need to be updated t
- *Cluster Kubernetes version*: Part of the AKS cluster lifecycle involves performing upgrades to the latest Kubernetes version. ItΓÇÖs important you upgrade to apply the latest security releases and to get access to the latest Kubernetes features, as well as to stay within the [AKS support window][supported-k8s-versions]. - *Node image version*: AKS regularly provides new node images with the latest OS and runtime updates. It's beneficial to upgrade your nodes' images regularly to ensure support for the latest AKS features and to apply essential security patches and hot fixes.
+For Linux nodes, node image security patches and hotfixes may be performed without your initiation as *unattended updates*. These updates are automatically applied, but AKS doesn't automatically reboot your Linux nodes to complete the update process. You're required to use a tool like [kured][node-updates-kured] or [node image upgrade][node-image-upgrade] to reboot the nodes and complete the cycle.
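
For example (a sketch using existing Azure CLI commands; the resource names are placeholders), you can complete the update cycle by upgrading a node pool's node image, which reimages and restarts the nodes:

```azurecli-interactive
# Upgrade only the node image for a single node pool, without changing the Kubernetes version
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --node-image-only
```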
+ The following table summarizes the details of updating each component: |Component name|Frequency of upgrade|Planned Maintenance supported|Supported operation methods|Documentation link|
The following table summarizes the details of updating each component:
|Cluster Kubernetes version (minor) upgrade|Roughly every three months|Yes| Automatic, Manual|[Upgrade an AKS cluster][upgrade-cluster]| |Cluster Kubernetes version upgrade to supported patch version|Approximately weekly. To determine the latest applicable version in your region, see the [AKS release tracker][release-tracker]|Yes|Automatic, Manual|[Upgrade an AKS cluster][upgrade-cluster]| |Node image version upgrade|**Linux**: weekly<br>**Windows**: monthly|Yes|Automatic, Manual|[AKS node image upgrade][node-image-upgrade]|
-|Security patches and hot fixes for node images|As-necessary||||
+|Security patches and hot fixes for node images|As-necessary|||[AKS node security patches][node-security-patches]|
## Automatic upgrades
For more information what cluster operations may trigger specific upgrade events
[ts-ip-limit]: /troubleshoot/azure/azure-kubernetes/error-code-publicipcountlimitreached [ts-quota-exceeded]: /troubleshoot/azure/azure-kubernetes/error-code-quotaexceeded [ts-subnet-full]: /troubleshoot/azure/azure-kubernetes/error-code-subnetisfull-upgrade
+[node-security-patches]: ./concepts-security.md#node-security-patches
+[node-updates-kured]: ./node-updates-kured.md
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking-features.md
ms.assetid: 5c61eed1-1ad1-4191-9f71-906d610ee5b7 Previously updated : 09/01/2022 Last updated : 01/23/2023
For any given use case, there might be a few ways to solve the problem. Choosing
| Expose your app on a private IP in your virtual network | ILB ASE </br> Private endpoints </br> Private IP for inbound traffic on an Application Gateway instance with service endpoints | | Protect your app with a web application firewall (WAF) | Application Gateway and ILB ASE </br> Application Gateway with private endpoints </br> Application Gateway with service endpoints </br> Azure Front Door with access restrictions | | Load balance traffic to your apps in different regions | Azure Front Door with access restrictions |
-| Load balance traffic in the same region | [Application Gateway with service endpoints][appgwserviceendpoints] |
+| Load balance traffic in the same region | [Application Gateway with service endpoints](./networking/app-gateway-with-service-endpoints.md) |
The following outbound use cases suggest how to use App Service networking features to solve outbound access needs for your app:
When you use an app-assigned address, your traffic still goes through the same f
* Support IP-based SSL needs for your app. * Set a dedicated address for your app that's not shared.
-To learn how to set an address on your app, see [Add a TLS/SSL certificate in Azure App Service][appassignedaddress].
+To learn how to set an address on your app, see [Add a TLS/SSL certificate in Azure App Service](./configure-ssl-certificate.md).
### Access restrictions
This feature allows you to build a list of allow and deny rules that are evaluat
Private endpoint is a network interface that connects you privately and securely to your Web App by Azure private link. Private endpoint uses a private IP address from your virtual network, effectively bringing the web app into your virtual network. This feature is only for *inbound* flows to your web app. For more information, see
-[Using private endpoints for Azure Web App][privateendpoints].
+[Using private endpoints for Azure Web App](./networking/private-endpoint.md).
Some use cases for this feature:
App Service Hybrid Connections enables your apps to make *outbound* calls to spe
App Service Hybrid Connections is built on the Azure Relay Hybrid Connections capability. App Service uses a specialized form of the feature that only supports making outbound calls from your app to a TCP host and port. This host and port only need to resolve on the host where Hybrid Connection Manager is installed.
-When the app, in App Service, does a DNS lookup on the host and port defined in your hybrid connection, the traffic automatically redirects to go through the hybrid connection and out of Hybrid Connection Manager. To learn more, see [App Service Hybrid Connections][hybridconn].
+When the app, in App Service, does a DNS lookup on the host and port defined in your hybrid connection, the traffic automatically redirects to go through the hybrid connection and out of Hybrid Connection Manager. To learn more, see [App Service Hybrid Connections](./app-service-hybrid-connections.md).
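As a rough sketch (the names below are placeholders, and the Relay namespace and hybrid connection are assumed to already exist), attaching a hybrid connection to an app with the Azure CLI looks roughly like this:

```azurecli
# Hypothetical names: attach an existing Azure Relay hybrid connection to a web app.
# The hybrid connection itself defines the on-premises host and port the app will call.
az webapp hybrid-connection add \
  --resource-group myResourceGroup \
  --name myWebApp \
  --namespace myRelayNamespace \
  --hybrid-connection myOnPremEndpoint
```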
This feature is commonly used to:
App Service Hybrid Connections is unaware of what you're doing on top of it. So
Hybrid Connections is popular for development, but it's also used in production applications. It's great for accessing a web service or database, but it's not appropriate for situations that involve creating many connections.
-### Gateway-required virtual network integration
+### <a id="regional-vnet-integration"></a>Virtual network integration
-Gateway-required App Service virtual network integration enables your app to make *outbound* requests into an Azure virtual network. The feature works by connecting the host your app is running on to a Virtual Network gateway on your virtual network by using a point-to-site VPN. When you configure the feature, your app gets one of the point-to-site addresses assigned to each instance. This feature enables you to access resources in either classic or Azure Resource Manager virtual networks in any region.
+App Service virtual network integration enables your app to make *outbound* requests into an Azure virtual network.
-![Diagram that illustrates gateway-required virtual network integration.](media/networking-features/gw-vnet-integration.png)
-
-This feature solves the problem of accessing resources in other virtual networks. It can even be used to connect through a virtual network to either other virtual networks or on-premises. It doesn't work with ExpressRoute-connected virtual networks, but it does work with site-to-site VPN-connected networks. It's inappropriate to use this feature from an app in an App Service Environment (ASE) because the ASE is already in your virtual network. Use cases for this feature:
-
-* Access resources in cross region virtual networks that aren't peered to a virtual network in the region.
-
-When this feature is enabled, your app will use the DNS server that the destination virtual network is configured with. For more information on this feature, see [App Service virtual network integration][vnetintegrationp2s].
-
-### <a id="regional-vnet-integration"></a>Regional virtual network integration
-
-Gateway-required virtual network integration is useful, but it doesn't solve the problem of accessing resources across ExpressRoute. On top of needing to reach across ExpressRoute connections, there's a need for apps to be able to make calls to services secured by service endpoint. Another virtual network integration capability can meet these needs.
-
-The regional virtual network integration feature enables you to place the back end of your app in a subnet in a Resource Manager virtual network in the same region as your app. This feature isn't available from an App Service Environment, which is already in a virtual network. Use cases for this feature:
+The virtual network integration feature enables you to place the back end of your app in a subnet in a Resource Manager virtual network. The virtual network must be in the same region as your app. This feature isn't available from an App Service Environment, which is already in a virtual network. Use cases for this feature:
* Access resources in Resource Manager virtual networks in the same region. * Access resources in peered virtual networks, including cross region connections.
The regional virtual network integration feature enables you to place the back e
![Diagram that illustrates virtual network integration.](media/networking-features/vnet-integration.png)
-To learn more, see [App Service virtual network integration][vnetintegration].
+To learn more, see [App Service virtual network integration](./overview-vnet-integration.md).
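For example, a minimal Azure CLI sketch (placeholder names; the virtual network must be in the same region as the app, and the subnet should be empty so it can be delegated to App Service) might be:

```azurecli
# Hypothetical names: route the app's outbound traffic into a subnet of a virtual network
# in the same region as the app.
az webapp vnet-integration add \
  --resource-group myResourceGroup \
  --name myWebApp \
  --vnet myVNet \
  --subnet myIntegrationSubnet
```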
+
+#### Gateway-required virtual network integration
+
+Gateway-required virtual network integration was the first edition of virtual network integration in App Service. The feature works by connecting the host your app is running on to a Virtual Network gateway on your virtual network by using a point-to-site VPN. When you configure the feature, your app gets one of the point-to-site addresses assigned to each instance.
+
+![Diagram that illustrates gateway-required virtual network integration.](media/networking-features/gw-vnet-integration.png)
+
+Gateway-required integration allows you to connect directly to a virtual network in another region without peering, and to connect to a classic virtual network. The feature is limited to App Service Windows plans and doesn't work with ExpressRoute-connected virtual networks. We recommend using regional virtual network integration instead. For more information on this feature, see [App Service virtual network integration](./configure-gateway-required-vnet-integration.md).
### App Service Environment
Some things aren't currently possible from the multi-tenant service but are poss
The ASE provides the best story around isolated and dedicated app hosting, but it does involve some management challenges. Some things to consider before you use an operational ASE:
- * An ASE runs inside your virtual network, but it does have dependencies outside the virtual network. Those dependencies must be allowed. For more information, see [Networking considerations for an App Service Environment][networkinfo].
+ * An ASE runs inside your virtual network, but it does have dependencies outside the virtual network. Those dependencies must be allowed. For more information, see [Networking considerations for an App Service Environment](./environment/network-info.md).
* An ASE doesn't scale immediately like the multi-tenant service. You need to anticipate scaling needs rather than reactively scaling. * An ASE does have a higher up-front cost. To get the most out of your ASE, you should plan to put many workloads into one ASE rather than using it for small efforts. * The apps in an ASE can't selectively restrict access to some apps in the ASE and not others.
The ASE provides the best story around isolated and dedicated app hosting, but i
## Combining features
-The features noted for the multi-tenant service can be used together to solve more elaborate use cases. Two of the more common use cases are described here, but they're just examples. By understanding what the various features do, you can meet nearly all your system architecture needs.
+The features noted for the multi-tenant service can be used together to solve more elaborate use cases. Two of the more common use cases are described here, but they're just examples. By understanding what the various features do, you can meet nearly all your system architecture needs.
### Place an app into a virtual network
Line-of-business (LOB) applications are internal applications that aren't normal
If neither of these needs apply, you're better off using private endpoints. With private endpoints available in App Service, you can expose your apps on private addresses in your virtual network. The private endpoint you place in your virtual network can be reached across ExpressRoute and VPN connections.
-Configuring private endpoints will expose your apps on a private address, but you'll need to configure DNS to reach that address from on-premises. To make this configuration work, you'll need to forward the Azure DNS private zone that contains your private endpoints to your on-premises DNS servers. Azure DNS private zones don't support zone forwarding, but you can support zone forwarding by using a DNS server for that purpose. The [DNS Forwarder](https://azure.microsoft.com/resources/templates/dns-forwarder/) template makes it easier to forward your Azure DNS private zone to your on-premises DNS servers.
+Configuring private endpoints will expose your apps on a private address, but you'll need to configure DNS to reach that address from on-premises. To make this configuration work, you'll need to forward the Azure DNS private zone that contains your private endpoints to your on-premises DNS servers. Azure DNS private zones don't support zone forwarding, but you can enable this scenario by using [Azure DNS private resolver](../dns/dns-private-resolver-overview.md).
## App Service ports
If you scan App Service, you'll find several ports that are exposed for inbound
| FTP/FTPS | 21, 990, 10001-10300 | | Visual Studio remote debugging | 4020, 4022, 4024 | | Web Deploy service | 8172 |
-| Infrastructure use | 7654, 1221 |
-
-<!--Links-->
-[appassignedaddress]: ./configure-ssl-certificate.md
-[serviceendpoints]: ./app-service-ip-restrictions.md
-[hybridconn]: ./app-service-hybrid-connections.md
-[vnetintegrationp2s]: ./overview-vnet-integration.md
-[vnetintegration]: ./overview-vnet-integration.md
-[networkinfo]: ./environment/network-info.md
-[appgwserviceendpoints]: ./networking/app-gateway-with-service-endpoints.md
-[privateendpoints]: ./networking/private-endpoint.md
-[servicetags]: ../virtual-network/service-tags-overview.md
+| Infrastructure use | 7654, 1221 |
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
When using Azure App Service with Easy Auth behind Azure Front Door or other rev
**Export settings**
- `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME?api-version=2020-09-01 --method get > auth.json`
+ `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME/config/authsettingsV2?api-version=2020-09-01 --method get > auth.json`
**Update settings**
When using Azure App Service with Easy Auth behind Azure Front Door or other rev
**Import settings**
- `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME?api-version=2020-09-01 --method put --body @auth.json`
+ `az rest --uri /subscriptions/REPLACE-ME-SUBSCRIPTIONID/resourceGroups/REPLACE-ME-RESOURCEGROUP/providers/Microsoft.Web/sites/REPLACE-ME-APPNAME/config/authsettingsV2?api-version=2020-09-01 --method put --body @auth.json`
## More resources
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Update Management requires [Log Analytics agent](../../azure-monitor/agents/lo
You must update the Log Analytics agent to the latest version by following these steps:
-Check the current version of Log Analytics agent for your machine:  Go to the installation path - *C:\ProgramFiles\Microsoft Monitoring Agent\Agent* and right-click on *HealthService.exe* to check **Properties**. In the **Details** tab, the field **Product version** provides version number of the Log Analytics agent.
+1. Check the current version of the Log Analytics agent on your machine: go to the installation path *C:\Program Files\Microsoft Monitoring Agent\Agent* and right-click *HealthService.exe* to open **Properties**. On the **Details** tab, the **Product version** field provides the version number of the Log Analytics agent.
-If your Log Analytics agent version is prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version), upgrade to the latest version of the Windows Log Analytics agent, following these [guidelines](../../azure-monitor/agents/agent-manage.md).ΓÇ»
+1. If your Log Analytics agent version is prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version), upgrade to the latest version of the Windows Log Analytics agent by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
>[!NOTE] > During the upgrade process, update management schedules might fail. Make sure to perform the upgrade when there's no planned schedule.
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 11/18/2022 Last updated : 01/23/2023
The Azure Connected Machine agent is designed to manage agent and system resourc
* The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems. * The Azure Monitor Agent can use up to 30% of the CPU during normal operations. * The Linux OS Update Extension (used by Azure Update Management Center) can use up to 30% of the CPU to patch the server.
+ * The Microsoft Defender for Endpoint extension can use up to 30% of the CPU during installation, upgrades, and removal operations.
## Instance metadata
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 11/18/2022 Last updated : 01/23/2023
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.21 - August 2022
+
+### New features
+
+- `azcmagent connect` usability improvements:
+ - The `--subscription-id (-s)` parameter now accepts friendly names in addition to subscription IDs
+ - Automatic registration of any missing resource providers for first-time users (additional user permissions required to register resource providers)
+ - A progress bar now appears while the resource is being created and connected
+ - The onboarding script now supports both the yum and dnf package managers on RPM-based Linux systems
+- You can now restrict which URLs can be used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow.
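As a rough sketch (the subscription ID, resource group, machine name, and URL pattern are placeholders), the `allowedGuestConfigPkgUrls` tag can be set with the Azure CLI, for example:

```azurecli
# Hypothetical example: allow machine configuration packages to be downloaded only from
# URLs matching the listed pattern on a specific Arc-enabled server.
az resource tag \
  --ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/myArcServer" \
  --tags allowedGuestConfigPkgUrls="https://mystorageaccount.blob.core.windows.net/*" \
  --is-incremental
```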
+
+### Fixed
+
+- Extension installation failures are now reported to Azure more reliably to prevent extensions from being stuck in the "creating" state
+- Metadata for Google Cloud Platform virtual machines can now be retrieved when the agent is configured to use a proxy server
+- Improved network connection retry logic and error handling
+- Linux only: resolves local escalation of privilege vulnerability [CVE-2022-38007](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007)
+
+## Version 1.20 - July 2022
+
+### Known issues
+
+- Some systems may incorrectly report their cloud provider as Azure Stack HCI.
+
+### New features
+
+- Added support for connecting the agent to the Azure China cloud
+- Added support for Debian 10
+- Updates to the [instance metadata](agent-overview.md#instance-metadata) collected on each machine:
+ - GCP VM OS is no longer collected
+ - CPU logical core count is now collected
+- Improved error messages and colorization
+
+### Fixed
+
+- Agents configured to use private endpoints will now download extensions over the private endpoint
+- The `--use-private-link` flag on [azcmagent check](manage-agent.md#check) has been renamed to `--enable-pls-check` to more accurately represent its function
+ ## Version 1.19 - June 2022 ### Known issues
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 11/15/2022 Last updated : 01/23/2023
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.26 - January 2023
+
+> [!NOTE]
+> Version 1.26 is only available for Linux operating systems. The most recent Windows agent version is 1.25.
+
+### Fixed
+
+- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Defender for Endpoint extension (MDE.Linux) on Linux to improve installation reliability
+
+## Version 1.25 - January 2023
+
+### New features
+
+- Red Hat Enterprise Linux (RHEL) 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)
+
+### Fixed
+
+- Reliability improvements in the machine (guest) configuration policy engine
+- Improved error messages in the Windows MSI installer
+- Additional improvements to the detection logic for machines running on Azure Stack HCI
+ ## Version 1.24 - November 2022 ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Improved accuracy of network connectivity checks - When switching the agent from monitoring mode to full mode, existing restrictions are now retained. Use [azcmagent clear](manage-agent.md#config) to reset individual configuration settings to the default state.
-## Version 1.21 - August 2022
-
-### New features
--- `azcmagent connect` usability improvements:
- - The `--subscription-id (-s)` parameter now accepts friendly names in addition to subscription IDs
- - Automatic registration of any missing resource providers for first-time users (additional user permissions required to register resource providers)
- - A progress bar now appears while the resource is being created and connected
- - The onboarding script now supports both the yum and dnf package managers on RPM-based Linux systems
-- You can now restrict which URLs can be used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow.-
-### Fixed
--- Extension installation failures are now reported to Azure more reliably to prevent extensions from being stuck in the "creating" state-- Metadata for Google Cloud Platform virtual machines can now be retrieved when the agent is configured to use a proxy server-- Improved network connection retry logic and error handling-- Linux only: resolves local escalation of privilege vulnerability [CVE-2022-38007](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007)-
-## Version 1.20 - July 2022
-
-### Known issues
--- Some systems may incorrectly report their cloud provider as Azure Stack HCI.-
-### New features
--- Added support for connecting the agent to the Azure China cloud-- Added support for Debian 10-- Updates to the [instance metadata](agent-overview.md#instance-metadata) collected on each machine:
- - GCP VM OS is no longer collected
- - CPU logical core count is now collected
-- Improved error messages and colorization-
-### Fixed
--- Agents configured to use private endpoints will now download extensions over the private endpoint-- The `--use-private-link` flag on [azcmagent check](manage-agent.md#check) has been renamed to `--enable-pls-check` to more accurately represent its function- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
The following versions of the Windows and Linux operating system are officially
* CentOS Linux 7 and 8 * Rocky Linux 8 * SUSE Linux Enterprise Server (SLES) 12 and 15
-* Red Hat Enterprise Linux (RHEL) 7 and 8
+* Red Hat Enterprise Linux (RHEL) 7, 8, and 9
* Amazon Linux 2 * Oracle Linux 7
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the b
Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. +
+### Can I have AOF persistence enabled if I have more than one replica?
+
+No, AOF persistence can't be enabled if the cache has more than one replica (that is, a replica count of two or more).
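For example, a quick way to check the replica count on an existing cache with the Azure CLI (the cache and resource group names are placeholders) is:

```azurecli
# Hypothetical names: inspect how many replicas the cache is configured with
# before attempting to enable AOF persistence.
az redis show --name myPremiumCache --resource-group myResourceGroup --query "replicasPerMaster"
```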
+ ## Next steps Learn more about Azure Cache for Redis features.
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
To create a cache, follow these steps:
1. Configure your settings for clustering and/or RDB persistence. > [!NOTE]
- > Zone redundancy doesn't support AOF persistence or work with geo-replication currently.
+ > Zone redundancy currently doesn't support AOF persistence with two or more replicas, and it doesn't work with geo-replication.
> 1. Select **Create**.
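As a hedged example (names and region are placeholders), a zone-redundant Premium cache can also be created from the Azure CLI by specifying availability zones:

```azurecli
# Hypothetical names: create a Premium cache whose nodes are spread across availability zones 1 and 2.
az redis create \
  --name myZonalCache \
  --resource-group myResourceGroup \
  --location eastus2 \
  --sku Premium \
  --vm-size p1 \
  --zones 1 2
```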
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
When you create or update an action group in the Azure portal, you can **test**
1. Define an action, as described in the previous few sections. Then select **Review + create**.
-1. On the page that lists the information that you entered, select **Test action group**.
+> [!NOTE]
+>
+> If you're editing an existing action group, you must save your changes to the action group before testing it.
+
+2. On the page that lists the information that you entered, select **Test action group**.
:::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot of test action group start page. A Test action group button is visible.":::
-2. Select a sample type and the notification and action types that you want to test. Then select **Test**.
+3. Select a sample type and the notification and action types that you want to test. Then select **Test**.
:::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot of the Test sample action group page. An email notification type and a webhook action type are visible.":::
-3. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
+4. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results.
:::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot of the Test sample action group page. A dialog box contains a Stop button and asks the user about stopping the test.":::
-4. When the test is complete, a test status of either **Success** or **Failed** appears. If the test failed and you'd like to get more information, select **View details**.
+5. When the test is complete, a test status of either **Success** or **Failed** appears. If the test failed and you'd like to get more information, select **View details**.
:::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot of the Test sample action group page. Error details are visible, and a white X on a red background indicates that a test failed.":::
-You can use the information in the **Error details** section to understand the issue. Then you can edit and test the action group again.
+You can use the information in the **Error details** section to understand the issue. Then you can edit, save changes, and test the action group again.
When you run a test and select a notification type, you get a message with "Test" in the subject. The tests provide a way to check that your action group works as expected before you enable it in a production environment. All the details and links in test email notifications are from a sample reference set.
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
- Title: Alert schema definitions in Azure Monitor
-description: This article explains the common alert schema definitions for Azure Monitor.
-- Previously updated : 07/20/2021---
-# Common alert schema definitions
-
-This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor. It includes the definitions for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
-
-Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-
-* **Essentials**: A set of standardized fields that are common across all alert types. Fields describe what resource the alert is on, along with other common alert metadata. Examples are severity or description.
-* **Alert context**: A set of fields that describe the cause of the alert. Fields vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context. An activity log alert has information about the event that generated the alert.
-
-**Sample alert payload**
-
-```json
-{
- "schemaId": "azureMonitorCommonAlertSchema",
- "data": {
- "essentials": {
- "alertId": "/subscriptions/<subscription ID>/providers/Microsoft.AlertsManagement/alerts/b9569717-bc32-442f-add5-83a997729330",
- "alertRule": "WCUS-R2-Gen2",
- "severity": "Sev3",
- "signalType": "Metric",
- "monitorCondition": "Resolved",
- "monitoringService": "Platform",
- "alertTargetIDs": [
- "/subscriptions/<subscription ID>/resourcegroups/pipelinealertrg/providers/microsoft.compute/virtualmachines/wcus-r2-gen2"
- ],
- "configurationItems": [
- "wcus-r2-gen2"
- ],
- "originAlertId": "3f2d4487-b0fc-4125-8bd5-7ad17384221e_PipeLineAlertRG_microsoft.insights_metricAlerts_WCUS-R2-Gen2_-117781227",
- "firedDateTime": "2019-03-22T13:58:24.3713213Z",
- "resolvedDateTime": "2019-03-22T14:03:16.2246313Z",
- "description": "",
- "essentialsVersion": "1.0",
- "alertContextVersion": "1.0"
- },
- "alertContext": {
- "properties": null,
- "conditionType": "SingleResourceMultipleMetricCriteria",
- "condition": {
- "windowSize": "PT5M",
- "allOf": [
- {
- "metricName": "Percentage CPU",
- "metricNamespace": "Microsoft.Compute/virtualMachines",
- "operator": "GreaterThan",
- "threshold": "25",
- "timeAggregation": "Average",
- "dimensions": [
- {
- "name": "ResourceId",
- "value": "3efad9dc-3d50-4eac-9c87-8b3fd6f97e4e"
- }
- ],
- "metricValue": 7.727
- }
- ]
- }
- }
- }
-}
-```
-
-## Essentials
-
-| Field | Description|
-|:|:|
-| alertId | The unique resource ID that identifies the alert instance. |
-| alertRule | The name of the alert rule that generated the alert instance. |
-| Severity | The severity of the alert. Possible values are Sev0, Sev1, Sev2, Sev3, or Sev4. |
-| signalType | Identifies the signal on which the alert rule was defined. Possible values are Metric, Log, or Activity Log. |
-| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. |
-| monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. |
-| alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
-| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the telemetry and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li><li>In earlier versions of the log alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `Computer`, `_ResourceId`, `ResourceId`, `Resource`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. |
-| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. |
-| firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). |
-| resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
-| description | The description, as defined in the alert rule. |
-|essentialsVersion| The version number for the essentials section.|
-|alertContextVersion | The version number for the `alertContext` section. |
-
-**Sample values**
-
-```json
-{
- "essentials": {
- "alertId": "/subscriptions/<subscription ID>/providers/Microsoft.AlertsManagement/alerts/b9569717-bc32-442f-add5-83a997729330",
- "alertRule": "Contoso IT Metric Alert",
- "severity": "Sev3",
- "signalType": "Metric",
- "monitorCondition": "Fired",
- "monitoringService": "Platform",
- "alertTargetIDs": [
- "/subscriptions/<subscription ID>/resourceGroups/aimon-rg/providers/Microsoft.Insights/components/ai-orion-int-fe"
- ],
- "originAlertId": "74ff8faa0c79db6084969cf7c72b0710e51aec70b4f332c719ab5307227a984f",
- "firedDateTime": "2019-03-26T05:25:50.4994863Z",
- "description": "Test Metric alert",
- "essentialsVersion": "1.0",
- "alertContextVersion": "1.0"
- }
-}
-```
-
-## Alert context
-
-The following fields describe the cause of an alert.
-
-### Metric alerts - Static threshold
-
-#### `monitoringService` = `Platform`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "properties": null,
- "conditionType": "SingleResourceMultipleMetricCriteria",
- "condition": {
- "windowSize": "PT5M",
- "allOf": [
- {
- "metricName": "Percentage CPU",
- "metricNamespace": "Microsoft.Compute/virtualMachines",
- "operator": "GreaterThan",
- "threshold": "25",
- "timeAggregation": "Average",
- "dimensions": [
- {
- "name": "ResourceId",
- "value": "3efad9dc-3d50-4eac-9c87-8b3fd6f97e4e"
- }
- ],
- "metricValue": 31.1105
- }
- ],
- "windowStartTime": "2019-03-22T13:40:03.064Z",
- "windowEndTime": "2019-03-22T13:45:03.064Z"
- }
- }
-}
-```
-
-### Metric alerts - Dynamic threshold
-
-#### `monitoringService` = `Platform`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "properties": null,
- "conditionType": "DynamicThresholdCriteria",
- "condition": {
- "windowSize": "PT5M",
- "allOf": [
- {
- "alertSensitivity": "High",
- "failingPeriods": {
- "numberOfEvaluationPeriods": 1,
- "minFailingPeriodsToAlert": 1
- },
- "ignoreDataBefore": null,
- "metricName": "Egress",
- "metricNamespace": "microsoft.storage/storageaccounts",
- "operator": "GreaterThan",
- "threshold": "47658",
- "timeAggregation": "Total",
- "dimensions": [],
- "metricValue": 50101
- }
- ],
- "windowStartTime": "2021-07-20T05:07:26.363Z",
- "windowEndTime": "2021-07-20T05:12:26.363Z"
- }
- }
-}
-```
-
-### Metric alerts - Availability tests
-
-#### `monitoringService` = `Platform`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "properties": null,
- "conditionType": "WebtestLocationAvailabilityCriteria",
- "condition": {
- "windowSize": "PT5M",
- "allOf": [
- {
- "metricName": "Failed Location",
- "metricNamespace": null,
- "operator": "GreaterThan",
- "threshold": "2",
- "timeAggregation": "Sum",
- "dimensions": [],
- "metricValue": 5,
- "webTestName": "myAvailabilityTest-myApplication"
- }
- ],
- "windowStartTime": "2019-03-22T13:40:03.064Z",
- "windowEndTime": "2019-03-22T13:45:03.064Z"
- }
- }
-}
-```
-
-### Log alerts
-
-> [!NOTE]
-> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook can't use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
-
-#### `monitoringService` = `Log Analytics`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "SearchQuery": "Perf | where ObjectName == \"Processor\" and CounterName == \"% Processor Time\" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer",
- "SearchIntervalStartTimeUtc": "3/22/2019 1:36:31 PM",
- "SearchIntervalEndtimeUtc": "3/22/2019 1:51:31 PM",
- "ResultCount": 2,
- "LinkToSearchResults": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToFilteredSearchResultsUI": "https://portal.azure.com/#Analyticsblade/search/index?_timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
- "LinkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/workspaceID/query?query=Heartbeat&timespan=2020-05-07T18%3a11%3a51.0000000Z%2f2020-05-07T18%3a16%3a51.0000000Z",
- "SeverityDescription": "Warning",
- "WorkspaceId": "12345a-1234b-123c-123d-12345678e",
- "SearchIntervalDurationMin": "15",
- "AffectedConfigurationItems": [
- "INC-Gen2Alert"
- ],
- "SearchIntervalInMinutes": "15",
- "Threshold": 10000,
- "Operator": "Less Than",
- "Dimensions": [
- {
- "name": "Computer",
- "value": "INC-Gen2Alert"
- }
- ],
- "SearchResults": {
- "tables": [
- {
- "name": "PrimaryResult",
- "columns": [
- {
- "name": "$table",
- "type": "string"
- },
- {
- "name": "Computer",
- "type": "string"
- },
- {
- "name": "TimeGenerated",
- "type": "datetime"
- }
- ],
- "rows": [
- [
- "Fabrikam",
- "33446677a",
- "2018-02-02T15:03:12.18Z"
- ],
- [
- "Contoso",
- "33445566b",
- "2018-02-02T15:16:53.932Z"
- ]
- ]
- }
- ],
- "dataSources": [
- {
- "resourceId": "/subscriptions/a5ea55e2-7482-49ba-90b3-60e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
- "tables": [
- "Heartbeat"
- ]
- }
- ]
- },
- "IncludedSearchResults": "True",
- "AlertType": "Metric measurement"
- }
-}
-```
-
-#### `monitoringService` = `Application Insights`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "SearchQuery": "requests | where resultCode == \"500\" | summarize AggregatedValue = Count by bin(Timestamp, 5m), IP",
- "SearchIntervalStartTimeUtc": "3/22/2019 1:36:33 PM",
- "SearchIntervalEndtimeUtc": "3/22/2019 1:51:33 PM",
- "ResultCount": 2,
- "LinkToSearchResults": "https://portal.azure.com/AnalyticsBlade/subscriptions/12345a-1234b-123c-123d-12345678e/?query=search+*+&timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToFilteredSearchResultsUI": "https://portal.azure.com/AnalyticsBlade/subscriptions/12345a-1234b-123c-123d-12345678e/?query=search+*+&timeInterval.intervalEnd=2018-03-26T09%3a10%3a40.0000000Z&_timeInterval.intervalDuration=3600&q=Usage",
- "LinkToSearchResultsAPI": "https://api.applicationinsights.io/v1/apps/0MyAppId0/metrics/requests/count",
- "LinkToFilteredSearchResultsAPI": "https://api.applicationinsights.io/v1/apps/0MyAppId0/metrics/requests/count",
- "SearchIntervalDurationMin": "15",
- "SearchIntervalInMinutes": "15",
- "Threshold": 10000.0,
- "Operator": "Less Than",
- "ApplicationId": "8e20151d-75b2-4d66-b965-153fb69d65a6",
- "Dimensions": [
- {
- "name": "IP",
- "value": "1.1.1.1"
- }
- ],
- "SearchResults": {
- "tables": [
- {
- "name": "PrimaryResult",
- "columns": [
- {
- "name": "$table",
- "type": "string"
- },
- {
- "name": "Id",
- "type": "string"
- },
- {
- "name": "Timestamp",
- "type": "datetime"
- }
- ],
- "rows": [
- [
- "Fabrikam",
- "33446677a",
- "2018-02-02T15:03:12.18Z"
- ],
- [
- "Contoso",
- "33445566b",
- "2018-02-02T15:16:53.932Z"
- ]
- ]
- }
- ],
- "dataSources": [
- {
- "resourceId": "/subscriptions/a5ea27e2-7482-49ba-90b3-52e7496dd873/resourcegroups/test/providers/microsoft.operationalinsights/workspaces/test",
- "tables": [
- "Heartbeat"
- ]
- }
- ]
- },
- "IncludedSearchResults": "True",
- "AlertType": "Metric measurement"
- }
-}
-```
-
-#### `monitoringService` = `Log Alerts V2`
-
-> [!NOTE]
-> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "properties": {
- "name1": "value1",
- "name2": "value2"
- },
- "conditionType": "LogQueryCriteria",
- "condition": {
- "windowSize": "PT10M",
- "allOf": [
- {
- "searchQuery": "Heartbeat",
- "metricMeasureColumn": "CounterValue",
- "targetResourceTypes": "['Microsoft.Compute/virtualMachines']",
- "operator": "LowerThan",
- "threshold": "1",
- "timeAggregation": "Count",
- "dimensions": [
- {
- "name": "Computer",
- "value": "TestComputer"
- }
- ],
- "metricValue": 0.0,
- "failingPeriods": {
- "numberOfEvaluationPeriods": 1,
- "minFailingPeriodsToAlert": 1
- },
- "linkToSearchResultsUI": "https://portal.azure.com#@12345a-1234b-123c-123d-12345678e/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%212345a-1234b-123c-123d-12345678e%2FresourceGroups%2FContoso%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2FContoso%22%7D%5D%7D/q/eJzzSE0sKklKTSypUSjPSC1KVQjJzE11T81LLUosSU1RSEotKU9NzdNIAfJKgDIaRgZGBroG5roGliGGxlYmJlbGJnoGEKCpp4dDmSmKMk0A/prettify/1/timespan/2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
- "linkToFilteredSearchResultsUI": "https://portal.azure.com#@12345a-1234b-123c-123d-12345678e/blade/Microsoft_Azure_Monitoring_Logs/LogsBlade/source/Alerts.EmailLinks/scope/%7B%22resources%22%3A%5B%7B%22resourceId%22%3A%22%2Fsubscriptions%212345a-1234b-123c-123d-12345678e%2FresourceGroups%2FContoso%2Fproviders%2FMicrosoft.Compute%2FvirtualMachines%2FContoso%22%7D%5D%7D/q/eJzzSE0sKklKTSypUSjPSC1KVQjJzE11T81LLUosSU1RSEotKU9NzdNIAfJKgDIaRgZGBroG5roGliGGxlYmJlbGJnoGEKCpp4dDmSmKMk0A/prettify/1/timespan/2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
- "linkToSearchResultsAPI": "https://api.loganalytics.io/v1/subscriptions/12345a-1234b-123c-123d-12345678e/resourceGroups/Contoso/providers/Microsoft.Compute/virtualMachines/Contoso/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282020-07-09T13%3A44%3A34.0000000%29..datetime%282020-07-09T13%3A54%3A34.0000000%29%29&timespan=2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z",
- "linkToFilteredSearchResultsAPI": "https://api.loganalytics.io/v1/subscriptions/12345a-1234b-123c-123d-12345678e/resourceGroups/Contoso/providers/Microsoft.Compute/virtualMachines/Contoso/query?query=Heartbeat%7C%20where%20TimeGenerated%20between%28datetime%282020-07-09T13%3A44%3A34.0000000%29..datetime%282020-07-09T13%3A54%3A34.0000000%29%29&timespan=2020-07-07T13%3a54%3a34.0000000Z%2f2020-07-09T13%3a54%3a34.0000000Z"
- }
- ],
- "windowStartTime": "2020-07-07T13:54:34Z",
- "windowEndTime": "2020-07-09T13:54:34Z"
- }
- }
-}
-```
-
-### Activity log alerts
-
-#### `monitoringService` = `Activity Log - Administrative`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "authorization": {
- "action": "Microsoft.Compute/virtualMachines/restart/action",
- "scope": "/subscriptions/<subscription ID>/resourceGroups/PipeLineAlertRG/providers/Microsoft.Compute/virtualMachines/WCUS-R2-ActLog"
- },
- "channels": "Operation",
- "claims": "{\"aud\":\"https://management.core.windows.net/\",\"iss\":\"https://sts.windows.net/12345a-1234b-123c-123d-12345678e/\",\"iat\":\"1553260826\",\"nbf\":\"1553260826\",\"exp\":\"1553264726\",\"aio\":\"42JgYNjdt+rr+3j/dx68v018XhuFAwA=\",\"appid\":\"e9a02282-074f-45cf-93b0-50568e0e7e50\",\"appidacr\":\"2\",\"http://schemas.microsoft.com/identity/claims/identityprovider\":\"https://sts.windows.net/12345a-1234b-123c-123d-12345678e/\",\"http://schemas.microsoft.com/identity/claims/objectidentifier\":\"9778283b-b94c-4ac6-8a41-d5b493d03aa3\",\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier\":\"9778283b-b94c-4ac6-8a41-d5b493d03aa3\",\"http://schemas.microsoft.com/identity/claims/tenantid\":\"12345a-1234b-123c-123d-12345678e\",\"uti\":\"v5wYC9t9ekuA2rkZSVZbAA\",\"ver\":\"1.0\"}",
- "caller": "9778283b-b94c-4ac6-8a41-d5b493d03aa3",
- "correlationId": "8ee9c32a-92a1-4a8f-989c-b0ba09292a91",
- "eventSource": "Administrative",
- "eventTimestamp": "2019-03-22T13:56:31.2917159+00:00",
- "eventDataId": "161fda7e-1cb4-4bc5-9c90-857c55a8f57b",
- "level": "Informational",
- "operationName": "Microsoft.Compute/virtualMachines/restart/action",
- "operationId": "310db69b-690f-436b-b740-6103ab6b0cba",
- "status": "Succeeded",
- "subStatus": "",
- "submissionTimestamp": "2019-03-22T13:56:54.067593+00:00"
- }
-}
-```
-
-#### `monitoringService` = `Activity Log - Policy`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "authorization": {
- "action": "Microsoft.Resources/checkPolicyCompliance/read",
- "scope": "/subscriptions/<GUID>"
- },
- "channels": "Operation",
- "claims": "{\"aud\":\"https://management.azure.com/\",\"iss\":\"https://sts.windows.net/<GUID>/\",\"iat\":\"1566711059\",\"nbf\":\"1566711059\",\"exp\":\"1566740159\",\"aio\":\"42FgYOhynHNw0scy3T/bL71+xLyqEwA=\",\"appid\":\"<GUID>\",\"appidacr\":\"2\",\"http://schemas.microsoft.com/identity/claims/identityprovider\":\"https://sts.windows.net/<GUID>/\",\"http://schemas.microsoft.com/identity/claims/objectidentifier\":\"<GUID>\",\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier\":\"<GUID>\",\"http://schemas.microsoft.com/identity/claims/tenantid\":\"<GUID>\",\"uti\":\"Miy1GzoAG0Scu_l3m1aIAA\",\"ver\":\"1.0\"}",
- "caller": "<GUID>",
- "correlationId": "<GUID>",
- "eventSource": "Policy",
- "eventTimestamp": "2019-08-25T11:11:34.2269098+00:00",
- "eventDataId": "<GUID>",
- "level": "Warning",
- "operationName": "Microsoft.Authorization/policies/audit/action",
- "operationId": "<GUID>",
- "properties": {
- "isComplianceCheck": "True",
- "resourceLocation": "eastus2",
- "ancestors": "<GUID>",
- "policies": "[{\"policyDefinitionId\":\"/providers/Microsoft.Authorization/policyDefinitions/<GUID>/\",\"policySetDefinitionId\":\"/providers/Microsoft.Authorization/policySetDefinitions/<GUID>/\",\"policyDefinitionReferenceId\":\"vulnerabilityAssessmentMonitoring\",\"policySetDefinitionName\":\"<GUID>\",\"policyDefinitionName\":\"<GUID>\",\"policyDefinitionEffect\":\"AuditIfNotExists\",\"policyAssignmentId\":\"/subscriptions/<GUID>/providers/Microsoft.Authorization/policyAssignments/SecurityCenterBuiltIn/\",\"policyAssignmentName\":\"SecurityCenterBuiltIn\",\"policyAssignmentScope\":\"/subscriptions/<GUID>\",\"policyAssignmentSku\":{\"name\":\"A1\",\"tier\":\"Standard\"},\"policyAssignmentParameters\":{}}]"
- },
- "status": "Succeeded",
- "subStatus": "",
- "submissionTimestamp": "2019-08-25T11:12:46.1557298+00:00"
- }
-}
-```
-
-#### `monitoringService` = `Activity Log - Autoscale`
-
-**Sample values**
-```json
-{
- "alertContext": {
- "channels": "Admin, Operation",
- "claims": "{\"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn\":\"Microsoft.Insights/autoscaleSettings\"}",
- "caller": "Microsoft.Insights/autoscaleSettings",
- "correlationId": "<GUID>",
- "eventSource": "Autoscale",
- "eventTimestamp": "2019-08-21T16:17:47.1551167+00:00",
- "eventDataId": "<GUID>",
- "level": "Informational",
- "operationName": "Microsoft.Insights/AutoscaleSettings/Scaleup/Action",
- "operationId": "<GUID>",
- "properties": {
- "description": "The autoscale engine attempting to scale resource '/subscriptions/d<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS' from 9 instances count to 10 instances count.",
- "resourceName": "/subscriptions/<GUID>/resourceGroups/voiceassistancedemo/providers/Microsoft.Compute/virtualMachineScaleSets/alexademo",
- "oldInstancesCount": "9",
- "newInstancesCount": "10",
- "activeAutoscaleProfile": "{\r\n \"Name\": \"Auto created scale condition\",\r\n \"Capacity\": {\r\n \"Minimum\": \"1\",\r\n \"Maximum\": \"10\",\r\n \"Default\": \"1\"\r\n },\r\n \"Rules\": [\r\n {\r\n \"MetricTrigger\": {\r\n \"Name\": \"Percentage CPU\",\r\n \"Namespace\": \"microsoft.compute/virtualmachinescalesets\",\r\n \"Resource\": \"/subscriptions/<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS\",\r\n \"ResourceLocation\": \"eastus\",\r\n \"TimeGrain\": \"PT1M\",\r\n \"Statistic\": \"Average\",\r\n \"TimeWindow\": \"PT5M\",\r\n \"TimeAggregation\": \"Average\",\r\n \"Operator\": \"GreaterThan\",\r\n \"Threshold\": 0.0,\r\n \"Source\": \"/subscriptions/<GUID>/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachineScaleSets/testVMSS\",\r\n \"MetricType\": \"MDM\",\r\n \"Dimensions\": [],\r\n \"DividePerInstance\": false\r\n },\r\n \"ScaleAction\": {\r\n \"Direction\": \"Increase\",\r\n \"Type\": \"ChangeCount\",\r\n \"Value\": \"1\",\r\n \"Cooldown\": \"PT1M\"\r\n }\r\n }\r\n ]\r\n}",
- "lastScaleActionTime": "Wed, 21 Aug 2019 16:17:47 GMT"
- },
- "status": "Succeeded",
- "submissionTimestamp": "2019-08-21T16:17:47.2410185+00:00"
- }
-}
-```
-
-#### `monitoringService` = `Activity Log - Security`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "channels": "Operation",
- "correlationId": "<GUID>",
- "eventSource": "Security",
- "eventTimestamp": "2019-08-26T08:34:14+00:00",
- "eventDataId": "<GUID>",
- "level": "Informational",
- "operationName": "Microsoft.Security/locations/alerts/activate/action",
- "operationId": "<GUID>",
- "properties": {
- "threatStatus": "Quarantined",
- "category": "Virus",
- "threatID": "2147519003",
- "filePath": "C:\\AlertGeneration\\test.eicar",
- "protectionType": "Windows Defender",
- "actionTaken": "Blocked",
- "resourceType": "Virtual Machine",
- "severity": "Low",
- "compromisedEntity": "testVM",
- "remediationSteps": "[\"No user action is necessary\"]",
- "attackedResourceType": "Virtual Machine"
- },
- "status": "Active",
- "submissionTimestamp": "2019-08-26T09:28:58.3019107+00:00"
- }
-}
-```
-
-#### `monitoringService` = `ServiceHealth`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "authorization": null,
- "channels": 1,
- "claims": null,
- "caller": null,
- "correlationId": "f3cf2430-1ee3-4158-8e35-7a1d615acfc7",
- "eventSource": 2,
- "eventTimestamp": "2019-06-24T11:31:19.0312699+00:00",
- "httpRequest": null,
- "eventDataId": "<GUID>",
- "level": 3,
- "operationName": "Microsoft.ServiceHealth/maintenance/action",
- "operationId": "<GUID>",
- "properties": {
- "title": "Azure Synapse Analytics Scheduled Maintenance Pending",
- "service": "Azure Synapse Analytics",
- "region": "East US",
- "communication": "<MESSAGE>",
- "incidentType": "Maintenance",
- "trackingId": "<GUID>",
- "impactStartTime": "2019-06-26T04:00:00Z",
- "impactMitigationTime": "2019-06-26T12:00:00Z",
- "impactedServices": "[{\"ImpactedRegions\":[{\"RegionName\":\"East US\"}],\"ServiceName\":\"Azure Synapse Analytics\"}]",
- "impactedServicesTableRows": "<tr>\r\n<td align='center' style='padding: 5px 10px; border-right:1px solid black; border-bottom:1px solid black'>Azure Synapse Analytics</td>\r\n<td align='center' style='padding: 5px 10px; border-bottom:1px solid black'>East US<br></td>\r\n</tr>\r\n",
- "defaultLanguageTitle": "Azure Synapse Analytics Scheduled Maintenance Pending",
- "defaultLanguageContent": "<MESSAGE>",
- "stage": "Planned",
- "communicationId": "<GUID>",
- "maintenanceId": "<GUID>",
- "isHIR": "false",
- "version": "0.1.1"
- },
- "status": "Active",
- "subStatus": null,
- "submissionTimestamp": "2019-06-24T11:31:31.7147357+00:00",
- "ResourceType": null
- }
-}
-```
-
-#### `monitoringService` = `Resource Health`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "channels": "Admin, Operation",
- "correlationId": "<GUID>",
- "eventSource": "ResourceHealth",
- "eventTimestamp": "2019-06-24T15:42:54.074+00:00",
- "eventDataId": "<GUID>",
- "level": "Informational",
- "operationName": "Microsoft.Resourcehealth/healthevent/Activated/action",
- "operationId": "<GUID>",
- "properties": {
- "title": "This virtual machine is stopping and deallocating as requested by an authorized user or process",
- "details": null,
- "currentHealthStatus": "Unavailable",
- "previousHealthStatus": "Available",
- "type": "Downtime",
- "cause": "UserInitiated"
- },
- "status": "Active",
- "submissionTimestamp": "2019-06-24T15:45:20.4488186+00:00"
- }
-}
-```
-
-#### `monitoringService` = `Prometheus`
-
-**Sample values**
-
-```json
-{
- "alertContext": {
- "interval": "PT1M",
- "expression": "sql_up > 0",
- "expressionValue": "0",
- "for": "PT2M",
- "labels": {
- "Environment": "Prod",
- "cluster": "myCluster1"
- },
- "annotations": {
- "summary": "alert on SQL availability"
- },
- "ruleGroup": "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.AlertsManagement/prometheusRuleGroups/myRuleGroup"
- }
-}
-```
--
-## Next steps
--- Learn more about the [common alert schema](./alerts-common-schema.md).-- Learn [how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md).
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
Title: Alert schema definitions in Azure Monitor for Test Action Group description: Understand the common alert schema definitions for Azure Monitor for the Test Action group.- Last updated 01/14/2022 ms.revewer: jagummersall++ # Common alert schema definitions for Test Action Group (preview)
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
} ```
-#### monitoringService = Budget
-
-**Sample values**
-```json
-{
- "schemaId":"AIP Budget Notification",
- "data":{
- "SubscriptionName":"test-subscription",
- "SubscriptionId":"11111111-1111-1111-1111-111111111111",
- "EnrollmentNumber":"",
- "DepartmentName":"test-budgetDepartmentName",
- "AccountName":"test-budgetAccountName",
- "BillingAccountId":"",
- "BillingProfileId":"",
- "InvoiceSectionId":"",
- "ResourceGroup":"test-RG",
- "SpendingAmount":"1111.32",
- "BudgetStartDate":"11/17/2021 5:40:29 PM -08:00",
- "Budget":"10000",
- "Unit":"USD",
- "BudgetCreator":"email@domain.com",
- "BudgetName":"test-budgetName",
- "BudgetType":"Cost",
- "NotificationThresholdAmount":"8000.0"
- }
-}
-```
-
-#### monitoringService = Actual Cost Budget
+#### `monitoringService` = `CostAlerts`
+Actual cost budget
**Sample values** ```json
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
"signalType": null, "monitorCondition": null, "alertTargetIDs": null,
- "configurationItems": ["budgets"],
+ "configurationItems": [
+ "budgets"
+ ],
"originAlertId": null
- },
+ },
"alertContext": { "AlertCategory": "budgets", "AlertData": {
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
} } ```
-#### monitoringService = Forecasted Budget
+
+#### `monitoringService` = `CostAlerts`
+Forecasted cost budget
**Sample values** ```json
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
"signalType": null, "monitorCondition": null, "alertTargetIDs": null,
- "configurationItems": ["budgets"],
+ "configurationItems": [
+ "budgets"
+ ],
"originAlertId": null }, "alertContext": {
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
} ```
-#### monitoringService = Smart Alert
+#### `monitoringService` = `Smart Alert`
**Sample values** ```json
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
description: Understand the common alert schema, why you should use it, and how
Last updated 12/22/2022 ++ # Common alert schema
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Then you define these elements for the resulting alert actions by using:
1. (Optional) Depending on the signal type, you might see the **Split by dimensions** section.
- Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. Dimensions can be either number or string columns.
+ Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
If you select more than one dimension value, each time series that results from the combination will trigger its own alert and be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
Then you define these elements for the resulting alert actions by using:
If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert. You can select up to six more splittings for any columns that contain text or numbers.
+
+ > [!NOTE]
+ > Dimensions can **only** be number or string columns. For example, if you want to use a dynamic column as a dimension, you must convert it to a string first.
You can also decide *not* to split when you want a condition applied to multiple resources in the scope. An example would be if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80 percent.
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
Title: Create alerts with Dynamic Thresholds in Azure Monitor description: Create alerts with machine learning-based Dynamic Thresholds.--+++ Last updated 2/23/2022
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
To use [managed identity authentication (preview)](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
If you want to tweak the default resource requests and limits, you can use the advanced configurations settings: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.resources.daemonset.limits.cpu=150m amalogsagent.resources.daemonset.limits.memory=600Mi amalogsagent.resources.deployment.limits.cpu=1 amalogsagent.resources.deployment.limits.memory=750Mi
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.resources.daemonset.limits.cpu=150m amalogs.resources.daemonset.limits.memory=600Mi amalogs.resources.deployment.limits.cpu=1 amalogs.resources.deployment.limits.memory=750Mi
``` Check out the [resource requests and limits section of the Helm chart](https://github.com/microsoft/Docker-Provider/blob/ci_prod/charts/azuremonitor-containers/values.yaml) for the available configuration settings.
Checkout the [resource requests and limits section of Helm chart](https://github
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used. ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.logsettings.custommountpath=/home/data/docker
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=/home/data/docker
```
az k8s-extension show --name azuremonitor-containers --cluster-name \<cluster-na
Enable Container insights extension with managed identity authentication option using the workspace returned in the first step. ```cli
-az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
+az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
``` ## [Resource Manager](#tab/migrate-arm)
azure-monitor Azure Data Explorer Monitor Cross Service Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-cross-service-query.md
- Title: Cross service query between Azure Monitor and Azure Data Explorer
-description: Query Azure Data Explorer data through Azure Log Analytics tools vice versa to join and analyze all your data in one place.
--- Previously updated : 03/28/2022---
-# Cross service query - Azure Monitor and Azure Data Explorer
-Create cross service queries between [Azure Data Explorer](/azure/data-explorer/), [Application Insights](../app/app-insights-overview.md), and [Log Analytics](../logs/data-platform-logs.md).
-## Azure Monitor and Azure Data Explorer cross-service querying
-This experience enables you to [create cross service queries between Azure Data Explorer and Azure Monitor](/azure/data-explorer/query-monitor-data) and to [create cross service queries between Azure Monitor and Azure Data Explorer](./azure-monitor-data-explorer-proxy.md).
-
-For example, (querying Azure Data Explorer from Log Analytics):
-```kusto
-CustomEvents | where aField == 1
-| join (adx("Help/Samples").TableName | where secondField == 3) on $left.Key == $right.key
-```
-Where the outer query is querying a table in the workspace, and then joining with another table in an Azure Data Explorer cluster (in this case, clustername=help, databasename=samples) by using a new "adx()" function, like how you can do the same to query another workspace from inside query text.
-
-## Query exported Log Analytics data from Azure Blob storage account
-
-Exporting data from Azure Monitor to an Azure storage account enables low-cost retention and the ability to reallocate logs to different regions.
-
-Use Azure Data Explorer to query data that was exported from your Log Analytics workspaces. Once configured, supported tables that are sent from your workspaces to an Azure storage account will be available as a data source for Azure Data Explorer. [Query exported data from Azure Monitor using Azure Data Explorer](../logs/azure-data-explorer-query-storage.md).
--
->[!tip]
-> To export all data from your Log Analytics workspace to an Azure storage account or event hub, use the [Log Analytics workspace data export feature](/azure/data-explorer/query-monitor-data).
-
-## Next steps
-Learn how to:
-* [Query data in Azure Monitor from Azure Data Explorer](/azure/data-explorer/query-monitor-data).
-* [Query data in Azure Data Explorer from Azure Monitor](./azure-monitor-data-explorer-proxy.md).
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
+
+ Title: Migrate from Splunk to Azure Monitor Logs - Get started
+description: Plan the phases of your migration from Splunk to Azure Monitor Logs and get started importing, collecting, and analyzing log data.
++++ Last updated : 11/22/2022+
+#customer-intent: As an IT manager, I want to understand the steps required to migrate my Splunk deployment to Azure Monitor Logs so that I can decide whether to migrate and plan and execute my migration.
+++
+# Migrate from Splunk to Azure Monitor Logs
+
+[Azure Monitor Logs](../logs/data-platform-logs.md) is a cloud-based managed monitoring and observability service that provides many advantages in terms of cost management, scalability, flexibility, integration, and low maintenance overhead. The service is designed to handle large amounts of data and scale easily to meet the needs of organizations of all sizes.
+
+Azure Monitor Logs collects data from a wide variety of sources, including Windows Event logs, Syslog, and custom logs, to provide a unified view of all Azure and non-Azure resources. Using a sophisticated query language and curated visualizations, you can quickly analyze millions of records to identify, understand, and respond to critical patterns in your monitoring data.
+
+This article explains how to migrate your Splunk, Splunk Cloud, or Splunk Enterprise deployment to Azure Monitor Logs for logging and log data analysis.
+
+For information on migrating your Security Information and Event Management (SIEM) deployment from Splunk Enterprise Security to Microsoft Sentinel, see [Plan your migration to Microsoft Sentinel](../../sentinel/migration.md).
+
+## Compare offerings
+
+|Splunk offering|Azure offering|
+|||
+|Splunk, Splunk Cloud|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments.|
+|Splunk Enterprise|[Azure Monitor](../overview.md) offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights-and-curated-visualizations), and [alerts and response](../overview.md#respond-to-critical-situations) capabilities. |
+|Splunk Enterprise Security|[Microsoft Sentinel](../../sentinel/overview.md) is a cloud-native solution that provides intelligent security analytics and threat intelligence across the enterprise.|
+## Introduction to key concepts
+
+|Azure Monitor Logs |Similar Splunk concept|Description|
+||||
+|[Log Analytics workspace](../logs/log-analytics-workspace-overview.md)|Namespace|A Log Analytics workspace is an environment in which you can collect log data from all Azure and non-Azure monitored resources. The data in the workspace is available for querying and analysis, Azure Monitor features, and other Azure services. Similar to a Splunk namespace, you can manage access to the data and artifacts, such as alerts and workbooks, in your Log Analytics workspace. |
+|[Table management](../logs/manage-logs-tables.md)|Indexing|Azure Monitor Logs ingests log data into tables in a managed [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) database. During ingestion, the service automatically indexes and timestamps the data, which means you can store various types of data and access the data quickly using Kusto Query Language (KQL) queries.<br/>Use table properties to manage the table schema, data retention and archive, and whether to store the data for occasional auditing and troubleshooting or for ongoing analysis and use by features and services.<br/>For a comparison of Splunk and Azure Data Explorer data handling and querying concepts, see [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet).|
+|[Basic and Analytics log data plans](../logs/basic-logs-configure.md)| |Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs.<br/>The **Analytics** plan makes log data available for interactive queries and use by features and services.<br/>The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance. |
+|[Archiving and quick access to archived data](../logs/data-retention-archive.md)|Data bucket states (hot, warm, cold, thawed), archiving, Dynamic Data Active Archive (DDAA) |The cost-effective archive option keeps your logs in your Log Analytics workspace and lets you access archived log data immediately, when you need it. Archive configuration changes are effective immediately because data isn't physically transferred to external storage. You can [restore archived data](../logs/restore.md) or run a [search job](../logs/search-jobs.md) to make a specific time range of archived data available for real-time analysis. |
+|[Access control](../logs/manage-access.md)|Role-based user access, permissions |Role-based access control lets you define which people in your organization have access to read, write, and perform operations in a Log Analytics workspace. You can configure permissions at the workspace level, at the resource level, and at the table level, so you have granular control over specific resources and log types.|
+|[Data transformations](../essentials/data-collection-transformations.md)|Transforms, field extractions |Transformations let you filter or modify incoming data before it's sent to a Log Analytics workspace. Use transformations to remove sensitive data, enrich data in your Log Analytics workspace, perform calculations, and filter out data you don't need to reduce data costs. |
+|[Data collection rules](../essentials/data-collection-rule-overview.md)|Data inputs, data pipeline|Define which data to collect, how to transform that data, and where to send the data. |
+|[Kusto Query Language (KQL)](/azure/kusto/query/)|Splunk Search Processing Language (SPL)|Azure Monitor Logs uses a large subset of KQL that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. Use the [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet) to translate your Splunk SPL knowledge to KQL. You can also [learn KQL with tutorials](../logs/get-started-queries.md) and [KQL training modules](/training/modules/analyze-logs-with-kql/).|
+|[Log Analytics](../logs/log-analytics-tutorial.md)|Splunk Web, Search app, Pivot tool|A tool in the Azure portal for editing and running log queries in Azure Monitor Logs. Log Analytics also provides a rich set of tools for exploring and visualizing data without using KQL.|
+|[Cost optimization](../../azure-monitor/best-practices-cost.md)||Azure Monitor provides [tools and best practices to help you understand, monitor, and optimize your costs](../../azure-monitor/best-practices-cost.md) based on your needs. |
+
+## 1. Understand your current usage
+
+Your current usage in Splunk will help you decide which [pricing tier](../logs/change-pricing-tier.md) to select in Azure Monitor and estimate your future costs:
+
+- [Follow Splunk guidance](https://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutSplunksLicenseUsageReportView) to view your usage report.
+- [Estimate Azure Monitor usage and costs](../usage-estimated-costs.md#estimate-azure-monitor-usage-and-costs) using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor).
+
+## 2. Set up a Log Analytics workspace
+
+Your Log Analytics workspace is where you collect log data from all of your monitored resources. You can retain data in a Log Analytics workspace for up to seven years. Low-cost data archiving within the workspace lets you access archived data quickly and easily when you need it, without the overhead of managing an external data store.
+
+We recommend collecting all of your log data in a single Log Analytics workspace for ease of management. If you're considering using multiple workspaces, see [Design a Log Analytics workspace architecture](../logs/workspace-design.md).
+
+To set up a Log Analytics workspace for data collection:
+
+1. [Create a Log Analytics workspace](../logs/quick-create-workspace.md). A minimal CLI sketch for this step follows this list.
+
+ Azure Monitor Logs creates Azure tables in your workspace automatically based on Azure services you use and [data collection settings](#4-collect-data) you define for Azure resources.
+
+1. Configure your Log Analytics workspace, including:
+ 1. [Pricing tier](../logs/change-pricing-tier.md).
+ 1. [Link your Log Analytics workspace to a dedicated cluster](../logs/availability-zones.md) to take advantage of advanced capabilities, if you're eligible, based on pricing tier.
+ 1. [Daily cap](../logs/daily-cap.md).
+ 1. [Data retention](../logs/data-retention-archive.md).
+ 1. [Network isolation](../logs/private-link-security.md).
+ 1. [Access control](../logs/manage-access.md).
+
+1. Use [table-level configuration settings](../logs/manage-logs-tables.md) to:
+ 1. [Define each table's log data plan](../logs/basic-logs-configure.md).
+
+     The default log data plan is Analytics, which lets you take advantage of Azure Monitor's rich monitoring and analytics capabilities. You can switch specific tables to the Basic log data plan to lower the cost of ingesting and retaining logs you need only for troubleshooting, debugging, auditing, and compliance.
+
+ 1. [Set a data retention and archiving policy for specific tables](../logs/data-retention-archive.md), if you need them to be different from the workspace-level data retention and archiving policy.
+ 1. [Modify the table schema](../logs/create-custom-table.md) based on your data model.
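As a minimal sketch of step 1, a workspace can also be created with the Azure CLI; the resource group, workspace name, and region below are placeholders:

```azurecli
az monitor log-analytics workspace create \
  --resource-group myResourceGroup \
  --workspace-name mySplunkMigrationWorkspace \
  --location eastus
```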
+
+## 3. Migrate Splunk artifacts to Azure Monitor
+
+To migrate most Splunk artifacts, you need to translate Splunk Processing Language (SPL) to Kusto Query Language (KQL). For more information, see the [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet) and [Get started with log queries in Azure Monitor](../logs/get-started-queries.md).
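As an example, here's a sketch of how a simple Splunk search might translate; the SPL line and field choices are illustrative only and aren't taken from the cheat sheet:

```kusto
// Illustrative translation (not from the official cheat sheet).
// Splunk SPL:  index=main sourcetype=syslog error | stats count by host
// KQL against the Syslog table in a Log Analytics workspace:
Syslog
| where SyslogMessage contains "error"
| summarize ErrorCount = count() by Computer
```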
+
+This table lists Splunk artifacts and links to guidance for setting up the equivalent artifacts in Azure Monitor:
+
+|Splunk artifact| Azure Monitor artifact|
+|||
+|Alerts|[Alert rules](../alerts/alerts-create-new-alert-rule.md)|
+|Alert actions|[Action groups](../alerts/action-groups.md)|
+|Apps|[Azure Monitor Insights](../insights/insights-overview.md) are a set of ready-to-use, curated monitoring experiences with pre-configured data inputs, searches, alerts, and visualizations to get you started analyzing data quickly and effectively. |
+|Dashboards|[Workbooks](../visualize/workbooks-overview.md)|
+|Lookups|Azure Monitor provides various ways to enrich data, including:<br>- [Data collection rules](../essentials/data-collection-rule-overview.md), which let you send data from multiple sources to a Log Analytics workspace, and perform calculations and transformations before ingesting the data.<br>- KQL operators, such as the [join operator](/azure/data-explorer/kusto/query/joinoperator?pivots=azuremonitor), which combines data from different tables, and the [externaldata operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor), which returns data from external storage.<br>- Integration with services, such as [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning) or [Azure Event Hubs](/azure/event-hubs/event-hubs-about), to apply advanced machine learning and stream in additional data.|
+|Namespaces|You can grant or limit permission to artifacts in Azure Monitor based on [access control](../logs/manage-access.md) you define on your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) or [Azure resource groups](../../azure-resource-manager/management/manage-resource-groups-portal.md).|
+|Permissions|[Access management](../logs/manage-access.md)|
+|Reports|Azure Monitor offers a range of options for analyzing, visualizing, and sharing data, including:<br>- [Integration with Grafana](../visualize/grafana-plugin.md)<br>- [Insights](../insights/insights-overview.md)<br>- [Workbooks](../visualize/workbooks-overview.md)<br>- [Dashboards](../visualize/tutorial-logs-dashboards.md)<br>- [Integration with Power BI](../logs/log-powerbi.md)<br>- [Integration with Excel](../logs/log-excel.md)|
+|Searches|[Queries](../logs/log-query-overview.md)|
+|Source types|[Define your data model in your Log Analytics workspace](../logs/manage-logs-tables.md). Use [ingestion-time transformations](../essentials/data-collection-transformations.md) to filter, format, or modify incoming data.|
+|Data collection methods| See [Collect data](#4-collect-data) for Azure Monitor tools designed for specific resources.|
+
+For information on migrating Splunk SIEM artifacts, including detection rules and SOAR automation, see [Plan your migration to Microsoft Sentinel](../../sentinel/migration.md).
+## 4. Collect data
+
+Azure Monitor provides tools for collecting data from log [data sources](../data-sources.md) on Azure and non-Azure resources in your environment.
+
+To collect data from a resource:
+
+1. Set up the relevant data collection tool based on the table below.
+1. Decide which data you need to collect from the resource.
+1. Use [transformations](../essentials/data-collection-transformations.md) to remove sensitive data, enrich data or perform calculations, and filter out data you don't need, to reduce costs.
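As a sketch, a transformation is itself a KQL query that runs on the incoming stream, which is referenced as `source`; the column names below are hypothetical:

```kusto
// Minimal transformation sketch (column names are hypothetical).
// "source" is the incoming data; drop verbose records and remove a sensitive column
// before the data is written to the Log Analytics workspace.
source
| where SeverityLevel != "Verbose"
| project-away ClientIPAddress
```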
+
+This table lists the tools Azure Monitor provides for collecting data from various resource types.
+
+| Resource type | Data collection tool |Similar Splunk tool| Collected data |
+| | | |
+| **Azure** | [Diagnostic settings](../essentials/diagnostic-settings.md) | | **Azure tenant** - Azure Active Directory Audit Logs provide sign-in activity history and audit trail of changes made within a tenant.<br/>**Azure resources** - Logs and performance counters.<br/>**Azure subscription** - Service health records along with records on any configuration changes made to the resources in your Azure subscription. |
+| **Application** | [Application insights](../app/app-insights-overview.md) |Splunk Application Performance Monitoring| Application performance monitoring data. |
+| **Container** |[Container insights](../containers/container-insights-overview.md)|Splunk App for Infrastructure| Container performance data. |
+| **Operating system** | [Azure Monitor Agent](../vm/monitor-virtual-machine-agent.md) |Universal Forwarder, Heavy Forwarder | Monitoring data from the guest operating system of Azure and non-Azure virtual machines.|
+| **Non-Azure source** | [Logs Ingestion API](../logs/logs-ingestion-api-overview.md) |HTTP Event Collector (HEC)| File-based logs and any data you send to a [data collection endpoint](../essentials/data-collection-endpoint-overview.md) on a monitored resource.|
++
+## 5. Transition to Azure Monitor Logs
+
+A common approach is to transition to Azure Monitor Logs gradually, while maintaining historical data in Splunk. During this period, you can:
+
+- Use the [Log ingestion API](../logs/logs-ingestion-api-overview.md) to ingest data from Splunk.
+- Use [Log Analytics workspace data export](../logs/logs-data-export.md) to export data out of Azure Monitor.
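For instance, a continuous export rule for selected tables can be created with the Azure CLI; the names below are placeholders, and the destination is the resource ID of your storage account or event hub:

```azurecli
az monitor log-analytics workspace data-export create \
  --resource-group myResourceGroup \
  --workspace-name mySplunkMigrationWorkspace \
  --name export-to-storage \
  --tables Syslog SecurityEvent \
  --destination "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```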
+
+To export your historical data from Splunk:
+
+1. Use one of the [Splunk export methods](https://docs.splunk.com/Documentation/Splunk/8.2.5/Search/Exportsearchresults) to export data in CSV format.
+1. To collect the exported data:
+ 1. Use Azure Monitor Agent to collect the data you export from Splunk, as described in [Collect text logs with Azure Monitor Agent](../agents/data-collection-text-log.md).
+
+ or
+
+ 1. Collect the exported data directly with the Logs Ingestion API, as described in [Send data to Azure Monitor Logs by using a REST API](../logs/tutorial-logs-ingestion-api.md).
++
+## Next steps
+
+- Learn more about using [Log Analytics](../logs/log-analytics-overview.md) and the [Log Analytics Query API](../logs/api/overview.md).
+- [Enable Sentinel on your Log Analytics workspace](../../sentinel/quickstart-onboard.md).
+- Take the [Analyze logs in Azure Monitor with KQL training module](/training/modules/analyze-logs-with-kql/).
+++
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 01/05/2023 Last updated : 01/23/2023 # Move operation support for resources
Before starting your move operation, review the [checklist](./move-resource-grou
> | remoterenderingaccounts | **Yes** | **Yes** | No | > | spatialanchorsaccounts | **Yes** | **Yes** | No |
+## Microsoft.MobileNetwork
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | - |
+> | mobileNetworks | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | mobileNetworks / dataNetworks | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | mobileNetworks / simPolicies | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | mobileNetworks / sites | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | mobileNetworks / slices | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | packetCoreControlPlanes | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | packetCoreControlPlanes / packetCoreDataPlanes | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | sims | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | simGroups | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | simGroups / sims | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+> | packetCoreControlPlaneVersions | No | No | Yes<br><br>[Move your private mobile network resources to a different region](../../private-5g-core/region-move-private-mobile-network-resources.md) |
+ ## Microsoft.NetApp > [!div class="mx-tableFixed"]
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
-# Import your content from the trial account
+# Import content from your trial account to a regular account
-When creating a new ARM-based account, you have an option to import your content from the trial account into the new ARM-based account free of charge.
+If you would like to transition from the Video Indexer trial account experience to a regular paid account, Video Indexer allows you to import the content from your trial account into your new regular account at no cost.
-## Considerations
+When might you want to switch from a trial to a regular account?
-Review the following considerations.
+* If you have used up the free trial minutes and want to continue indexing.
+* You are ready to start using Video Indexer for production workloads.
+* You want an experience which doesn't have minute, support, or SLA limitations.
+## Create a new ARM account for the import
+
+* First, create the regular account; it must already exist and be available before you perform the import. Azure Video Indexer accounts are Azure Resource Manager (ARM) based, and account creation can be performed through the Azure portal (see [Create an account with the Azure portal](create-account-portal.md)) or API (see [Create accounts with API](/rest/api/videoindexer/stable/accounts)).
+* The target ARM-based account has to be an empty account that has not yet been used to index any media files.
* Import from trial can be performed only once per trial account.
-* The target ARM-based account needs to be created and available before import is assigned.
-* Target ARM-based account has to be an empty account (never indexed any media files).
## Import your data
To import your data, follow the steps:
3. Click the **Import content to an ARM-based account**. 4. From the dropdown menu choose the ARM-based account you wish to import the data to.
- * If the account ID isn't showing, you can copy and paste the account ID from Azure portal or the account list, on the side blade in the Azure Video Indexer Portal.
+ * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or from the list of accounts under the User account blade at the top right of the Azure Video Indexer Portal.
+
5. Click **Import content** :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
-All media and content model customizations will be copied from the trial account into the new ARM-based account.
-
-## Next steps
+All media, as well as your customized content model, will be copied from the trial account into the new ARM-based account.
-You can programmatically interact with your trial account and/or with your Azure Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
-You should use the same Azure AD user you used when connecting to Azure.
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions). > [!NOTE]
-> If you are already using "AzureVideoAnalyzerForMedia" Network Service Tag you may experience issues with your networking security group starting 9 January 2023. This is because we are moving to a new Security Tag label "VideoIndexer" that was unfortunately not launched to GA in the UI before removing the preceding "AzureVideoAnalyzerForMedia" tag. The mitigatation is to remove the old tag from your configuration. We will update this document page + release notes once the new tag will be available.
+> If you are already using the "AzureVideoAnalyzerForMedia" Network Service Tag, you may experience issues with your network security group starting 9 January 2023. This is because we are moving to a new Security Tag label, "VideoIndexer". The mitigation is to remove the old "AzureVideoAnalyzerForMedia" tag from your configuration and deployment scripts and start using the "VideoIndexer" tag going forward.
Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
Use [Network Security Groups with Service Tags](../virtual-network/service-tags-
Currently we support the global service tag option for using service tags in your network security groups:
-**Use a single global AzureVideoAnalyzerForMedia service tag**: This option opens your virtual network to all IP addresses that the Azure Video Indexer service uses across all regions we offer our service. This method will allow for all IP addresses owned and used by Azure Video Indexer to reach your network resources behind the NSG.
+**Use a single global VideoIndexer service tag**: This option opens your virtual network to all IP addresses that the Azure Video Indexer service uses across all regions we offer our service. This method will allow for all IP addresses owned and used by Azure Video Indexer to reach your network resources behind the NSG.
> [!NOTE] > Currently we do not support IPs allocated to our services in the Switzerland North Region. These will be added soon. If your account is located in this region you cannot use Service Tags in your NSG today since these IPs are not in the Service Tag list and will be rejected by the NSG rule. ## Use a single global Azure Video Indexer service tag
-The easiest way to begin using service tags with your Azure Video Indexer account is to add the global tag `AzureVideoAnalyzerForMedia` to an NSG rule.
+The easiest way to begin using service tags with your Azure Video Indexer account is to add the global tag `VideoIndexer` to an NSG rule.
1. From the [Azure portal](https://portal.azure.com/), select your network security group. 1. Under **Settings**, select **Inbound security rules**, and then select **+ Add**. 1. From the **Source** drop-down list, select **Service Tag**.
-1. From the **Source service tag** drop-down list, select **AzureVideoAnalyzerForMedia**.
+1. From the **Source service tag** drop-down list, select **VideoIndexer**.
:::image type="content" source="./media/network-security/nsg-service-tag.png" alt-text="Add a service tag from the Azure portal":::
This tag contains the IP addresses of Azure Video Indexer services for all regio
## Using Azure CLI
-You can also use Azure CLI to create a new or update an existing NSG rule and add the **AzureVideoAnalyzerForMedia** service tag using the `--source-address-prefixes`. For a full list of CLI commands and parameters see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest&preserve-view=true)
+You can also use the Azure CLI to create a new NSG rule or update an existing one and add the **VideoIndexer** service tag using the `--source-address-prefixes` parameter. For a full list of CLI commands and parameters, see [az network nsg](/cli/azure/network/nsg/rule?view=azure-cli-latest&preserve-view=true).
Example of a security rule using service tags. For more details, visit https://aka.ms/servicetags
-`az network nsg rule create -g MyResourceGroup --nsg-name MyNsg -n MyNsgRuleWithTags --priority 400 --source-address-prefixes AzureVideoAnalyzerForMedia --destination-address-prefixes '*' --destination-port-ranges '*' --direction Inbound --access Allow --protocol Tcp --description "Allow from VideoAnalyzerForMedia"`
+`az network nsg rule create -g MyResourceGroup --nsg-name MyNsg -n MyNsgRuleWithTags --priority 400 --source-address-prefixes VideoIndexer --destination-address-prefixes '*' --destination-port-ranges '*' --direction Inbound --access Allow --protocol Tcp --description "Allow traffic from Video Indexer"`
## Next steps
backup Offline Backup Azure Data Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-azure-data-box.md
Title: Offline backup by using Azure Data Box description: Learn how you can use Azure Data Box to seed large initial backup data offline from the MARS Agent to a Recovery Services vault. - Previously updated : 1/27/2020+ Last updated : 1/23/2023++++ # Azure Backup offline backup by using Azure Data Box
+This article describes how you can use Azure Data Box to seed large initial backup data offline from the MARS Agent to a Recovery Services vault.
+ You can use [Azure Data Box](../databox/data-box-overview.md) to seed your large initial Microsoft Azure Recovery Services (MARS) backups offline (without using network) to a Recovery Services vault. This process saves time and network bandwidth that would otherwise be consumed moving large amounts of backup data online over a high-latency network. Offline backup based on Azure Data Box provides two distinct advantages over [offline backup based on the Azure Import/Export service](./backup-azure-backup-import-export.md): - There's no need to procure your own Azure-compatible disks and connectors. Azure Data Box ships the disks associated with the selected [Data Box SKU](https://azure.microsoft.com/services/databox/data/). - Azure Backup (MARS Agent) can directly write backup data onto the supported SKUs of Azure Data Box. This capability eliminates the need for you to provision a staging location for your initial backup data. You also don't need utilities to format and copy that data onto the disks.
-## Azure Data Box with the MARS Agent
+## Support matrix
-This article explains how you can use Azure Data Box to seed large initial backup data offline from the MARS Agent to a Recovery Services vault.
+This section explains the supported scenarios.
-## Supported platforms
+### Supported platforms
The process to seed data from the MARS Agent by using Azure Data Box is supported on the following Windows SKUs.
The process to seed data from the MARS Agent by using Azure Data Box is supporte
| Windows Server 2008 R2 SP1 64 bit | Standard, Enterprise, Datacenter, Foundation | | Windows Server 2008 SP2 64 bit | Standard, Enterprise, Datacenter |
-## Backup data size and supported Data Box SKUs
+### Backup data size and supported Data Box SKUs
| Backup data size (post-compression by MARS)* per server | Supported Azure Data Box SKU | | | |
The offline backup process using MARS and Azure Data Box requires the Data Box d
> [!IMPORTANT] > Don't select *BlobStorage* for the **Account kind**. The MARS Agent requires an account that supports page blobs, which isn't supported when *BlobStorage* is selected. Select **Storage V2 (general purpose v2)** as the **Account kind** when you create the target storage account for your Azure Data Box job.
-![Choose account kind in instance details](./media/offline-backup-azure-data-box/instance-details.png)
+![Screenshot shows how to choose account kind in instance details.](./media/offline-backup-azure-data-box/instance-details.png)
## Install and set up the MARS Agent
To ensure you can mount your Data Box device as a Local System by using the NFS
1. Open the **Microsoft Azure Backup** application on your server. 1. On the **Actions** pane, select **Schedule Backup**.
- ![Select Schedule Backup](./media/offline-backup-azure-data-box/schedule-backup.png)
+ ![Screenshot shows how to select schedule backup.](./media/offline-backup-azure-data-box/schedule-backup.png)
1. Follow the steps in the **Schedule Backup Wizard**. 1. Add items by selecting the **Add Items** button. Keep the total size of the items within the [size limits supported by the Azure Data Box SKU](#backup-data-size-and-supported-data-box-skus) that you ordered and received.
- ![Add items to backup](./media/offline-backup-azure-data-box/add-items.png)
+ ![Screenshot shows how to add items to backup.](./media/offline-backup-azure-data-box/add-items.png)
1. Select the appropriate backup schedule and retention policy for **Files and Folders** and **System State**. System state is applicable only for Windows Servers and not for Windows clients. 1. On the **Choose Initial Backup Type (Files and Folders)** page of the wizard, select the option **Transfer using Microsoft Azure Data Box disks** and select **Next**.
- ![Choose initial backup type](./media/offline-backup-azure-data-box/initial-backup-type.png)
+ ![Screenshot shows how to choose initial backup type.](./media/offline-backup-azure-data-box/initial-backup-type.png)
1. Sign in to Azure when prompted by using the user credentials that have Owner access on the Azure subscription. After you succeed in doing so, you should see a page that resembles this one.
- ![Create resources and apply required permissions](./media/offline-backup-azure-data-box/creating-resources.png)
+ ![Screenshot shows how to create resources and apply required permissions.](./media/offline-backup-azure-data-box/creating-resources.png)
The MARS Agent then fetches the Data Box jobs in the subscription that are in the Delivered state.
- ![Fetch Data Box jobs for subscription ID](./media/offline-backup-azure-data-box/fetching-databox-jobs.png)
+ ![Screenshot shows how to fetch Data Box jobs for subscription ID.](./media/offline-backup-azure-data-box/fetching-databox-jobs.png)
1. Select the correct Data Box order for which you've unpacked, connected, and unlocked your Data Box disk. Select **Next**.
- ![Select Data Box orders](./media/offline-backup-azure-data-box/select-databox-order.png)
+ ![Screenshot shows how to select Data Box orders.](./media/offline-backup-azure-data-box/select-databox-order.png)
1. Select **Detect Device** on the **Data Box Device Detection** page. This action makes the MARS Agent scan for locally attached Azure Data Box disks and detect them.
- ![Data Box Device Detection](./media/offline-backup-azure-data-box/databox-device-detection.png)
+ ![Screenshot shows the Data Box Device Detection.](./media/offline-backup-azure-data-box/databox-device-detection.png)
If you connected the Azure Data Box instance as a network share (because of unavailability of USB ports or because you ordered and mounted the 100-TB Data Box device), detection fails at first. You're given the option to enter the network path to the Data Box device.
- ![Enter the network path](./media/offline-backup-azure-data-box/enter-network-path.png)
+ ![Screenshot shows how to enter the network path.](./media/offline-backup-azure-data-box/enter-network-path.png)
>[!IMPORTANT] > Provide the network path to the root directory of the Azure Data Box disk. This directory must contain a directory by the name *PageBlob*. >
- >![Root directory of Azure Data Box disk](./media/offline-backup-azure-data-box/root-directory.png)
+ >![Screenshot shows the root directory of Azure Data Box disk.](./media/offline-backup-azure-data-box/root-directory.png)
> >For example, if the path of the disk is `\\mydomain\myserver\disk1\` and *disk1* contains a directory called *PageBlob*, the path you enter on the MARS Agent wizard page is `\\mydomain\myserver\disk1\`. >
To ensure you can mount your Data Box device as a Local System by using the NFS
The following page confirms that the policy is saved successfully.
- ![Policy is saved successfully](./media/offline-backup-azure-data-box/policy-saved.png)
+ ![Screenshot shows that policy is saved successfully.](./media/offline-backup-azure-data-box/policy-saved.png)
1. Select **Close** on the previous page. 1. Select **Back Up Now** in the **Actions** pane of the MARS Agent console. Select **Back Up** on the wizard page.
- ![Back Up Now Wizard](./media/offline-backup-azure-data-box/backup-now.png)
+ ![Screenshot shows the Back Up Now wizard.](./media/offline-backup-azure-data-box/backup-now.png)
The MARS Agent starts backing up the data you selected to the Azure Data Box device. This process might take from several hours to a few days. The amount of time depends on the number of files and connection speed between the server with the MARS Agent and the Azure Data Box disk. After the backup of the data is finished, you'll see a page on the MARS Agent that resembles this one.
-![Backup progress shown](./media/offline-backup-azure-data-box/backup-progress.png)
+![Screenshot shows the Backup progress.](./media/offline-backup-azure-data-box/backup-progress.png)
## Post-backup steps
To see if your problem is the same as the one previously described, do one of th
Check to see if the following error message appears in the MAB console when you configured offline backup.
-![Unable to create Offline Backup policy for the current Azure account](./media/offline-backup-azure-data-box/unable-to-create-policy.png)
+![Screenshot shows that Offline Backup policy for the current Azure account isn't getting created.](./media/offline-backup-azure-data-box/unable-to-create-policy.png)
#### Step 2 of verification
From the server you're trying to configure for offline backup, perform the follo
3. Go to the Azure offline backup application mentioned in step 2. Select **Settings** > **Keys** > **Upload Public Key**. Upload the certificate you exported in the previous step.
- ![Upload public key](./media/offline-backup-azure-data-box/upload-public-key.png)
+ ![Screenshot shows the public key is uploaded.](./media/offline-backup-azure-data-box/upload-public-key.png)
4. In the server, open the registry by entering **regedit** in the run window.
From the server you're trying to configure for offline backup, perform the follo
7. To get the value of the thumbprint, double-click the certificate. Select the **Details** tab, and scroll down until you see the thumbprint field. Select **Thumbprint**, and copy the value.
- ![Thumbprint field of certificate](./media/offline-backup-azure-data-box/thumbprint-field.png)
+ ![Screenshot shows the thumbprint field of certificate.](./media/offline-backup-azure-data-box/thumbprint-field.png)
## Questions
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
To install .NET on your web and worker roles, include the .NET web installer as
## Add the .NET installer to your project To download the web installer for the .NET Framework, choose the version that you want to install:
+* [.NET Framework 4.8.1 web installer](https://go.microsoft.com/fwlink/?linkid=2215256)
* [.NET Framework 4.8 Web installer](https://go.microsoft.com/fwlink/?LinkId=2150985) * [.NET Framework 4.7.2 web installer](https://go.microsoft.com/fwlink/?LinkId=863262) * [.NET Framework 4.6.2 web installer](https://dotnet.microsoft.com/download/dotnet-framework/net462)
You can use startup tasks to perform operations before a role starts. Installing
REM ***** To install .NET 4.7.1 set the variable netfx to "NDP471" ***** https://go.microsoft.com/fwlink/?LinkId=852095 REM ***** To install .NET 4.7.2 set the variable netfx to "NDP472" ***** https://go.microsoft.com/fwlink/?LinkId=863262 REM ***** To install .NET 4.8 set the variable netfx to "NDP48" ***** https://dotnet.microsoft.com/download/thank-you/net48
+ REM ***** To install .NET 4.8.1 set the variable netfx to "NDP481" ***** https://go.microsoft.com/fwlink/?linkid=2215256
set netfx="NDP48" REM ***** Set script start timestamp *****
You can use startup tasks to perform operations before a role starts. Installing
set TEMP=%PathToNETFXInstall% REM ***** Setup .NET filenames and registry keys *****
+ if %netfx%=="NDP481" goto NDP481
if %netfx%=="NDP48" goto NDP48 if %netfx%=="NDP472" goto NDP472 if %netfx%=="NDP471" goto NDP471
You can use startup tasks to perform operations before a role starts. Installing
set netfxregkey="0x80EA8" goto logtimestamp
+ :NDP481
+ set "netfxinstallfile=NDP481-Web.exe"
+ set netfxregkey="0x82348"
+ goto logtimestamp
+
:logtimestamp REM ***** Setup LogFile with timestamp ***** md "%PathToNETFXInstall%\log"
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The models used by the Azure OpenAI service use natural language instructions an
There are three main approaches for in-context learning: few-shot, one-shot, and zero-shot. These approaches vary based on the amount of task-specific data that is given to the model:
-**Few-shot**: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. The following example shows a few-shot prompt where we provide multiple examples:
+**Few-shot**: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. The following example shows a few-shot prompt where we provide multiple examples (the model will generate the last answer):
``` Convert the questions to a command:
- Q: Ask Constance if we need some bread
+ Q: Ask Constance if we need some bread.
A: send-msg `find constance` Do we need some bread? Q: Send a message to Greg to figure out if things are ready for Wednesday. A: send-msg `find greg` Is everything ready for Wednesday?
- Q: Ask Ilya if we're still having our meeting this evening
+ Q: Ask Ilya if we're still having our meeting this evening.
A: send-msg `find ilya` Are we still having a meeting this evening?
- Q: Contact the ski store and figure out if I can get my skis fixed before I leave on Thursday
+ Q: Contact the ski store and figure out if I can get my skis fixed before I leave on Thursday.
A: send-msg `find ski store` Would it be possible to get my skis fixed before I leave on Thursday?
- Q: Thank Nicolas for lunch
+ Q: Thank Nicolas for lunch.
    A: send-msg `find nicolas` Thank you for lunch! Q: Tell Constance that I won't be home before 19:30 tonight - unmovable meeting. A: send-msg `find constance` I won't be home before 19:30 tonight. I have a meeting I can't move.
- Q: Tell John that I need to book an appointment at 10:30
+ Q: Tell John that I need to book an appointment at 10:30.
A: ```
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-flows.md
Signaling flows through the signaling controller. Media flows through the Media
:::image type="content" source="./media/call-flows/teams-communication-services-meeting.png" alt-text="Diagram showing Communication Services SDK and Teams Client in a scheduled Teams meeting.":::
+### Case 6: Early media
+Early media refers to media (for example, audio and video) that is exchanged before a particular session is accepted by the called user. If there is early media flow, the SBC must latch to the first endpoint that starts streaming media; media flow can start before candidates are nominated. The SBC should support sending DTMF during this phase to enable IVR and voicemail scenarios. If nominations haven't completed, the SBC should use the highest-priority path on which it has received connectivity checks.
## Next steps
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
| :-- | :- | :- | :- | : | : | | UK | Toll-Free | - | - | Public Preview | Public Preview\* | | UK | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* |
The tables below summarize current availability:
| :- | :-- | :- | :- | :- | : | | Ireland | Toll-Free | - | - | Public Preview | Public Preview\* | | Ireland | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* | | UK | Toll-Free | - | - | Public Preview | Public Preview\* |
The tables below summarize current availability:
| :- | :-- | :- | :- | :- | : | | Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | | Denmark | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* | | UK | Toll-Free | - | - | Public Preview | Public Preview\* |
The tables below summarize current availability:
| :- | :-- | :- | :- | :- | : | | Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
| UK | Toll-Free | - | - | Public Preview | Public Preview\* | | UK | Local | - | - | Public Preview | Public Preview\* |
The tables below summarize current availability:
| Sweden | Local | - | - | Public Preview | Public Preview\* | | Canada | Toll-Free | Public Preview | Public Preview | Public Preview | Public Preview\* | | Canada | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Local | - | - | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
container-apps Azure Arc Create Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-create-container-app.md
Next, add the required Azure CLI extensions.
```azurecli-interactive az extension add --upgrade --yes --name customlocation
-az extension remove --name containerapps
+az extension remove --name containerapp
az extension add -s https://download.microsoft.com/download/5/c/2/5c2ec3fc-bd2a-4615-a574-a1b7c8e22f40/containerapp-0.0.1-py2.py3-none-any.whl --yes ```
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Scaling is driven by three different categories of triggers:
With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales.
-In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent requests.
+In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second.
### Example
If you don't create a scale rule, the default scale rule is applied to your cont
| HTTP | 0 | 10 | > [!IMPORTANT]
-> Make sure you create a scale rule or set `minReplicas` to 1 or more if you don't enable ingress. If ingress is disabled and all you have is the default limits and rule, then your container app will scale to zero and have no way of starting back up.
+> Make sure you create a scale rule or set `minReplicas` to 1 or more if you don't enable ingress. If ingress is disabled and you don't define a `minReplicas` or a custom scale rule, then your container app will scale to zero and have no way of starting back up.
## Considerations
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
This article explains how to use the Azure Cosmos DB SDKs to delete all items by logical partition key value.
+> [!IMPORTANT]
+> Delete items by partition key value is in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Feature overview The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Cosmos SDK.
-Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted will not appear in the results of queries or read operations.
+Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted won't appear in the results of queries or read operations.
To help limit the resources used by this background task, the delete by partition key operation is constrained to consume at most 10% of the total available RU/s on the container each second.
When the delete by partition key operation is issued, only the documents that ex
### How is the delete by partition key operation prioritized among other operations against the container? By default, the delete by partition key value operation can consume up to a reserved fraction - 0.1, or 10% - of the overall RU/s on the resource. Any Request Units (RUs) in this bucket that are unused will be available for other non-background operations, such as reads, writes, and queries.
-For example, suppose you have provisioned 1000 RU/s on a container. There is an ongoing delete by partition key operation that consumes 100 RUs each second for 5 seconds. During each of these 5 seconds, there are 900 RUs available for non-background database operations. Once the delete operation is complete, all 1000 RU/s are now available again.
+For example, suppose you've provisioned 1000 RU/s on a container. There's an ongoing delete by partition key operation that consumes 100 RUs each second for 5 seconds. During each of these 5 seconds, there are 900 RUs available for non-background database operations. Once the delete operation is complete, all 1000 RU/s are now available again.
### Known issues
-For certain scenarios, the effects of a delete by partition key operation is not guaranteed to be immediately reflected. The effect may be partially seen as the operation progresses.
+For certain scenarios, the effects of a delete by partition key operation aren't guaranteed to be immediately reflected. The effect may be partially seen as the operation progresses.
- [Aggregate queries](query/aggregate-functions.md) that use the index (for example, COUNT queries) and are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete. - Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
See the following articles to learn about more SDK operations in Azure Cosmos DB
- [Query an Azure Cosmos DB container ](how-to-query-container.md) - [Transactional batch operations in Azure Cosmos DB using the .NET SDK
-](transactional-batch.md)
+](transactional-batch.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
In the `index.js`, add the following code to use the resource **endpoint** and *
Add the following code to use the [``CosmosClient.Databases.createDatabaseIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database. ### Create a container
cost-management-billing Add Change Subscription Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md
tags: billing
Previously updated : 12/06/2022 Last updated : 01/23/2023 # Add or change Azure subscription administrators - To manage access to Azure resources, you must have the appropriate administrator role. Azure has an authorization system called [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) with several built-in roles you can choose from. You can assign these roles at different scopes, such as management group, subscription, or resource group. By default, the person who creates a new Azure subscription can assign other users administrative access to a subscription.
-This article describes how add or change the administrator role for a user using Azure RBAC at the subscription scope.
+This article describes how to add or change the administrator role for a user using Azure RBAC at the subscription scope.
+
+This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md).
Microsoft recommends that you manage access to resources using Azure RBAC. However, if you are still using the classic deployment model and managing the classic resources by using [Azure Service Management PowerShell Module](/powershell/module/servicemanagement/azure.service), you'll need to use a classic administrator.
The billing administrator is the person who has permission to manage billing for
To identify accounts for which you're a billing administrator, visit the [Cost Management + Billing page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/Overview). Then select **All billing scopes** from the left-hand pane. The subscriptions page shows all the subscriptions where you're a billing administrator.
-If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then look under **Settings**. Select **Properties** and the account administrator of the subscription is shown in the **Account Admin** box.
+If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then look under **Settings**. Select **Properties** and the account administrator of the subscription is shown in the **Account Admin** box.
+
+If you don't see **Account Admin**, you have a Microsoft Customer Agreement account. Instead, [check your access to a Microsoft Customer Agreement](understand-mca-roles.md#check-access-to-a-microsoft-customer-agreement).
## Assign a subscription administrator
data-factory Change Data Capture Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/change-data-capture-troubleshoot.md
+
+ Title: Troubleshoot the change data capture resource
+
+description: Learn how to troubleshoot issues with the change data capture resource in Azure Data Factory.
++++ Last updated : 01/19/2023++++
+# Troubleshoot the Change data capture resource in Azure Data Factory
+
+This article provides suggestions on how to troubleshoot common problems with the change data capture resource in Azure Data Factory.
+
+## Issue: Trouble enabling native CDC in my SQL source.
+
+For sources in SQL, two sets of tables are available: tables with native SQL CDC enabled and tables with time-based incremental columns.
+
+Follow these steps to configure native CDC for a specific source table in your SQL database.
+
+Consider that you have the following table, with ID as the Primary Key. If a Primary Key is present in the schema, supports_net_changes is set to true by default. If not, configure it using the script in Query 3.
+
+**Query 1**
+```sql
+
+CREATE TABLE Persons (
+ ID int,
+ LastName varchar(255) NOT NULL,
+ FirstName varchar(255),
+ Age int,
+ Last_login DATETIME,
+ PRIMARY KEY (ID));
+
+```
+
+ > [!NOTE]
+ > Currently the ADF CDC resource only loads net changes for insert, update and delete operations.
+
+To enable CDC at the database level, execute the following query:
+
+**Query 2**
+
+```sql
+EXEC sys.sp_cdc_enable_db
+```
+To enable CDC at the table level, execute the following query:
+
+**Query 3**
+
+```sql
+EXEC sys.sp_cdc_enable_table
+ @source_schema = N'dbo'
+ , @source_name = N'Persons'
+ , @role_name = N'cdc_admin'
+ , @supports_net_changes = 1
+ , @captured_column_list = N'ID';
+```
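+
+To confirm that CDC is enabled before returning to the CDC resource configuration, you can query the SQL Server catalog views. The following check is a minimal sketch that assumes the `dbo.Persons` table from the queries above; it only reads metadata and makes no changes.
+
+```sql
+-- Confirm CDC is enabled at the database level
+SELECT name, is_cdc_enabled
+FROM sys.databases
+WHERE name = DB_NAME();
+
+-- Confirm CDC is enabled for the dbo.Persons table
+SELECT s.name AS schema_name, t.name AS table_name, t.is_tracked_by_cdc
+FROM sys.tables AS t
+JOIN sys.schemas AS s ON t.schema_id = s.schema_id
+WHERE s.name = N'dbo' AND t.name = N'Persons';
+```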
+
+## Issue: Tables are unavailable to select in the CDC resource configuration process.
+
+If your SQL source doesn't have SQL Server CDC with net_changes enabled or doesn't have any time-based incremental columns, then the tables in your source will be unavailable for selection.
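+
+If you're not sure whether a table has net changes support turned on, you can check the CDC metadata directly. This is a hedged example that queries the built-in `cdc.change_tables` catalog view; the capture instance names it returns (for example, `dbo_Persons` for the table used earlier) depend on how CDC was enabled in your database.
+
+```sql
+-- List CDC-enabled tables and whether net changes are supported
+SELECT capture_instance,
+       OBJECT_NAME(source_object_id) AS source_table,
+       supports_net_changes
+FROM cdc.change_tables;
+```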
+
+## Issue: The debug cluster is not available from a warm pool.
+
+The debug cluster isn't available from a warm pool, so expect a wait time on the order of one minute or more.
+
+## Issue: My CDC resource has both source and target linked services that use custom integration runtimes and it won't work.
+
+In factories with virtual networks, CDC resources will work fine if either the source or target linked service is tied to an auto-resolve integration runtime. If both the source and target linked services use custom integration runtimes, the CDC resource will not work.
+
+In non-virtual network factories, CDC resources requiring a virtual network will not work. This fix is in progress.
+
+## Issue: Creating a new linked service pointing to an Azure Key Vault linked service causes an error.
+
+If you create a new linked service using the CDC fly-out process that points to an Azure Key Vault linked service, the CDC resource will break. This fix is in progress.
+
+## Next steps
+- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md)
+- [Set up a change data capture resource](how-to-change-data-capture-resource.md)
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
+
+ Title: Change Data Capture Resource
+
+description: Learn more about the change data capture resource in Azure Data Factory.
+++++++ Last updated : 01/20/2023++
+# Change data capture resource overview
+
+Adapting to the cloud-first big data world can be incredibly challenging for data engineers who are responsible for building complex data integration and ETL pipelines.
+
+Azure Data Factory is introducing a new mechanism to make the life of a data engineer easier.
+
+By automatically detecting data changes at the source without requiring complex designing or coding, ADF is making it a breeze to scale these processes. Change Data Capture will now exist as a **new native top-level resource** in the Azure Data Factory studio where data engineers can quickly configure continuously running jobs to process big data at scale with extreme efficiency.
+
+The new Change Data Capture resource in ADF allows for full fidelity change data capture that continuously runs in near real-time through a guided configuration experience.
++
+## Supported data sources
+
+* Avro
+* Azure Cosmos DB (SQL API)
+* Azure SQL Database
+* Delimited Text
+* JSON
+* ORC
+* Parquet
+* SQL Server
+* XML
+
+## Supported targets
+
+* Avro
+* Azure SQL Database
+* Azure Synapse Analytics
+* Delimited Text
+* Delta
+* JSON
+* ORC
+* Parquet
+
+## Known limitations
+* Currently, when creating source/target mappings, each source and target is only allowed to be used once.
+* Continuous, real-time streaming is coming soon.
+* Allow schema drift is coming soon.
+
+For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md).
++
+## Next steps
+- [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md).
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
Title: Change Data Capture
+ Title: Change data capture
description: Learn about change data capture in Azure Data Factory and Azure Synapse Analytics.
Previously updated : 01/04/2023 Last updated : 01/23/2023 # Change data capture in Azure Data Factory and Azure Synapse Analytics
To learn more, see [Azure Data Factory overview](introduction.md) or [Azure Syna
When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. ADF provides multiple different ways for you to easily get delta data only from the last run.
+### Change Data Capture factory resource
+
+The easiest and quickest way to get started in data factory with CDC is through the factory-level Change Data Capture resource. From the main pipeline designer, click **New** under **Factory Resources** to create a new Change Data Capture. The CDC factory resource provides a configuration walk-through experience where you point to your sources and destinations, apply optional transformations, and then click **Start** to begin your data capture. With the CDC resource, you don't need to design pipelines or data flow activities, and the only billing is for 4 cores of General Purpose data flows while your data is being processed. You set a latency that ADF uses to wake up and look for changed data; that is the only time you're billed. The top-level CDC resource is also the ADF method of running your processes continuously. Pipelines in ADF are batch only, but the CDC resource can run continuously.
+ ### Native change data capture in mapping data flow The changed data including inserted, updated and deleted rows can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you'll see the changes that happened on the source database automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need of an Alter Row transformation because ADF is able to automatically detect the row markers.
You can always build your own delta data extraction pipeline for all ADF support
## Best Practices
-**Change data capture from databases:**
+**Change data capture from databases**
- Native change data capture is always recommended as the simplest way for you to get change data. It also brings much less burden on your source database when ADF extracts the change data for further processing. - If your database stores are not part of the ADF connector list with native change data capture support, we recommend you to check the auto incremental extraction option where you only need to input incremental column to capture the changes. ADF will take care of the rest including creating a dynamic query for delta loading and managing the checkpoint for each activity run. - Customer managed delta data extraction in pipeline covers all the ADF supported databases and give you the flexibility to control everything by yourself.
-**Change files capture from file based storages:**
+**Change files capture from file based storages**
- When you want to load data from Azure Blob Storage, Azure Data Lake Storage Gen2 or Azure Data Lake Storage Gen1, mapping data flow gives you the option to get only new or updated files with one simple click. It's the simplest and recommended way to achieve delta load from these file-based storages in mapping data flow. - You can get more [best practices](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/best-practices-of-how-to-use-adf-copy-activity-to-copy-new-files/ba-p/1532484).
The followings are the templates to use the change data capture in Azure Data Fa
## Next steps - [Learn how to use the checkpoint key in the data flow activity](control-flow-execute-data-flow-activity.md).
+- [Learn about the ADF Change Data Capture resource](concepts-change-data-capture-resource.md).
+- [Walk through building a top-level CDC artifact](how-to-change-data-capture-resource.md).
data-factory How To Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource.md
+
+ Title: Capture changed data with a change data capture resource
+description: This tutorial provides step-by-step instructions on how to capture changed data from ADLS Gen2 to SQL DB using a Change data capture resource.
+++++++ Last updated : 01/20/2023++
+# How to capture changed data from ADLS Gen2 to SQL DB using a Change data capture resource
+
+In this tutorial, you will use the Azure Data Factory user interface (UI) to create a new Change data capture resource that picks up changed data from an Azure Data Lake Storage (ADLS) Gen2 source to a SQL Database. The configuration pattern in this tutorial can be modified and expanded upon.
+
+In this tutorial, you follow these steps:
+* Create a change data capture resource.
+* Monitor change data capture activity.
+
+## Prerequisites
+
+* **Azure subscription.** If you don't have an Azure subscription, create a free Azure account before you begin.
+* **Azure storage account.** You use ADLS storage as a source data store. If you don't have a storage account, see Create an Azure storage account for steps to create one.
+* **Azure SQL Database.** You will use Azure SQL DB as a target data store. If you don't have a SQL DB, please create one in the Azure portal first before continuing the tutorial.
++
+## Create a change data capture artifact
+
+1. Navigate to the **Author** blade in your data factory. You will see a new top-level artifact under **Pipelines** called **Change data capture (preview)**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-2.png" alt-text="Screenshot of new top level artifact shown under Factory resources panel.":::
+
+2. To create a new **Change data capture**, hover over **Change data capture (preview)** until three dots appear, and then select **Change data capture actions**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-3.png" alt-text="Screenshot of Change data capture (preview) Actions after hovering on the new top-level artifact.":::
+
+3. Select **New change data capture (preview)**. This will open a flyout to begin the guided process.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-4.png" alt-text="Screenshot of a list of change data capture actions.":::
+
+4. You will then be prompted to name your CDC resource. By default, the name is set to "adfcdc" and increments by 1 for each new resource. You can replace this default name with your own.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-5.png" alt-text="Screenshot of the text box to update the name of the resource.":::
+
+5. Use the drop-down selection list to choose your data source. For this tutorial, we will use **DelimitedText**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-6.png" alt-text="Screenshot of the guided process flyout with source options in a drop-down selection menu.":::
+
+6. You will then be prompted to select a linked service. Create a new linked service or select an existing one.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-7.png" alt-text="Screenshot of the selection box to choose or create a new linked service.":::
+
+7. Use the **Browse** button to select your source data folder.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-8.png" alt-text="Screenshot of a folder icon to browse for a folder path.":::
+
+8. Once you've selected a folder path, click **Continue** to set your data target.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-9.png" alt-text="Screenshot of the continue button in the guided process to proceed to select data targets.":::
+
+> [!NOTE]
+> You can choose to add multiple source folders with the **+** button. The other sources must also use the same linked service that you've already selected.
+
+9. Then, select a **Target type** using the drop-down selection. For this tutorial, we will select **Azure SQL Database**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-10.png" alt-text="Screenshot of a drop-down selection menu of all data target types.":::
+
+10. You will then be prompted to select a linked service. Create a new linked service or select an existing one.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-11.png" alt-text="Screenshot of the selection box to choose or create a new linked service to your data target.":::
+
+11. Create new **Target table(s)** or select an existing **Target table(s)**. Use the checkbox to make your selection(s). The **Preview** button will allow you to view your table data.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-12.png" alt-text="Screenshot of the create new tables button and the selection boxes to choose tables for your target.":::
+
+12. Click **Continue** when you have finalized your selection(s).
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-13.png" alt-text="Screenshot of the continue button in the guided process to proceed to the next step.":::
+
+> [!NOTE]
+> You can choose multiple target tables from your SQL DB. Use the check boxes to select all targets.
+
+13. You will automatically land in a new change data capture tab, where you can configure your new resource.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-14.png" alt-text="Screenshot of the change data capture studio.":::
+
+14. A new mapping will automatically be created for you. You can update the **Source** and **Target** selections for your mapping by using the drop-down selection lists.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-15.png" alt-text="Screenshot of the source to target mapping in the change data capture studio.":::
+
+15. Once you've selected your tables, you should see that there are columns mapped. Select the **Column mappings** button to view the column mappings.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-16.png" alt-text="Screenshot of the mapping icon to view column mappings.":::
+
+16. Here you can view your column mappings. Use the drop-down lists to edit your column mappings for **Mapping method**, **Source column**, and **Target** column.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-17.png" alt-text="Screenshot of the column mappings.":::
+
+ You can add additional column mappings using the **New mapping** button. Use the drop-down lists to select the **Mapping method**, **Source column**, and **Target** column.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-18.png" alt-text="Screenshot of the Add new mapping icon to add new column mappings.":::
+
+17. When your mapping is complete, click the back arrow to return to the main canvas.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-19.png" alt-text="Screenshot of the arrow icon to return to the main change data capture canvas.":::
+
+> [!NOTE]
+> You can add additional source to target mappings in one CDC artifact. Use the edit button to select more data sources and targets. Then, click **New mapping** and use the drop-down lists to set a new source and target mapping.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-20.png" alt-text="Screenshot of the edit button to add new sources.":::
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-21.png" alt-text="Screenshot of the new mapping button to set a new source to target mapping.":::
+
+18. Once your mapping is complete, set your frequency using the **Set Latency** button.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-22.png" alt-text="Screenshot of the set frequency button at the top of the canvas.":::
+
+19. Select the cadence of your change data capture and click **Apply** to make the changes. By default, it will be set to 15 minutes.
+
+For example, if you select 30 minutes, your change data capture will run every 30 minutes, processing your source data and picking up any changed data since the last processed time.
++
+> [!NOTE]
+> The option to select Real-time to enable streaming data integration is coming soon.
+
+20. Once everything has been finalized, publish your changes.
++
+> [!NOTE]
+> If you do not publish your changes, you will not be able to start your CDC resource. The start button will be grayed out.
+
+21. Click **Start** to start running your **Change data capture**.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-25.png" alt-text="Screenshot of the start button at the top of the canvas.":::
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-26.png" alt-text="Screenshot of an actively running change data capture resource.":::
+
+
+## Monitor your Change data capture
+
+1. To monitor your change data capture, navigate to the **Monitor** blade or click the monitoring icon from the CDC designer.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-27.png" alt-text="Screenshot of the monitoring blade.":::
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-28.png" alt-text="Screenshot of the monitoring button at the top of the change data capture canvas.":::
+
+2. Select **Change data capture** to view your CDC resources.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-29.png" alt-text="Screenshot of the Change data capture monitoring section.":::
+
+3. Here you can see the **Source**, **Target**, **Status**, and **Last processed** time of your change data capture.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-30.png" alt-text="Screenshot of an overview of the change data capture monitoring page.":::
+
+4. Click the name of your CDC to see more details. You can see how many rows were read and written and other diagnostic information.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-31.png" alt-text="Screenshot of the detailed monitoring of a selected change data capture.":::
+
+> [!NOTE]
+> If you have multiple mappings set up in your Change data capture, each mapping will show as a different color. Click on the bar to see specific details for each mapping or use the Diagnostics at the bottom of the screen.
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-32.png" alt-text="Screenshot of the detailed monitoring page of a change data capture with multiple sources to target mappings.":::
+
+ :::image type="content" source="media/adf-cdc/change-data-capture-resource-33.png" alt-text="Screenshot of a detailed breakdown of each mapping in the change data capture artifact.":::
+
+
+## Next steps
+- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Check out our [What's New video archive](https://www.youtube.com/playlist?list=P
### Data flow
-SQL change data capture (CDC) incremental extract - supports numeric columns in mapping dataflow
+SQL change data capture (CDC) incremental extract - supports numeric columns in mapping dataflow [Learn more](connector-azure-sql-database.md?tabs=data-factory#source-transformation)
### Data movement
data-lake-analytics Data Lake Analytics Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-add-users.md
Title: Add users to an Azure Data Lake Analytics account description: Learn how to correctly add users to your Data Lake Analytics account using the Add User Wizard and Azure PowerShell. -+ Previously updated : 05/24/2018 Last updated : 01/20/2023 # Adding a user in the Azure portal
Last updated 05/24/2018
[!INCLUDE [retirement-flag](includes/retirement-flag.md)] ## Start the Add User Wizard
-1. Open your Azure Data Lake Analytics via https://portal.azure.com.
-2. Click **Add User Wizard**.
-3. In the **Select user** step, find the user you want to add. Click **Select**.
-4. the **Select role** step, pick **Data Lake Analytics Developer**. This role has the minimum set of permissions required to submit/monitor/manage U-SQL jobs. Assign to this role if the group is not intended for managing Azure services.
-5. In the **Select catalog permissions** step, select any additional databases that user will need access to. Read and Write Access to the default static database called "master" is required to submit jobs. When you are done, click **OK**.
-6. In the final step called **Assign selected permissions** review the changes the wizard will make. Click **OK**.
+1. Open your Azure Data Lake Analytics via https://portal.azure.com.
+2. Select **Add User Wizard**.
+3. In the **Select user** step, find the user you want to add. Select **Select**.
+4. In the **Select role** step, pick **Data Lake Analytics Developer**. This role has the minimum set of permissions required to submit/monitor/manage U-SQL jobs. Assign to this role if the group isn't intended for managing Azure services.
+5. In the **Select catalog permissions** step, select any other databases that user will need access to. Read and Write Access to the default static database called "master" is required to submit jobs. When you're done, select **OK**.
+6. In the final step called **Assign selected permissions** review the changes the wizard will make. Select **OK**.
## Configure ACLs for data folders+ Grant "R-X" or "RWX", as needed, on folders containing input data and output data.
+## Optionally, add the user to the Azure Data Lake Storage Gen1 role **Reader** role
-## Optionally, add the user to the Azure Data Lake Storage Gen1 role **Reader** role.
-1. Find your Azure Data Lake Storage Gen1 account.
-2. Click on **Users**.
-3. Click **Add**.
-4. Select an Azure role to assign this group.
-5. Assign to Reader role. This role has the minimum set of permissions required to browse/manage data stored in ADLSGen1. Assign to this role if the Group is not intended for managing Azure services.
-6. Type in the name of the Group.
-7. Click **OK**.
+1. Find your Azure Data Lake Storage Gen1 account.
+2. Select **Users**.
+3. Select **Add**.
+4. Select an Azure role to assign this group.
+5. Assign to Reader role. This role has the minimum set of permissions required to browse/manage data stored in ADLSGen1. Assign to this role if the Group isn't intended for managing Azure services.
+6. Type in the name of the Group.
+7. Select **OK**.
## Adding a user using PowerShell 1. Follow the instructions in this guide: [How to install and configure Azure PowerShell](/powershell/azure/). 2. Download the [Add-AdlaJobUser.ps1](https://github.com/Azure/AzureDataLake/blob/master/Samples/PowerShell/ADLAUsers/Add-AdlaJobUser.ps1) PowerShell script.
-3. Run the PowerShell script.
+3. Run the PowerShell script.
The sample command to give user access to submit jobs, view new job metadata, and view old metadata is: `Add-AdlaJobUser.ps1 -Account myadlsaccount -EntityToAdd 546e153e-0ecf-417b-ab7f-aa01ce4a7bff -EntityType User -FullReplication` - ## Next steps * [Overview of Azure Data Lake Analytics](data-lake-analytics-overview.md)
data-lake-analytics Data Lake Analytics Analyze Weblogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md
Title: Analyze Website logs using Azure Data Lake Analytics description: Learn how to analyze website logs using Azure Data Lake Analytics to run U-SQL functions and queries.-+ Previously updated : 12/05/2016 Last updated : 01/20/2023 # Analyze Website logs using Azure Data Lake Analytics Learn how to analyze website logs using Data Lake Analytics, especially on finding out which referrers ran into errors when they tried to visit the website.
Learn how to analyze website logs using Data Lake Analytics, especially on findi
* **Visual Studio 2015 or Visual Studio 2013**. * **[Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs)**.
- Once Data Lake Tools for Visual Studio is installed, you will see a **Data Lake** item in the **Tools** menu in Visual Studio:
+ Once Data Lake Tools for Visual Studio is installed, you'll see a **Data Lake** item in the **Tools** menu in Visual Studio:
![U-SQL Visual Studio menu](./media/data-lake-analytics-data-lake-tools-get-started/data-lake-analytics-data-lake-tools-menu.png) * **Basic knowledge of Data Lake Analytics and the Data Lake Tools for Visual Studio**. To get started, see: * [Develop U-SQL script using Data Lake tools for Visual Studio](data-lake-analytics-data-lake-tools-get-started.md). * **A Data Lake Analytics account.** See [Create an Azure Data Lake Analytics account](data-lake-analytics-get-started-portal.md).
-* **Install the sample data.** In the Azure Portal, open you Data Lake Analytics account and click **Sample Scripts** on the left menu, then click **Copy Sample Data**.
+* **Install the sample data.** In the Azure portal, open your Data Lake Analytics account and select **Sample Scripts** on the left menu, then select **Copy Sample Data**.
## Connect to Azure Before you can build and test any U-SQL scripts, you must first connect to Azure.
Before you can build and test any U-SQL scripts, you must first connect to Azure
### To connect to Data Lake Analytics 1. Open Visual Studio.
-2. Click **Data Lake > Options and Settings**.
-3. Click **Sign In**, or **Change User** if someone has signed in, and follow the instructions.
-4. Click **OK** to close the Options and Settings dialog.
+2. Select **Data Lake > Options and Settings**.
+3. Select **Sign In**, or **Change User** if someone has signed in, and follow the instructions.
+4. Select **OK** to close the Options and Settings dialog.
### To browse your Data Lake Analytics accounts 1. From Visual Studio, open **Server Explorer** by pressing **CTRL+ALT+S**.
-2. From **Server Explorer**, expand **Azure**, and then expand **Data Lake Analytics**. You shall see a list of your Data Lake Analytics accounts if there are any. You cannot create Data Lake Analytics accounts from the studio. To create an account, see [Get Started with Azure Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md) or [Get Started with Azure Data Lake Analytics using Azure PowerShell](data-lake-analytics-get-started-powershell.md).
+2. From **Server Explorer**, expand **Azure**, and then expand **Data Lake Analytics**. You'll see a list of your Data Lake Analytics accounts if there are any. You can't create Data Lake Analytics accounts from the studio. To create an account, see [Get Started with Azure Data Lake Analytics using Azure portal](data-lake-analytics-get-started-portal.md) or [Get Started with Azure Data Lake Analytics using Azure PowerShell](data-lake-analytics-get-started-powershell.md).
## Develop U-SQL application A U-SQL application is mostly a U-SQL script. To learn more about U-SQL, see [Get started with U-SQL](data-lake-analytics-u-sql-get-started.md).
You can add addition user-defined operators to the application. For more inform
### To create and submit a Data Lake Analytics job
-1. Click the **File > New > Project**.
+1. Select **File > New > Project**. 2. Select the U-SQL Project type. ![new U-SQL Visual Studio project](./media/data-lake-analytics-data-lake-tools-get-started/data-lake-analytics-data-lake-tools-new-project.png)
2. Select the U-SQL Project type. ![new U-SQL Visual Studio project](./media/data-lake-analytics-data-lake-tools-get-started/data-lake-analytics-data-lake-tools-new-project.png)
-3. Click **OK**. Visual studio creates a solution with a Script.usql file.
+3. Select **OK**. Visual Studio creates a solution with a Script.usql file. 4. Enter the following script into the Script.usql file:
4. Enter the following script into the Script.usql file:
You can add addition user-defined operators to the application. For more inform
6. Switch back to the first U-SQL script and next to the **Submit** button, specify your Analytics account.
+7. From **Solution Explorer**, right-click **Script.usql**, and then select **Build Script**. Verify the results in the Output pane.
+7. From **Solution Explorer**, right select **Script.usql**, and then select **Build Script**. Verify the results in the Output pane.
+8. From **Solution Explorer**, right-click **Script.usql**, and then select **Submit Script**.
+8. From **Solution Explorer**, right select **Script.usql**, and then select **Submit Script**.
-9. Verify the **Analytics Account** is the one where you want to run the job, and then click **Submit**. Submission results and job link are available in the Data Lake Tools for Visual Studio Results window when the submission is completed.
+9. Verify the **Analytics Account** is the one where you want to run the job, and then select **Submit**. Submission results and job link are available in the Data Lake Tools for Visual Studio Results window when the submission is completed.
-10. Wait until the job is completed successfully. If the job failed, it is most likely missing the source file. Please see the Prerequisite section of this tutorial. For additional troubleshooting information, see [Monitor and troubleshoot Azure Data Lake Analytics jobs](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md).
+10. Wait until the job is completed successfully. If the job failed, it's most likely missing the source file. See the Prerequisite section of this tutorial. For more troubleshooting information, see [Monitor and troubleshoot Azure Data Lake Analytics jobs](data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md).
When the job is completed, you shall see the following screen:
You can add addition user-defined operators to the application. For more inform
### To see the job output
-1. From **Server Explorer**, expand **Azure**, expand **Data Lake Analytics**, expand your Data Lake Analytics account, expand **Storage Accounts**, right-click the default Data Lake Storage account, and then click **Explorer**.
+1. From **Server Explorer**, expand **Azure**, expand **Data Lake Analytics**, expand your Data Lake Analytics account, expand **Storage Accounts**, right-click the default Data Lake Storage account, and then select **Explorer**.
2. Double-click **Samples** to open the folder, and then double-click **Outputs**. 3. Double-click **UnsuccessfulResponses.log**. 4. You can also double-click the output file inside the graph view of the job in order to navigate directly to the output.
You can add addition user-defined operators to the application. For more inform
## Next steps To get started with Data Lake Analytics using different tools, see:
-* [Get started with Data Lake Analytics using Azure Portal](data-lake-analytics-get-started-portal.md)
+* [Get started with Data Lake Analytics using Azure portal](data-lake-analytics-get-started-portal.md)
* [Get started with Data Lake Analytics using Azure PowerShell](data-lake-analytics-get-started-powershell.md) * [Get started with Data Lake Analytics using .NET SDK](./data-lake-analytics-get-started-cli.md)
data-lake-analytics Data Lake Analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-overview.md
Title: How to set up a CI/CD pipeline for Azure Data Lake Analytics
description: Learn how to set up continuous integration and continuous deployment for Azure Data Lake Analytics. Previously updated : 09/14/2018 Last updated : 01/20/2023 # How to set up a CI/CD pipeline for Azure Data Lake Analytics
Learn more about [U-SQL database project](data-lake-analytics-data-lake-tools-de
### Build a U-SQL project with the MSBuild command line
-First migrate the project and get the NuGet package. Then call the standard MSBuild command line with the following additional arguments to build your U-SQL project:
+First migrate the project and get the NuGet package. Then call the standard MSBuild command line with the following arguments to build your U-SQL project:
```console msbuild USQLBuild.usqlproj /p:USQLSDKPath=packages\Microsoft.Azure.DataLake.USQL.SDK.1.3.180615\build\runtime;USQLTargetType=SyntaxCheck;DataRoot=datarootfolder;/p:EnableDeployment=true
To add the NuGet package reference, right-click the solution in Visual Studio So
### Build U-SQL a database project with the MSBuild command line
-To build your U-SQL database project, call the standard MSBuild command line and pass the U-SQL SDK NuGet package reference as an additional argument. See the following example:
+To build your U-SQL database project, call the standard MSBuild command line and pass the U-SQL SDK NuGet package reference as another argument. See the following example:
```console msbuild DatabaseProject.usqldbproj /p:USQLSDKPath=packages\Microsoft.Azure.DataLake.USQL.SDK.1.3.180615\build\runtime
In addition to the command line, you can use Visual Studio Build or an MSBuild t
### U-SQL database project build output
-The build output for a U-SQL database project is a U-SQL database deployment package, named with the suffix `.usqldbpack`. The `.usqldbpack` package is a zip file that includes all DDL statements in a single U-SQL script in a DDL folder. It includes all **.dlls** and additional files for assembly in a temp folder.
+The build output for a U-SQL database project is a U-SQL database deployment package, named with the suffix `.usqldbpack`. The `.usqldbpack` package is a zip file that includes all DDL statements in a single U-SQL script in a DDL folder. It includes all **.dlls** and other files for assembly in a temp folder.
## Test table-valued functions and stored procedures
Take the following steps to set up a database deployment task in Azure Pipelines
|AzureSDKPath|The path to search dependent assemblies in the Azure SDK.|null|true| |Interactive|Whether or not to use interactive mode for authentication.|false|false| |ClientId|The Azure AD application ID required for non-interactive authentication.|null|Required for non-interactive authentication.|
-|Secrete|The secrete or password for non-interactive authentication. It should be used only in a trusted and secure environment.|null|Required for non-interactive authentication, or else use SecreteFile.|
-|SecreteFile|The file saves the secrete or password for non-interactive authentication. Make sure to keep it readable only by the current user.|null|Required for non-interactive authentication, or else use Secrete.|
-|CertFile|The file saves X.509 certification for non-interactive authentication. The default is to use client secrete authentication.|null|false|
+|Secret|The secret or password for non-interactive authentication. It should be used only in a trusted and secure environment.|null|Required for non-interactive authentication, or else use SecretFile.|
+|SecretFile|The file saves the secret or password for non-interactive authentication. Make sure to keep it readable only by the current user.|null|Required for non-interactive authentication, or else use Secret.|
+|CertFile|The file saves X.509 certification for non-interactive authentication. The default is to use client secret authentication.|null|false|
| JobPrefix | The prefix for database deployment of a U-SQL DDL job. | Deploy_ + DateTime.Now | false | ## Next steps
data-lake-analytics Data Lake Analytics Cicd Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-test.md
Title: How to test your Azure Data Lake Analytics code
description: 'Learn how to add test cases for U-SQL and extended C# code for Azure Data Lake Analytics.' Previously updated : 08/30/2019 Last updated : 01/20/2023 # Test your Azure Data Lake Analytics code
For a C# UDO test, make sure to reference the following assemblies, which are ne
- Microsoft.Analytics.Types - Microsoft.Analytics.UnitTest
-If you reference them through [the Nuget package Microsoft.Azure.DataLake.USQL.Interfaces](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.Interfaces/), make sure you add a NuGet Restore task in your build pipeline.
+If you reference them through [the NuGet package Microsoft.Azure.DataLake.USQL.Interfaces](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.Interfaces/), make sure you add a NuGet Restore task in your build pipeline.
## Next steps
data-lake-analytics Data Lake Analytics Data Lake Tools Data Skew Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-data-skew-solutions.md
Title: Resolve data-skew - Azure Data Lake Tools for Visual Studio description: Troubleshooting potential solutions for data-skew problems by using Azure Data Lake Tools for Visual Studio.-+ Previously updated : 12/16/2016 Last updated : 01/20/2023 # Resolve data-skew problems by using Azure Data Lake Tools for Visual Studio
Last updated 12/16/2016
## What is data skew?
-Briefly stated, data skew is an over-represented value. Imagine that you have assigned 50 tax examiners to audit tax returns, one examiner for each US state. The Wyoming examiner, because the population there is small, has little to do. In California, however, the examiner is kept very busy because of the state's large population.
- ![Data-skew problem example](./media/data-lake-analytics-data-lake-tools-data-skew-solutions/data-skew-problem.png)
+Briefly stated, data skew is an over-represented value. Imagine that you've assigned 50 tax examiners to audit tax returns, one examiner for each US state. The Wyoming examiner, because the population there is small, has little to do. In California, however, the examiner is kept busy because of the state's large population.
+ In our scenario, the data is unevenly distributed across all tax examiners, which means that some examiners must work more than others. In your own job, you frequently experience situations like the tax-examiner example here. In more technical terms, one vertex gets much more data than its peers, a situation that makes the vertex work more than the others and that eventually slows down an entire job. What's worse, the job might fail, because vertices might have, for example, a 5-hour runtime limitation and a 6-GB memory limitation.
Azure Data Lake Tools for Visual Studio can help detect whether your job has a d
### Option 1: Filter the skewed key value in advance
-If it does not affect your business logic, you can filter the higher-frequency values in advance. For example, if there are a lot of 000-000-000 in column GUID, you might not want to aggregate that value. Before you aggregate, you can write ΓÇ£WHERE GUID != ΓÇ£000-000-000ΓÇ¥ΓÇ¥ to filter the high-frequency value.
+If it doesn't affect your business logic, you can filter the higher-frequency values in advance. For example, if there are many 000-000-000 values in column GUID, you might not want to aggregate that value. Before you aggregate, you can write `WHERE GUID != "000-000-000"` to filter the high-frequency value.
### Option 2: Pick a different partition or distribution key
In the preceding example, if you want only to check the tax-audit workload all o
### Option 3: Add more partition or distribution keys
-Instead of using only _State_ as a partition key, you can use more than one key for partitioning. For example, consider adding _ZIP Code_ as an additional partition key to reduce data-partition sizes and distribute the data more evenly.
+Instead of using only _State_ as a partition key, you can use more than one key for partitioning. For example, consider adding _ZIP Code_ as another partition key to reduce data-partition sizes and distribute the data more evenly.
### Option 4: Use round-robin distribution
-If you cannot find an appropriate key for partition and distribution, you can try to use round-robin distribution. Round-robin distribution treats all rows equally and randomly puts them into corresponding buckets. The data gets evenly distributed, but it loses locality information, a drawback that can also reduce job performance for some operations. Additionally, if you are doing aggregation for the skewed key anyway, the data-skew problem will persist. To learn more about round-robin distribution, see the U-SQL Table Distributions section in [CREATE TABLE (U-SQL): Creating a Table with Schema](/u-sql/ddl/tables/create/managed/create-table-u-sql-creating-a-table-with-schema#dis_sch).
+If you can't find an appropriate key for partition and distribution, you can try to use round-robin distribution. Round-robin distribution treats all rows equally and randomly puts them into corresponding buckets. The data gets evenly distributed, but it loses locality information, a drawback that can also reduce job performance for some operations. Additionally, if you're doing aggregation for the skewed key anyway, the data-skew problem will persist. To learn more about round-robin distribution, see the U-SQL Table Distributions section in [CREATE TABLE (U-SQL): Creating a Table with Schema](/u-sql/ddl/tables/create/managed/create-table-u-sql-creating-a-table-with-schema#dis_sch).
## Solution 2: Improve the query plan ### Option 1: Use the CREATE STATISTICS statement
-U-SQL provides the CREATE STATISTICS statement on tables. This statement gives more information to the query optimizer about the data characteristics, such as value distribution, that are stored in a table. For most queries, the query optimizer already generates the necessary statistics for a high-quality query plan. Occasionally, you might need to improve query performance by creating additional statistics with CREATE STATISTICS or by modifying the query design. For more information, see the [CREATE STATISTICS (U-SQL)](/u-sql/ddl/statistics/create-statistics) page.
+U-SQL provides the CREATE STATISTICS statement on tables. This statement gives more information to the query optimizer about the data characteristics (for example, value distribution) that are stored in a table. For most queries, the query optimizer already generates the necessary statistics for a high-quality query plan. Occasionally, you might need to improve query performance by creating more statistics with CREATE STATISTICS or by modifying the query design. For more information, see the [CREATE STATISTICS (U-SQL)](/u-sql/ddl/statistics/create-statistics) page.
Code example:
CREATE STATISTICS IF NOT EXISTS stats_SampleTable_date ON SampleDB.dbo.SampleTab
If you want to sum the tax for each state, you must use GROUP BY state, an approach that doesn't avoid the data-skew problem. However, you can provide a data hint in your query to identify data skew in keys so that the optimizer can prepare an execution plan for you.
-Usually, you can set the parameter as 0.5 and 1, with 0.5 meaning not much skew and 1 meaning heavy skew. Because the hint affects execution-plan optimization for the current statement and all downstream statements, be sure to add the hint before the potential skewed key-wise aggregation.
+Usually, you can set the parameter as 0.5 and 1, with 0.5 meaning not much skew and 1 meaning heavy skew. Because the hint affects execution-plan optimization for the current statement and all downstream statements, be sure to add the hint before the potential skewed key-wise aggregation.
```usql SKEWFACTOR (columns) = x ```
-Provides a hint that the given columns have a skew factor x from 0 (no skew) through 1 (very heavy skew).
+Provides a hint that the given columns have a skew factor x from 0 (no skew) through 1 (heavy skew).
Code example:
Code example:
``` ### Option 3: Use ROWCOUNT
-In addition to SKEWFACTOR, for specific skewed-key join cases, if you know that the other joined row set is small, you can tell the optimizer by adding a ROWCOUNT hint in the U-SQL statement before JOIN. This way, optimizer can choose a broadcast join strategy to help improve performance. Be aware that ROWCOUNT does not resolve the data-skew problem, but it can offer some additional help.
+In addition to SKEWFACTOR, for specific skewed-key join cases, if you know that the other joined row set is small, you can tell the optimizer by adding a ROWCOUNT hint in the U-SQL statement before JOIN. This way, optimizer can choose a broadcast join strategy to help improve performance. Be aware that ROWCOUNT doesn't resolve the data-skew problem, but it can offer some extra help.
```usql OPTION(ROWCOUNT = n)
By default, a user-defined reducer runs in non-recursive mode, which means that
To improve performance, you can add an attribute in your code to define reducer to run in recursive mode. Then, the huge data sets can be distributed to multiple vertices and run in parallel, which speeds up your job.
-To change a non-recursive reducer to recursive, you need to make sure that your algorithm is associative. For example, the sum is associative, and the median is not. You also need to make sure that the input and output for reducer keep the same schema.
+To change a non-recursive reducer to recursive, you need to make sure that your algorithm is associative. For example, the sum is associative, and the median isn't. You also need to make sure that the input and output for reducer keep the same schema.
Attribute of recursive reducer:
public class TopNReducer : IReducer
### Option 2: Use row-level combiner mode, if possible
-Similar to the ROWCOUNT hint for specific skewed-key join cases, combiner mode tries to distribute huge skewed-key value sets to multiple vertices so that the work can be executed concurrently. Combiner mode canΓÇÖt resolve data-skew issues, but it can offer some additional help for huge skewed-key value sets.
+Similar to the ROWCOUNT hint for specific skewed-key join cases, combiner mode tries to distribute huge skewed-key value sets to multiple vertices so that the work can be executed concurrently. Combiner mode can't resolve data-skew issues, but it can offer some extra help for huge skewed-key value sets.
-By default, the combiner mode is Full, which means that the left row set and right row set cannot be separated. Setting the mode as Left/Right/Inner enables row-level join. The system separates the corresponding row sets and distributes them into multiple vertices that run in parallel. However, before you configure the combiner mode, be careful to ensure that the corresponding row sets can be separated.
+By default, the combiner mode is Full, which means that the left row set and right row set can't be separated. Setting the mode as Left/Right/Inner enables row-level join. The system separates the corresponding row sets and distributes them into multiple vertices that run in parallel. However, before you configure the combiner mode, be careful to ensure that the corresponding row sets can be separated.
The example that follows shows a separated left row set. Each output row depends on a single input row from the left, and it potentially depends on all rows from the right with the same key value. If you set the combiner mode as left, the system separates the huge left-row set into small ones and assigns them to multiple vertices.
-![Combiner mode illustration](./media/data-lake-analytics-data-lake-tools-data-skew-solutions/combiner-mode-illustration.png)
>[!NOTE] >If you set the wrong combiner mode, the combination is less efficient, and the results might be wrong.
data-lake-analytics Data Lake Analytics Data Lake Tools Develop Usql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-develop-usql-database.md
Title: Develop a U-SQL database project - Azure Data Lake description: Learn how to develop a U-SQL database using Azure Data Lake Tools for Visual Studio.---+ Previously updated : 07/03/2018 Last updated : 01/20/2023 # Use a U-SQL database project to develop a U-SQL database for Azure Data Lake
Right-click the U-SQL database project. The select **Add > New item**. You can f
For a non-assembly object (for example, a table-valued function), a new U-SQL script is created after you add a new item. You can start to develop the DDL statement for that object in the editor.
-For an assembly object, the tool provides a user-friendly UI editor that helps you register the assembly and deploy DLL files and other additional files. The following steps show you how to add an assembly object definition to the U-SQL database project:
+For an assembly object, the tool provides a user-friendly UI editor that helps you register the assembly and deploy DLL files and other files. The following steps show you how to add an assembly object definition to the U-SQL database project:
1. Add references to the C# project that include the UDO/UDAG/UDF for the U-SQL database project.
You can deploy a U-SQL database through a U-SQL database project or a .usqldbpac
1. Open **Server Explorer**. Then expand the **Azure Data Lake Analytics account** to which you want to deploy the database.
-1. Right click **U-SQL Databases**, and then choose **Deploy Database**.
+1. Right-click or press and hold **U-SQL Databases**, and then choose **Deploy Database**.
1. Set **Database Source** to the U-SQL database deployment package (.usqldbpack file) path.
-1. Enter the **Database Name** to create a database. If there is a database with the same name that already exists in the target Azure Data Lake Analytics account, all objects that are defined in the database project are created without recreating the database.
+1. Enter the **Database Name** to create a database. If there's a database with the same name that already exists in the target Azure Data Lake Analytics account, all objects that are defined in the database project are created without recreating the database.
![Data Lake Tools for Visual Studio--Deploy U-SQL database package](./media/data-lake-analytics-data-lake-tools-develop-usql-database/data-lake-tools-deploy-usql-database-package.png)
You can deploy a U-SQL database through a U-SQL database project or a .usqldbpac
### Deploy U-SQL database by using the SDK
-`PackageDeploymentTool.exe` provides the programming and command-line interfaces that help to deploy U-SQL databases. The SDK is included in the [U-SQL SDK Nuget package](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.SDK/), located at `build/runtime/PackageDeploymentTool.exe`.
+`PackageDeploymentTool.exe` provides the programming and command-line interfaces that help to deploy U-SQL databases. The SDK is included in the [U-SQL SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.SDK/), located at `build/runtime/PackageDeploymentTool.exe`.
[Learn more about the SDK and how to set up CI/CD pipeline for U-SQL database deployment](data-lake-analytics-cicd-overview.md).
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode Access Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode-access-resource.md
Title: Accessing resources with Data Lake Tools description: Learn how to use Azure Data Lake Tools for accessing Azure Data Lake Analytics resources. -+ Previously updated : 02/09/2018 Last updated : 01/23/2023 # Accessing resources with Azure Data Lake Tools
Expand your Azure subscription. Under the **U-SQL Databases** node, you can brow
Expand **U-SQL Databases**. You can create a database, schema, table, table type, index, or statistic by right-clicking the corresponding node, and then selecting **Script to Create** on the shortcut menu. On the opened script page, edit the script according to your needs. Then submit the job by right-clicking it and selecting **ADL: Submit Job**.
-After you finish creating the item, right-click the node and then select **Refresh** to show the item. You can also delete the item by right-clicking it and then selecting **Delete**.
+After you finish creating the item, right-click the node, and then select **Refresh** to show the item. You can also delete the item by right-clicking it and then selecting **Delete**.
!["Script to Create" command on the shortcut menu in the Data Lake explorer](./media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-code-explorer-script-create.png)
Browse to Blob storage:
Data Lake Tools opens the Azure Storage path in the Azure portal. You can find the path and preview the file from the web.
-## Additional features
+## More features
Data Lake Tools for VS Code supports the following features:
Data Lake Tools for VS Code supports the following features:
- **IntelliSense autocomplete on Data Lake Analytics metadata**: Data Lake Tools downloads the Data Lake Analytics metadata information locally. The IntelliSense feature automatically populates objects from the Data Lake Analytics metadata. These objects include the database, schema, table, view, table-valued function, procedures, and C# assemblies.
- ![IntelliSense metadata](./media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-auto-complete-metastore.png)
- - **IntelliSense error marker**: Data Lake Tools underlines editing errors for U-SQL and C#. - **Syntax highlights**: Data Lake Tools uses colors to differentiate items like variables, keywords, data types, and functions.
data-lake-analytics Data Lake Analytics Data Lake Tools Local Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-run.md
Title: Run Azure Data Lake U-SQL scripts on your local machine
description: Learn how to use Azure Data Lake Tools for Visual Studio to run U-SQL jobs on your local machine. Previously updated : 07/03/2018 Last updated : 01/20/2023 # Run U-SQL scripts on your local machine
data-lake-analytics Data Lake Analytics Data Lake Tools View Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-view-jobs.md
Title: Use Job Browser & Job View - Azure Data Lake Analytics description: This article describes how to use Job Browser and Job View for Azure Data Lake Analytics jobs. -- Previously updated : 08/02/2017 Last updated : 01/20/2023 # Use Job Browser and Job View for Azure Data Lake Analytics
data-lake-analytics Data Lake Analytics Debug U Sql Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-debug-u-sql-jobs.md
Title: Debug C# code for Azure Data Lake U-SQL jobs description: This article describes how to debug a U-SQL failed vertex using Azure Data Lake Tools for Visual Studio. -+ Previously updated : 11/30/2017 Last updated : 01/20/2023 # Debug user-defined C# code for failed U-SQL jobs [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-U-SQL provides an extensibility model using C#. In U-SQL scripts, it is easy to call C# functions and perform analytic functions that SQL-like declarative language does not support. To learn more for U-SQL extensibility, see [U-SQL programmability guide](./data-lake-analytics-u-sql-programmability-guide.md#use-user-defined-functions-udf).
+U-SQL provides an extensibility model using C#. In U-SQL scripts, it's easy to call C# functions and perform analytic functions that the SQL-like declarative language doesn't support. To learn more about U-SQL extensibility, see [U-SQL programmability guide](./data-lake-analytics-u-sql-programmability-guide.md#use-user-defined-functions-udf).
-In practice, any code may need debugging, but it is hard to debug a distributed job with custom code on the cloud with limited log files. [Azure Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs) provides a feature called **Failed Vertex Debug**, which helps you more easily debug the failures that occur in your custom code. When U-SQL job fails, the service keeps the failure state and the tool helps you to download the cloud failure environment to the local machine for debugging. The local download captures the entire cloud environment, including any input data and user code.
+In practice, any code may need debugging, but it's hard to debug a distributed job with custom code in the cloud with limited log files. [Azure Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs) provides a feature called **Failed Vertex Debug**, which helps you more easily debug the failures that occur in your custom code. When a U-SQL job fails, the service keeps the failure state, and the tool helps you download the cloud failure environment to the local machine for debugging. The local download captures the entire cloud environment, including any input data and user code.
The following video demonstrates Failed Vertex Debug in Azure Data Lake Tools for Visual Studio.
The following video demonstrates Failed Vertex Debug in Azure Data Lake Tools fo
When you open a failed job in Azure Data Lake Tools for Visual Studio, you see a yellow alert bar with detailed error messages in the error tab.
-1. Click **Download** to download all the required resources and input streams. If the download doesn't complete, click **Retry**.
+1. Select **Download** to download all the required resources and input streams. If the download doesn't complete, select **Retry**.
-2. Click **Open** after the download completes to generate a local debugging environment. A new debugging solution will be opened, and if you have existing solution opened in Visual Studio, please make sure to save and close it before debugging.
+2. Select **Open** after the download completes to generate a local debugging environment. A new debugging solution opens. If you have an existing solution open in Visual Studio, make sure to save and close it before debugging.
-![Azure Data Lake Analytics U-SQL debug visual studio download vertex](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-download-vertex.png)
## Configure the debugging environment > [!NOTE] > Before debugging, be sure to check **Common Language Runtime Exceptions** in the Exception Settings window (**Ctrl + Alt + E**).
-![Azure Data Lake Analytics U-SQL debug visual studio setting](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-clr-exception-setting.png)
In the newly launched Visual Studio instance, you may or may not find the user-defined C# source code: 1. [I can find my source code in the solution](#source-code-is-included-in-debugging-solution)
-2. [I cannot find my source code in the solution](#source-code-is-not-included-in-debugging-solution)
+2. [I can't find my source code in the solution](#source-code-is-not-included-in-debugging-solution)
### Source code is included in debugging solution
There are two cases in which the C# source code is captured:
If the source code is imported to the solution, you can use the Visual Studio debugging tools (watch, variables, etc.) to troubleshoot the problem:
-1. Press **F5** to start debugging. The code runs until it is stopped by an exception.
+1. Press **F5** to start debugging. The code runs until it's stopped by an exception.
2. Open the source code file and set breakpoints, then press **F5** to debug the code step by step.
- ![Azure Data Lake Analytics U-SQL debug exception](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-debug-exception.png)
+ :::image type="content" source="./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-debug-exception.png" alt-text="Screenshot of user-defined code with a breakpoint set, showing an exception at the highlighted line." lightbox="./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-debug-exception.png":::
### Source code is not included in debugging solution
-If the user code is not included in code-behind file, or you did not register the assembly with **debug info**, then the source code is not included automatically in the debugging solution. In this case, you need extra steps to add your source code:
+If the user code isn't included in code-behind file, or you didn't register the assembly with **debug info**, then the source code isn't included automatically in the debugging solution. In this case, you need extra steps to add your source code:
1. Right-click **Solution 'VertexDebug' > Add > Existing Project...** to find the assembly source code and add the project to the debugging solution.
- ![Azure Data Lake Analytics U-SQL debug add project](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-add-project-to-debug-solution.png)
+ :::image type="content" source="./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-add-project-to-debug-solution.png" alt-text="Screenshot of the solution explorer in Visual Studio, showing the VertexDebug Solution.":::
-2. Get the project folder path for **FailedVertexDebugHost** project.
+2. Get the project folder path for **FailedVertexDebugHost** project.
3. Right-click **the added assembly source code project > Properties**, select the **Build** tab on the left, and paste the copied path ending with \bin\debug as **Output > Output path**. The final output path is like `<DataLakeTemp path>\fd91dd21-776e-4729-a78b-81ad85a4fba6\loiu0t1y.mfo\FailedVertexDebug\FailedVertexDebugHost\bin\Debug\`.
- ![Azure Data Lake Analytics U-SQL debug set pdb path](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-set-pdb-path.png)
+ :::image type="content" source="./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-set-pdb-path.png" alt-text="Screenshot of build tab in Visual Studio code, with the outbound path highlighted under Output." lightbox="./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-set-pdb-path.png":::
After these settings, start debugging with **F5** and breakpoints. You can also use the Visual Studio debugging tools (watch, variables, etc.) to troubleshoot the problem.
After debugging, if the project completes successfully the output window shows t
`The Program 'LocalVertexHost.exe' has exited with code 0 (0x0).`
-![Azure Data Lake Analytics U-SQL debug succeed](./media/data-lake-analytics-debug-u-sql-jobs/data-lake-analytics-debug-succeed.png)
To resubmit the failed job:
data-lake-analytics Data Lake Analytics Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-disaster-recovery.md
Title: Disaster recovery guidance for Azure Data Lake Analytics description: Learn how to plan disaster recovery for your Azure Data Lake Analytics accounts.-+ Previously updated : 06/03/2019 Last updated : 01/20/2023 # Disaster recovery guidance for Azure Data Lake Analytics [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-Azure Data Lake Analytics is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need. You only pay for your job when it is running, making it cost-effective. This article provides guidance on how to protect your jobs from rare region-wide outages or accidental deletions.
+Azure Data Lake Analytics is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need. You only pay for your job when it's running, making it cost-effective. This article provides guidance on how to protect your jobs from rare region-wide outages or accidental deletions.
## Disaster recovery guidance
-When using Azure Data Lake Analytics, it's critical for you to prepare your own disaster recovery plan. This article helps guide you to build a disaster recovery plan. There are additional resources that can help you create your own plan:
+When using Azure Data Lake Analytics, it's critical for you to prepare your own disaster recovery plan. This article helps guide you to build a disaster recovery plan. There are more resources that can help you create your own plan:
+ - [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery) - [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service) ## Best practices and scenario guidance
-You can run a recurring U-SQL job in an ADLA account in a region that reads and writes U-SQL tables as well as unstructured data. Prepare for a disaster by taking these steps:
+You can run a recurring U-SQL job in an ADLA account in a region that reads and writes U-SQL tables and unstructured data. Prepare for a disaster by taking these steps:
1. Create ADLA and ADLS accounts in the secondary region that will be used during an outage.
data-lake-analytics Data Lake Analytics Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-get-started-cli.md
Title: Create and query Azure Data Lake Analytics - Azure CLI description: Learn how to use the Azure CLI to create an Azure Data Lake Analytics account and submit a U-SQL job. -+ Previously updated : 06/18/2017 Last updated : 01/20/2023 # Get started with Azure Data Lake Analytics using Azure CLI
This article describes how to use the Azure CLI command-line interface to create
Before you begin, you need the following items: * **An Azure subscription**. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
-* This article requires that you are running the Azure CLI version 2.0 or later. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+* This article requires that you're running the Azure CLI version 2.0 or later. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
## Sign in to Azure
To sign in to your Azure subscription:
az login ```
-You are requested to browse to a URL, and enter an authentication code. And then follow the instructions to enter your credentials.
+You're prompted to browse to a URL and enter an authentication code. Then follow the instructions to enter your credentials.
-Once you have logged in, the login command lists your subscriptions.
+Once you've logged in, the login command lists your subscriptions.
To use a specific subscription:
This U-SQL script reads the source data file using **Extractors.Tsv()**, and the
Don't modify the two paths unless you copy the source file into a different location. Data Lake Analytics creates the output folder if it doesn't exist.
-It is simpler to use relative paths for files stored in default Data Lake Store accounts. You can also use absolute paths. For example:
+It's simpler to use relative paths for files stored in default Data Lake Store accounts. You can also use absolute paths. For example:
```usql adl://<Data LakeStorageAccountName>.azuredatalakestore.net:443/Samples/Data/SearchLog.tsv
data-lake-analytics Data Lake Analytics Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md
Title: Create & query Azure Data Lake Analytics - PowerShell description: Use Azure PowerShell to create an Azure Data Lake Analytics account and submit a U-SQL job. -+ Previously updated : 05/04/2017 Last updated : 01/20/2023 # Get started with Azure Data Lake Analytics using Azure PowerShell
Before you begin this tutorial, you must have the following information:
## Log in to Azure
-This tutorial assumes you are already familiar with using Azure PowerShell. In particular, you need to know how to log in to Azure. See the [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) if you need help.
+This tutorial assumes you're already familiar with using Azure PowerShell. In particular, you need to know how to log in to Azure. See the [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) if you need help.
To log in with a subscription name:
To log in with a subscription name:
Connect-AzAccount -SubscriptionName "ContosoSubscription" ```
-Instead of the subscription name, you can also use a subscription id to log in:
+Instead of the subscription name, you can also use a subscription ID to log in:
```powershell Connect-AzAccount -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Export-AdlStoreItem -Account $adls -Path "/data.csv" -Destination "C:\data.csv"
## See also
-* To see the same tutorial using other tools, click the tab selectors on the top of the page.
+* To see the same tutorial using other tools, select the tab selectors on the top of the page.
* To learn U-SQL, see [Get started with Azure Data Lake Analytics U-SQL language](data-lake-analytics-u-sql-get-started.md). * For management tasks, see [Manage Azure Data Lake Analytics using Azure portal](data-lake-analytics-manage-use-portal.md).
data-lake-analytics Data Lake Analytics Manage Use Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-dotnet-sdk.md
Title: Manage Azure Data Lake Analytics using Azure .NET SDK description: This article describes how to use the Azure .NET SDK to write apps that manage Data Lake Analytics jobs, data sources, & users.-+ Previously updated : 06/18/2017 Last updated : 01/20/2023 # Manage Azure Data Lake Analytics using a .NET app
data-lake-analytics Data Lake Analytics Manage Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md
Title: Manage Azure Data Lake Analytics using Azure PowerShell description: This article describes how to use Azure PowerShell to manage Data Lake Analytics accounts, data sources, users, & jobs. -+ Previously updated : 06/29/2018 Last updated : 01/20/2023
Get-AdlJob -Account $adla -State Accepted,Compiling,New,Paused,Scheduling,Start
Use the `-Result` parameter to detect whether ended jobs completed successfully. It has these values:
-* Cancelled
+* Canceled
* Failed * None * Succeeded
data-lake-analytics Data Lake Analytics Manage Use Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-python-sdk.md
Title: Manage Azure Data Lake Analytics using Python description: This article describes how to use Python to manage Data Lake Analytics accounts, data sources, users, & jobs. -+ Previously updated : 06/08/2018 Last updated : 01/20/2023 # Manage Azure Data Lake Analytics using Python
Run this script to verify that the modules can be imported.
### Interactive user authentication with a pop-up
-This method is not supported.
+This method isn't supported.
### Interactive user authentication with a device code
credentials = DefaultAzureCredential()
### Noninteractive authentication with API and a certificate
-This method is not supported.
+This method isn't supported.
## Common script variables
adlaAcctClient.compute_policies.create_or_update(
## Next steps -- To see the same tutorial using other tools, click the tab selectors on the top of the page.
+- To see the same tutorial using other tools, select the tab selectors on the top of the page.
- To learn U-SQL, see [Get started with Azure Data Lake Analytics U-SQL language](data-lake-analytics-u-sql-get-started.md). - For management tasks, see [Manage Azure Data Lake Analytics using Azure portal](data-lake-analytics-manage-use-portal.md).
data-lake-analytics Data Lake Analytics Monitor And Troubleshoot Jobs Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md
Title: Monitor Azure Data Lake Analytics - Azure portal description: This article describes how to use the Azure portal to troubleshoot Azure Data Lake Analytics jobs. -+ Previously updated : 12/05/2016 Last updated : 01/20/2023
-# Monitor jobs in Azure Data Lake Analytics using the Azure Portal
+# Monitor jobs in Azure Data Lake Analytics using the Azure portal
[!INCLUDE [retirement-flag](includes/retirement-flag.md)] ## To see all the jobs
-1. From the Azure portal, click **Microsoft Azure** in the upper left corner.
+1. From the Azure portal, select **Microsoft Azure** in the upper left corner.
-2. Click the tile with your Data Lake Analytics account name. The job summary is shown on the **Job Management** tile.
+2. Select the tile with your Data Lake Analytics account name. The job summary is shown on the **Job Management** tile.
![Azure Data Lake Analytics job management](./media/data-lake-analytics-monitor-and-troubleshoot-tutorial/data-lake-analytics-job-management.png) The **Job Management** tile gives you a glance at the job status. Notice that there's a failed job.
-3. Click the **Job Management** tile to see the jobs. The jobs are categorized in **Running**, **Queued**, and **Ended**. You shall see your failed job in the **Ended** section. It shall be first one in the list. When you have a lot of jobs, you can click **Filter** to help you to locate jobs.
+3. Select the **Job Management** tile to see the jobs. The jobs are categorized in **Running**, **Queued**, and **Ended**. You'll see your failed job in the **Ended** section. It should be the first one in the list. When you have many jobs, you can select **Filter** to help you to locate jobs.
![Azure Data Lake Analytics filter jobs](./media/data-lake-analytics-monitor-and-troubleshoot-tutorial/data-lake-analytics-filter-jobs.png)
-4. Click the failed job from the list to open the job details:
+4. Select the failed job from the list to open the job details:
![Azure Data Lake Analytics failed job](./media/data-lake-analytics-monitor-and-troubleshoot-tutorial/data-lake-analytics-failed-job.png) Notice the **Resubmit** button. After you fix the problem, you can resubmit the job.
-5. Click highlighted part from the previous screenshot to open the error details. You shall see something like:
+5. Select the highlighted part from the previous screenshot to open the error details. You'll see something like this:
![Azure Data Lake Analytics failed job details](./media/data-lake-analytics-monitor-and-troubleshoot-tutorial/data-lake-analytics-failed-job-details.png)
- It tells you the source folder is not found.
+ It tells you the source folder isn't found.
-6. Click **Duplicate Script**.
+6. Select **Duplicate Script**.
7. Update the **FROM** path to: `/Samples/Data/SearchLog.tsv`
-8. Click **Submit Job**.
+8. Select **Submit Job**.
## Next steps
data-lake-analytics Data Lake Analytics Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-quota-limits.md
Title: Adjust quotas and limits in Azure Data Lake Analytics
description: Learn how to adjust and increase quotas and limits in Azure Data Lake Analytics (ADLA) accounts. Previously updated : 03/15/2018 Last updated : 01/20/2023 # Adjust quotas and limits in Azure Data Lake Analytics
Learn how to adjust and increase the quota and limits in Azure Data Lake Analyti
**Maximum number of ADLA accounts per subscription per region:** 5
-If you try to create a sixth ADLA account, you will get an error "You have reached the maximum number of Data Lake Analytics accounts allowed (5) in region under subscription name".
+If you try to create a sixth ADLA account, you'll get an error "You have reached the maximum number of Data Lake Analytics accounts allowed (5) in region under subscription name".
If you want to go beyond this limit, you can try these options:
This is the maximum number of jobs that can run concurrently in your account. Ab
1. Sign on to the [Azure portal](https://portal.azure.com). 2. Choose an existing ADLA account.
-3. Click **Properties**.
+3. Select **Properties**.
4. Adjust the values for **Maximum AUs**, **Maximum number of running jobs**, and **Job submission limits** to suit your needs. ## Increase maximum quota limits
You can find more information about Azure limits in the [Azure service-specific
2. Select the issue type **Quota**.
-3. Select your **Subscription** (make sure it is not a "trial" subscription).
+3. Select your **Subscription** (make sure it isn't a "trial" subscription).
4. Select quota type **Data Lake Analytics**.
data-lake-analytics Data Lake Analytics Schedule Jobs Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-schedule-jobs-ssis.md
Title: Schedule Azure Data Lake Analytics U-SQL jobs using SSIS description: Learn how to use SQL Server Integration Services to schedule U-SQL jobs with inline script or from U-SQL query files.-+ Previously updated : 07/17/2018 Last updated : 01/20/2023 # Schedule U-SQL jobs using SQL Server Integration Services (SSIS)
In SSIS package design view, add an **Azure Data Lake Store File System Task**,
![Screenshot that shows the File Connection Manager Editor with "Existing file" selected for "Usage type".](./media/data-lake-analytics-schedule-jobs-ssis/configure-file-connection-for-foreach-loop-container.png)
- 3. In **Connection Managers** view, right-click the file connection created just now, and choose **Properties**.
+   3. In **Connection Managers** view, right-click the file connection that you created, and choose **Properties**.
4. In the **Properties** window, expand **Expressions**, and set **ConnectionString** to the variable defined in Foreach Loop Container, for example, `@[User::FileName]`.
You can use U-SQL files in Azure Blob Storage by using **Azure Blob Download Tas
The steps are similar to [Scenario 2: Use U-SQL files in Azure Data Lake Store](#scenario-2-use-u-sql-files-in-azure-data-lake-store). Change the Azure Data Lake Store File System Task to Azure Blob Download Task. [Learn more about Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task).
-The control flow is like below.
+The control flow is like this:
![Use U-SQL files in Azure Data Lake Store](./media/data-lake-analytics-schedule-jobs-ssis/use-u-sql-files-in-azure-blob-storage.png)
Besides using U-SQL files stored in the cloud, you can also use files on your loc
1. Right-click **Connection Managers** in SSIS project and choose **New Connection Manager**.
-2. Select **File** type and click **Add...**.
+2. Select **File** type and select **Add...**.
3. Set **Usage type** to **Existing file**, and set the **File** to the file on the local machine.
Besides using U-SQL files stored in the cloud, you can also use files on your loc
4. Add **Azure Data Lake Analytics** Task and: 1. Set **SourceType** to **FileConnection**.
- 2. Set **FileConnection** to the File Connection created just now.
+   2. Set **FileConnection** to the file connection that you created.
5. Finish other configurations for Azure Data Lake Analytics Task.
In some cases, you may need to dynamically generate the U-SQL statements. You ca
3. Add **Azure Data Lake Analytics Task** and: 1. Set **SourceType** to **Variable**.
- 2. Set **SourceVariable** to the SSIS Variable created just now.
+   2. Set **SourceVariable** to the SSIS variable that you created.
4. Finish other configurations for Azure Data Lake Analytics Task. ## Scenario 6-Pass parameters to U-SQL script
-In some cases, you may want to dynamically set the U-SQL variable value in the U-SQL script. **Parameter Mapping** feature in Azure Data Lake Analytics Task help with this scenario. There are usually two typical user cases:
+In some cases, you may want to dynamically set the U-SQL variable value in the U-SQL script. The **Parameter Mapping** feature in the Azure Data Lake Analytics Task helps with this scenario. There are usually two typical use cases:
- Set the input and output file path variables dynamically based on current date and time. - Set the parameter for stored procedures.
data-lake-analytics Data Lake Analytics U Sql Develop With Python R Csharp In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md
Last updated 11/22/2017
# Develop U-SQL with Python, R, and C# for Azure Data Lake Analytics in Visual Studio Code+ Learn how to use Visual Studio Code (VSCode) to write Python, R, and C# code-behind with U-SQL and submit jobs to the Azure Data Lake service. For more information about Azure Data Lake Tools for VSCode, see [Use the Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md). Before writing code-behind custom code, you need to open a folder or a workspace in VSCode. ## Prerequisites for Python and R+ Register Python and R extension assemblies for your ADL account. 1. Open your account in the portal. - Select **Overview**.
- - Click **Sample Script**.
-2. Click **More**.
+ - Select **Sample Script**.
+2. Select **More**.
3. Select **Install U-SQL Extensions**. 4. A confirmation message is displayed after the U-SQL extensions are installed.
- ![Set up the environment for Python and R](./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png)
+ :::image type="content" source="./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png" alt-text="Screenshots showing selecting Sample Scripts in Overview, selecting More and Install U-SQL Extensions." lightbox="./media/data-lake-analytics-data-lake-tools-for-vscode/setup-the-enrionment-for-python-and-r.png":::
> [!Note] > For the best experience with the Python and R language services, install the VSCode Python and R extensions. ## Develop Python file
-1. Click the **New File** in your workspace.
+
+1. Select the **New File** in your workspace.
2. Write your code in U-SQL. The following is a code sample. ```U-SQL REFERENCE ASSEMBLY [ExtPython];
Register Python and R extension assemblies for your ADL account.
TO "/tweetmentions.csv" USING Outputters.Csv(); ```
-
+ 3. Right-click a script file, and then select **ADL: Generate Python Code Behind File**. 4. The **xxx.usql.py** file is generated in your working folder. Write your code in Python file. The following is a code sample.
Register Python and R extension assemblies for your ADL account.
del df['tweet'] return df ```
-5. Right-click in **USQL** file, you can click **Compile Script** or **Submit Job** to running job.
+5. Right-click in the **USQL** file. You can select **Compile Script** or **Submit Job** to run the job.
## Develop R file
-1. Click the **New File** in your workspace.
+
+1. Select the **New File** in your workspace.
2. Write your code in U-SQL file. The following is a code sample. ```U-SQL DEPLOY RESOURCE @"/usqlext/samples/R/my_model_LM_Iris.rda";
Register Python and R extension assemblies for your ADL account.
load("my_model_LM_Iris.rda") outputToUSQL=data.frame(predict(lm.fit, inputFromUSQL, interval="confidence")) ```
-5. Right-click in **USQL** file, you can click **Compile Script** or **Submit Job** to running job.
+5. Right-click in the **USQL** file. You can select **Compile Script** or **Submit Job** to run the job.
## Develop C# file+ A code-behind file is a C# file associated with a single U-SQL script. You can define a script dedicated to UDO, UDA, UDT, and UDF in the code-behind file. The UDO, UDA, UDT, and UDF can be used directly in the script without registering the assembly first. The code-behind file is put in the same folder as its paired U-SQL script file. If the script is named xxx.usql, the code-behind is named xxx.usql.cs. If you manually delete the code-behind file, the code-behind feature is disabled for its associated U-SQL script. For more information about writing custom code for U-SQL scripts, see [Writing and Using Custom Code in U-SQL: User-Defined Functions](https://blogs.msdn.microsoft.com/visualstudio/2015/10/28/writing-and-using-custom-code-in-u-sql-user-defined-functions/).
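For example, a code-behind file might define a simple scalar helper that the U-SQL script can call directly. This is an illustrative sketch; the namespace, class, and method names are hypothetical.

```csharp
using System;

namespace USQLApplication
{
    public static class MyCodeBehind
    {
        // Illustrative helper: trim and uppercase a string column value.
        public static string CleanUp(string value)
        {
            return string.IsNullOrEmpty(value) ? string.Empty : value.Trim().ToUpperInvariant();
        }
    }
}
```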
-1. Click the **New File** in your workspace.
+1. Select the **New File** in your workspace.
2. Write your code in U-SQL file. The following is a code sample. ```U-SQL @a =
A code-behind file is a C# file associated with a single U-SQL script. You can d
} } ```
-5. Right-click in **USQL** file, you can click **Compile Script** or **Submit Job** to running job.
+5. Right-click in the **USQL** file. You can select **Compile Script** or **Submit Job** to run the job.
## Next steps
-* [Use the Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md)
-* [U-SQL local run and local debug with Visual Studio Code](data-lake-tools-for-vscode-local-run-and-debug.md)
-* [Get started with Data Lake Analytics using PowerShell](data-lake-analytics-get-started-powershell.md)
-* [Get started with Data Lake Analytics using the Azure portal](data-lake-analytics-get-started-portal.md)
-* [Use Data Lake Tools for Visual Studio for developing U-SQL applications](data-lake-analytics-data-lake-tools-get-started.md)
-* [Use Data Lake Analytics(U-SQL) catalog](./data-lake-analytics-u-sql-get-started.md)
+
+- [Use the Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md)
+- [U-SQL local run and local debug with Visual Studio Code](data-lake-tools-for-vscode-local-run-and-debug.md)
+- [Get started with Data Lake Analytics using PowerShell](data-lake-analytics-get-started-powershell.md)
+- [Get started with Data Lake Analytics using the Azure portal](data-lake-analytics-get-started-portal.md)
+- [Use Data Lake Tools for Visual Studio for developing U-SQL applications](data-lake-analytics-data-lake-tools-get-started.md)
+- [Use Data Lake Analytics(U-SQL) catalog](./data-lake-analytics-u-sql-get-started.md)
data-lake-analytics Data Lake Analytics U Sql Programmability Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide.md
Title: U-SQL programmability guide for Azure Data Lake
-description: Learn about the U-SQL overview and UDF programmability in Azure Data Lake Analytics to enable you create good USQL script.
+description: Learn about the U-SQL overview and UDF programmability in Azure Data Lake Analytics to enable you to create good USQL scripts.
-+ Previously updated : 06/30/2017 Last updated : 01/20/2023 # U-SQL programmability guide overview [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-U-SQL is a query language that's designed for big data-type of workloads. One of the unique features of U-SQL is the combination of the SQL-like declarative language with the extensibility and programmability that's provided by C#. In this guide, we concentrate on the extensibility and programmability of the U-SQL language that's enabled by C#.
+U-SQL is a query language that's designed for big data workloads. One of the unique features of U-SQL is the combination of the SQL-like declarative language with the extensibility and programmability that's provided by C#. In this guide, we concentrate on the extensibility and programmability of the U-SQL language that's enabled by C#.
## Requirements
Currently, U-SQL uses the .NET Framework version 4.7.2. So ensure that your own
As mentioned earlier, U-SQL runs code in a 64-bit (x64) format. So make sure that your code is compiled to run on x64. Otherwise you get the incorrect format error shown earlier.
-Each uploaded assembly DLL and resource file, such as a different runtime, a native assembly, or a config file, can be at most 400 MB. The total size of deployed resources, either via DEPLOY RESOURCE or via references to assemblies and their additional files, cannot exceed 3 GB.
+Each uploaded assembly DLL and resource file, such as a different runtime, a native assembly, or a config file, can be at most 400 MB. The total size of deployed resources, either via DEPLOY RESOURCE or via references to assemblies and their other files, can't exceed 3 GB.
-Finally, note that each U-SQL database can only contain one version of any given assembly. For example, if you need both version 7 and version 8 of the NewtonSoft Json.NET library, you need to register them in two different databases. Furthermore, each script can only refer to one version of a given assembly DLL. In this respect, U-SQL follows the C# assembly management and versioning semantics.
+Finally, each U-SQL database can only contain one version of any given assembly. For example, if you need both version 7 and version 8 of the NewtonSoft Json.NET library, you need to register them in two different databases. Furthermore, each script can only refer to one version of a given assembly DLL. In this respect, U-SQL follows the C# assembly management and versioning semantics.
## Use user-defined functions: UDF U-SQL user-defined functions, or UDFs, are programming routines that accept parameters, perform an action (such as a complex calculation), and return the result of that action as a value. The return value of a UDF can only be a single scalar. A U-SQL UDF can be called in a U-SQL base script like any other C# scalar function.
public static string GetFiscalPeriod(DateTime dt)
It simply calculates fiscal month and quarter and returns a string value. For June, the first month of the first fiscal quarter, we use "Q1:P1". For July, we use "Q1:P2", and so on.
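A minimal sketch of such a function follows. It assumes the fiscal year starts in June and that the period number counts fiscal months from the start of the fiscal year, which matches the examples above; the actual implementation in the article may differ.

```csharp
using System;

namespace USQL_Programmability
{
    public static class CustomFunctions
    {
        // June => "Q1:P1", July => "Q1:P2", ..., May => "Q4:P12"
        // (fiscal year assumed to start in June).
        public static string GetFiscalPeriod(DateTime dt)
        {
            int fiscalMonth = dt.Month >= 6 ? dt.Month - 5 : dt.Month + 7;
            int fiscalQuarter = (fiscalMonth - 1) / 3 + 1;
            return $"Q{fiscalQuarter}:P{fiscalMonth}";
        }
    }
}
```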
-This is a regular C# function that we are going to use in our U-SQL project.
+This is a regular C# function that we're going to use in our U-SQL project.
-Here is how the code-behind section looks in this scenario:
+Here's how the code-behind section looks in this scenario:
```usql using Microsoft.Analytics.Interfaces;
namespace USQL_Programmability
} ```
-Now we are going to call this function from the base U-SQL script. To do this, we have to provide a fully qualified name for the function, including the namespace, which in this case is NameSpace.Class.Function(parameter).
+Now we're going to call this function from the base U-SQL script. To do this, we have to provide a fully qualified name for the function, including the namespace, which in this case is NameSpace.Class.Function(parameter).
```usql USQL_Programmability.CustomFunctions.GetFiscalPeriod(dt) ```
To solve this problem, we use a global variable inside a code-behind section: `s
This global variable is applied to the entire rowset during our script execution.
-Here is the code-behind section of our U-SQL program:
+Here's the code-behind section of our U-SQL program:
```csharp using Microsoft.Analytics.Interfaces;
data-lake-analytics Data Lake Analytics U Sql Python Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-python-extensions.md
Title: Extend U-SQL scripts with Python in Azure Data Lake Analytics description: Learn how to run Python code in U-SQL scripts using Azure Data Lake Analytics -+ Previously updated : 06/20/2017 Last updated : 01/20/2023 + # Extend U-SQL scripts with Python code in Azure Data Lake Analytics [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
Before you begin, ensure the Python extensions are installed in your Azure Data Lake Analytics account.
-* Navigate to you Data Lake Analytics Account in the Azure portal
-* In the left menu, under **GETTING STARTED** click on **Sample Scripts**
-* Click **Install U-SQL Extensions** then **OK**
+* Navigate to your Data Lake Analytics Account in the Azure portal
+* In the left menu, under **GETTING STARTED** select **Sample Scripts**
+* Select **Install U-SQL Extensions** then **OK**
## Overview
OUTPUT @m
### Schemas
-* Index vectors in Pandas are not supported in U-SQL. All input data frames in the Python function always have a 64-bit numerical index from 0 through the number of rows minus 1.
-* U-SQL datasets cannot have duplicate column names
-* U-SQL datasets column names that are not strings.
+* Index vectors in Pandas aren't supported in U-SQL. All input data frames in the Python function always have a 64-bit numerical index from 0 through the number of rows minus 1.
+* U-SQL datasets can't have duplicate column names
+* U-SQL dataset column names must be strings.
### Python Versions
Only Python 3.5.1 (compiled for Windows) is supported.
All the standard Python modules are included.
-### Additional Python modules
+### More Python modules
Besides the standard Python libraries, several commonly used Python libraries are included:
Currently, an exception in Python code shows up as generic vertex failure. In th
### Input and Output size limitations
-Every vertex has a limited amount of memory assigned to it. Currently, that limit is 6 GB for an AU. Because the input and output DataFrames must exist in memory in the Python code, the total size for the input and output cannot exceed 6 GB.
+Every vertex has a limited amount of memory assigned to it. Currently, that limit is 6 GB for an AU. Because the input and output DataFrames must exist in memory in the Python code, the total size for the input and output can't exceed 6 GB.
## Next steps
data-lake-analytics Data Lake Analytics U Sql Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-sdk.md
Title: Run U-SQL jobs locally - Azure Data Lake U-SQL SDK
description: Learn how to run and test U-SQL jobs locally using the command line and programming interfaces on your local workstation. Previously updated : 03/01/2017 Last updated : 01/20/2023 # Run and test U-SQL with Azure Data Lake U-SQL SDK [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-When developing U-SQL script, it is common to run and test U-SQL script locally before submit it to cloud. Azure Data Lake provides a Nuget package called Azure Data Lake U-SQL SDK for this scenario, through which you can easily scale U-SQL run and test. It is also possible to integrate this U-SQL test with CI (Continuous Integration) system to automate the compile and test.
+When developing a U-SQL script, it's common to run and test the script locally before submitting it to the cloud. Azure Data Lake provides a NuGet package called Azure Data Lake U-SQL SDK for this scenario, through which you can easily scale U-SQL run and test. It's also possible to integrate this U-SQL test with a CI (continuous integration) system to automate the compile and test.
If you want to manually run and debug U-SQL scripts locally with GUI tooling, you can use Azure Data Lake Tools for Visual Studio. You can learn more from [here](data-lake-analytics-data-lake-tools-local-run.md).
The Data Lake U-SQL SDK requires the following dependencies:
- [Microsoft .NET Framework 4.6 or newer](https://www.microsoft.com/download/details.aspx?id=17851). - Microsoft Visual C++ 14 and Windows SDK 10.0.10240.0 or newer (which is called CppSDK in this article). There are two ways to get CppSDK:
- Install [Visual Studio Community Edition](https://developer.microsoft.com/downloads/vs-thankyou). You'll have a \Windows Kits\10 folder under the Program Files folder--for example, C:\Program Files (x86)\Windows Kits\10\. You'll also find the Windows 10 SDK version under \Windows Kits\10\Lib. If you don't see these folders, reinstall Visual Studio and be sure to select the Windows 10 SDK during the installation. If you have this installed with Visual Studio, the U-SQL local compiler will find it automatically.
+    - Install [Visual Studio Community Edition](https://developer.microsoft.com/downloads/vs-thankyou). You'll have a \Windows Kits\10 folder under the Program Files folder--for example, C:\Program Files (x86)\Windows Kits\10\. You'll also find the Windows 10 SDK version under \Windows Kits\10\Lib. If you don't see these folders, reinstall Visual Studio and be sure to select the Windows 10 SDK during the installation. If you have this installed with Visual Studio, the U-SQL local compiler will find it automatically.
![Data Lake Tools for Visual Studio local-run Windows 10 SDK](./media/data-lake-analytics-data-lake-tools-local-run/data-lake-tools-for-visual-studio-local-run-windows-10-sdk.png) - Install [Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs). You can find the prepackaged Visual C++ and Windows SDK files at `C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\ADL Tools\X.X.XXXX.X\CppSDK.`
- In this case, the U-SQL local compiler cannot find the dependencies automatically. You need to specify the CppSDK path for it. You can either copy the files to another location or use it as is.
+ In this case, the U-SQL local compiler can't find the dependencies automatically. You need to specify the CppSDK path for it. You can either copy the files to another location or use it as is.
## Understand basic concepts
When running the U-SQL script locally, a working directory is created during com
### Command-line interface of the helper application
-Under SDK directory\build\runtime, LocalRunHelper.exe is the command-line helper application that provides interfaces to most of the commonly used local-run functions. Note that both the command and the argument switches are case-sensitive. To invoke it:
+Under SDK directory\build\runtime, LocalRunHelper.exe is the command-line helper application that provides interfaces to most of the commonly used local-run functions. Both the command and the argument switches are case-sensitive. To invoke it:
```console LocalRunHelper.exe <command> <Required-Command-Arguments> [Optional-Command-Arguments]
The helper application returns **0** for success and **-1** for failure. By defa
### Environment variable configuring
-U-SQL local run needs a specified data root as local storage account, as well as a specified CppSDK path for dependencies. You can both set the argument in command-line or set environment variable for them.
+U-SQL local run needs a specified data root as the local storage account, and a specified CppSDK path for dependencies. You can either set the arguments on the command line or set environment variables for them.
- Set the **SCOPE_CPP_SDK** environment variable.
Compile a U-SQL script:
LocalRunHelper compile -Script d:\test\test1.usql ```
-Compile a U-SQL script and set the data-root folder. Note that this will overwrite the set environment variable.
+Compile a U-SQL script and set the data-root folder. This will overwrite the set environment variable.
```console LocalRunHelper compile -Script d:\test\test1.usql -DataRoot c:\DataRoot
LocalRunHelper execute -Algebra d:\test\workdir\C6A101DDCB470506\Script_66AE4909
## Use the SDK with programming interfaces
-The programming interfaces are all located in the LocalRunHelper.exe. You can use them to integrate the functionality of the U-SQL SDK and the C# test framework to scale your U-SQL script local test. In this article, I will use the standard C# unit test project to show how to use these interfaces to test your U-SQL script.
+The programming interfaces are all located in the LocalRunHelper.exe. You can use them to integrate the functionality of the U-SQL SDK and the C# test framework to scale your U-SQL script local test. In this article, I'll use the standard C# unit test project to show how to use these interfaces to test your U-SQL script.
### Step 1: Create C# unit test project and configuration - Create a C# unit test project through File > New > Project > Visual C# > Test > Unit Test Project.-- Add LocalRunHelper.exe as a reference for the project. The LocalRunHelper.exe is located at \build\runtime\LocalRunHelper.exe in Nuget package.
+- Add LocalRunHelper.exe as a reference for the project. The LocalRunHelper.exe is located at \build\runtime\LocalRunHelper.exe in NuGet package.
![Azure Data Lake U-SQL SDK Add Reference](./media/data-lake-analytics-u-sql-sdk/data-lake-analytics-u-sql-sdk-add-reference.png) -- U-SQL SDK **only** support x64 environment, make sure to set build platform target as x64. You can set that through Project Property > Build > Platform target.
+- U-SQL SDK **only** supports the x64 environment, so make sure to set the build platform target to x64. You can set that through Project Property > Build > Platform target.
![Azure Data Lake U-SQL SDK Configure x64 Project](./media/data-lake-analytics-u-sql-sdk/data-lake-analytics-u-sql-sdk-configure-x64.png)
The programming interfaces are all located in the LocalRunHelper.exe. You can us
![Azure Data Lake U-SQL SDK Configure x64 Test Environment](./media/data-lake-analytics-u-sql-sdk/data-lake-analytics-u-sql-sdk-configure-test-x64.png) -- Make sure to copy all dependency files under NugetPackage\build\runtime\ to project working directory which is usually under ProjectFolder\bin\x64\Debug.
+- Make sure to copy all dependency files under NugetPackage\build\runtime\ to project working directory, which is usually under ProjectFolder\bin\x64\Debug.
### Step 2: Create U-SQL script test case
public LocalRunHelper([System.IO.TextWriter messageOutput = null])
|Property|Type|Description| |--|-|--| |AlgebraPath|string|The path to algebra file (algebra file is one of the compilation results)|
-|CodeBehindReferences|string|If the script has additional code behind references, specify the paths separated with ';'|
+|CodeBehindReferences|string|If the script has other code behind references, specify the paths separated with ';'|
|CppSdkDir|string|CppSDK directory| |CurrentDir|string|Current directory| |DataRoot|string|Data root path|
public LocalRunHelper([System.IO.TextWriter messageOutput = null])
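As an illustration, a test case can construct the helper with a message writer and set the properties documented above before triggering a run. The `DataRoot` and `CppSdkDir` properties come from the table; the script-path property and the method that starts the run are assumptions here and may be named differently in the actual SDK, so check the members exposed by LocalRunHelper.exe.

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UsqlScriptTests
{
    [TestMethod]
    public void ScriptCompilesAndRunsLocally()
    {
        // Capture local-run messages for diagnostics.
        var messageOutput = new StringWriter();
        var localRun = new LocalRunHelper(messageOutput);

        // Properties documented in the table above.
        localRun.DataRoot = @"C:\LocalRunDataRoot";
        localRun.CppSdkDir = @"C:\CppSDK";

        // Assumption: the SDK exposes a script-path property and a run method;
        // verify the actual member names against LocalRunHelper.exe.
        // localRun.ScriptPath = @"..\..\Scripts\SearchLog.usql";
        // Assert.IsTrue(localRun.DoRun(), messageOutput.ToString());
    }
}
```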
E_CSC_SYSTEM_INTERNAL: Internal error! Could not load file or assembly 'ScopeEngineManaged.dll' or one of its dependencies. The specified module could not be found.
-Please check the following:
+Check the following:
- Make sure you have an x64 environment. The build target platform and the test environment should be x64; refer to **Step 1: Create C# unit test project and configuration** above. - Make sure you have copied all dependency files under NugetPackage\build\runtime\ to the project working directory.
data-lake-analytics Data Lake Tools For Vscode Local Run And Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-tools-for-vscode-local-run-and-debug.md
Title: Debug U-SQL jobs - Azure Data Lake Tools for Visual Studio Code description: Learn how to use Azure Data Lake Tools for Visual Studio Code to run and debug U-SQL jobs locally. -+ Previously updated : 07/14/2017 Last updated : 01/23/2023 # Run U-SQL and debug locally in Visual Studio Code
Last updated 07/14/2017
This article describes how to run U-SQL jobs on a local development machine to speed up early coding phases or to debug code locally in Visual Studio Code. For instructions on Azure Data Lake Tool for Visual Studio Code, see [Use Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md).
-Only Windows installations of the Azure Data Lake Tools for Visual Studio support the action to run U-SQL locally and debug U-SQL locally. Installations on macOS and Linux-based operating systems do not support this feature.
+Only Windows installations of the Azure Data Lake Tools for Visual Studio support the action to run U-SQL locally and debug U-SQL locally. Installations on macOS and Linux-based operating systems don't support this feature.
## Set up the U-SQL local run environment
Only Windows installations of the Azure Data Lake Tools for Visual Studio suppor
![Download the ADL LocalRun Dependency packages](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/downloadtheadllocalrunpackage.png)
-2. Locate the dependency packages from the path shown in the **Output** pane, and then install BuildTools and Win10SDK 10240. Here is an example path:
+2. Locate the dependency packages from the path shown in the **Output** pane, and then install BuildTools and Win10SDK 10240. Here's an example path:
`C:\Users\xxx\AppData\Roaming\LocalRunDependency` ![Locate the dependency packages](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/LocateDependencyPath.png)
- 2.1 To install **BuildTools**, click visualcppbuildtools_full.exe in the LocalRunDependency folder, then follow the wizard instructions.
+ 2.1 To install **BuildTools**, select visualcppbuildtools_full.exe in the LocalRunDependency folder, then follow the wizard instructions.
![Install BuildTools](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/InstallBuildTools.png)
- 2.2 To install **Win10SDK 10240**, click sdksetup.exe in the LocalRunDependency/Win10SDK_10.0.10240_2 folder, then follow the wizard instructions.
+ 2.2 To install **Win10SDK 10240**, select sdksetup.exe in the LocalRunDependency/Win10SDK_10.0.10240_2 folder, then follow the wizard instructions.
![Install Win10SDK 10240](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/InstallWin10SDK.png)
For the first-time user, use **ADL: Download Local Run Package** to download loc
2. Select **Accept** to accept the Microsoft Software License Terms for the first time. ![Accept the Microsoft Software License Terms](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/AcceptEULA.png)
-3. The cmd console opens. For first-time users, you need to enter **3**, and then locate the local folder path for your data input and output. If you are unsuccessful defining the path with backslashes, try forward slashes. For other options, you can use the default values.
+3. The cmd console opens. For first-time users, you need to enter **3**, and then locate the local folder path for your data input and output. If you're unsuccessful defining the path with backslashes, try forward slashes. For other options, you can use the default values.
![Data Lake Tools for Visual Studio Code local run cmd](./medi.png) 4. Select Ctrl+Shift+P to open the command palette, enter **ADL: Submit Job**, and then select **Local** to submit the job to your local account.
For the first-time user:
2. Install .NET Core SDK 2.0 as suggested in the message box, if not installed.   ![reminder installs Dotnet](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/remind-install-dotnet.png)
-3. Install C# for Visual Studio Code as suggested in the message box if not installed. Click **Install** to continue, and then restart VSCode.
+3. Install C# for Visual Studio Code as suggested in the message box if not installed. Select **Install** to continue, and then restart VSCode.
![Reminder to install C#](./media/data-lake-analytics-data-lake-tools-for-vscode-local-run-and-debug/install-csharp.png)
data-lake-analytics Understand Spark Code Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-code-concepts.md
description: This article describes Apache Spark concepts to help U-SQL develope
Previously updated : 05/17/2022 Last updated : 01/20/2023 # Understand Apache Spark code for U-SQL developers
This section provides high-level guidance on transforming U-SQL Scripts to Apach
## Understand the U-SQL and Spark language and processing paradigms
-Before you start migrating Azure Data Lake Analytics' U-SQL scripts to Spark, it is useful to understand the general language and processing philosophies of the two systems.
+Before you start migrating Azure Data Lake Analytics' U-SQL scripts to Spark, it's useful to understand the general language and processing philosophies of the two systems.
U-SQL is a SQL-like declarative query language that uses a data-flow paradigm and allows you to easily embed and scale out user code written in .NET (for example C#), Python, and R. The user extensions can implement simple expressions or user-defined functions, and can also implement so-called user-defined operators: custom operators that perform rowset-level transformations, extractions, and output writing.
-Spark is a scale-out framework offering several language bindings in Scala, Java, Python, .NET etc. where you primarily write your code in one of these languages, create data abstractions called resilient distributed datasets (RDD), dataframes, and datasets and then use a LINQ-like domain-specific language (DSL) to transform them. It also provides SparkSQL as a declarative sublanguage on the dataframe and dataset abstractions. The DSL provides two categories of operations, transformations and actions. Applying transformations to the data abstractions will not execute the transformation but instead build-up the execution plan that will be submitted for evaluation with an action (for example, writing the result into a temporary table or file, or printing the result).
+Spark is a scale-out framework offering several language bindings in Scala, Java, Python, .NET, and so on. You primarily write your code in one of these languages, create data abstractions called resilient distributed datasets (RDD), dataframes, and datasets, and then use a LINQ-like domain-specific language (DSL) to transform them. It also provides SparkSQL as a declarative sublanguage on the dataframe and dataset abstractions. The DSL provides two categories of operations, transformations and actions. Applying transformations to the data abstractions won't execute the transformation but instead builds up the execution plan that's submitted for evaluation with an action (for example, writing the result into a temporary table or file, or printing the result).
-Thus when translating a U-SQL script to a Spark program, you will have to decide which language you want to use to at least generate the data frame abstraction (which is currently the most frequently used data abstraction) and whether you want to write the declarative dataflow transformations using the DSL or SparkSQL. In some more complex cases, you may need to split your U-SQL script into a sequence of Spark and other steps implemented with Azure Batch or Azure Functions.
+Thus, when translating a U-SQL script to a Spark program, you'll have to decide which language you want to use to generate at least the dataframe abstraction (currently the most frequently used data abstraction), and whether you want to write the declarative dataflow transformations using the DSL or SparkSQL. In some more complex cases, you may need to split your U-SQL script into a sequence of Spark and other steps implemented with Azure Batch or Azure Functions.
-Furthermore, Azure Data Lake Analytics offers U-SQL in a serverless job service environment where resources are allocated for each job, while Azure Synapse Spark, Azure Databricks and Azure HDInsight offer Spark either in form of a cluster service or with so-called Spark pool templates. When transforming your application, you will have to take into account the implications of now creating, sizing, scaling, and decommissioning the clusters or pools.
+Furthermore, Azure Data Lake Analytics offers U-SQL in a serverless job service environment where resources are allocated for each job, while Azure Synapse Spark, Azure Databricks, and Azure HDInsight offer Spark either in the form of a cluster service or with so-called Spark pool templates. When transforming your application, you'll have to take into account the implications of now creating, sizing, scaling, and decommissioning the clusters or pools.
## Transform U-SQL scripts U-SQL scripts follow this processing pattern:
-1. Data gets read from either unstructured files, using the `EXTRACT` statement, a location or file set specification, and the built-in or user-defined extractor and desired schema, or from U-SQL tables (managed or external tables). It is represented as a rowset.
+1. Data gets read from either unstructured files, using the `EXTRACT` statement, a location or file set specification, and the built-in or user-defined extractor and desired schema, or from U-SQL tables (managed or external tables). It's represented as a rowset.
2. The rowsets get transformed in multiple U-SQL statements that apply U-SQL expressions to the rowsets and produce new rowsets. 3. Finally, the resulting rowsets are output into either files using the `OUTPUT` statement that specifies the location(s) and a built-in or user-defined outputter, or into a U-SQL table.
Spark programs are similar in that you would use Spark connectors to read the da
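For illustration, a minimal PySpark sketch of that same read-transform-write pattern is shown below. The storage paths, schema, and column names are hypothetical examples, not part of any original script.

```python
# Minimal PySpark sketch of the U-SQL EXTRACT -> transform -> OUTPUT pattern.
# The storage paths, schema, and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usql-style-pipeline").getOrCreate()

# 1. Read ("EXTRACT") raw files into a dataframe, supplying the expected schema.
events = (spark.read
          .option("header", "true")
          .schema("user_id STRING, region STRING, amount DOUBLE")
          .csv("abfss://raw@examplestore.dfs.core.windows.net/events/"))

# 2. Transform the rowset with dataframe expressions.
totals = (events
          .where(F.col("amount") > 0)
          .groupBy("region")
          .agg(F.sum("amount").alias("total_amount")))

# 3. Write ("OUTPUT") the resulting rowset, here as Parquet files.
totals.write.mode("overwrite").parquet(
    "abfss://curated@examplestore.dfs.core.windows.net/totals/")
```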
## Transform .NET code
-U-SQL's expression language is C# and it offers a variety of ways to scale out custom .NET code with user-defined functions, user-defined operators and user-defined aggregators.
+U-SQL's expression language is C# and it offers various ways to scale out custom .NET code with user-defined functions, user-defined operators and user-defined aggregators.
Azure Synapse and Azure HDInsight Spark both now natively support executing .NET code with .NET for Apache Spark. This means that you can potentially reuse some or all of your [.NET user-defined functions with Spark](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators). Note though that U-SQL uses the .NET Framework while .NET for Apache Spark is based on .NET Core 3.1 or later. [U-SQL user-defined operators (UDOs)](#transform-user-defined-operators-udos) use the U-SQL UDO model to provide scaled-out execution of the operator's code. Thus, UDOs will have to be rewritten into user-defined functions to fit into the Spark execution model.
-.NET for Apache Spark currently does not support user-defined aggregators. Thus, [U-SQL user-defined aggregators](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators) will have to be translated into Spark user-defined aggregators written in Scala.
+.NET for Apache Spark currently doesn't support user-defined aggregators. Thus, [U-SQL user-defined aggregators](#transform-user-defined-scalar-net-functions-and-user-defined-aggregators) will have to be translated into Spark user-defined aggregators written in Scala.
-If you do not want to take advantage of the .NET for Apache Spark capabilities, you will have to rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression, function, aggregator or connector.
+If you don't want to take advantage of the .NET for Apache Spark capabilities, you'll have to rewrite your expressions into an equivalent Spark, Scala, Java, or Python expression, function, aggregator or connector.
In any case, if you have a large amount of .NET logic in your U-SQL scripts, contact us through your Microsoft Account representative for further guidance.
U-SQL provides ways to call arbitrary scalar .NET functions and to call user-def
Spark also offers support for user-defined functions and user-defined aggregators written in most of its hosting languages that can be called from Spark's DSL and SparkSQL.
-As mentioned above, .NET for Apache Spark supports user-defined functions written in .NET, but does not support user-defined aggregators. So for user-defined functions, .NET for Apache Spark can be used, while user-defined aggregators have to be authored in Scala for Spark.
+As mentioned above, .NET for Apache Spark supports user-defined functions written in .NET, but doesn't support user-defined aggregators. So for user-defined functions, .NET for Apache Spark can be used, while user-defined aggregators have to be authored in Scala for Spark.
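As a rough illustration in PySpark (the function, view, and column names are invented for this sketch), one scalar function can be exposed to both the dataframe DSL and SparkSQL as follows:

```python
# Hedged sketch: one scalar function exposed to both the dataframe DSL and SparkSQL.
# The function, view, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

def normalize_region(value):
    return value.strip().upper() if value else None

df = spark.createDataFrame([(" west eu ",), (None,)], ["region"])

# DSL usage: wrap the Python function as a column expression.
normalize_region_udf = udf(normalize_region, StringType())
df.select(normalize_region_udf("region").alias("region")).show()

# SparkSQL usage: register the same function under a SQL-callable name.
spark.udf.register("normalize_region", normalize_region, StringType())
df.createOrReplaceTempView("regions")
spark.sql("SELECT normalize_region(region) AS region FROM regions").show()
```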
### Transform user-defined operators (UDOs) U-SQL provides several categories of user-defined operators (UDOs) such as extractors, outputters, reducers, processors, appliers, and combiners that can be written in .NET (and - to some extent - in Python and R).
-Spark does not offer the same extensibility model for operators, but has equivalent capabilities for some.
+Spark doesn't offer the same extensibility model for operators, but has equivalent capabilities for some.
-The Spark equivalent to extractors and outputters is Spark connectors. For many U-SQL extractors, you may find an equivalent connector in the Spark community. For others, you will have to write a custom connector. If the U-SQL extractor is complex and makes use of several .NET libraries, it may be preferable to build a connector in Scala that uses interop to call into the .NET library that does the actual processing of the data. In that case, you will have to deploy the .NET Core runtime to the Spark cluster and make sure that the referenced .NET libraries are .NET Standard 2.0 compliant.
+The Spark equivalent to extractors and outputters is Spark connectors. For many U-SQL extractors, you may find an equivalent connector in the Spark community. For others, you'll have to write a custom connector. If the U-SQL extractor is complex and makes use of several .NET libraries, it may be preferable to build a connector in Scala that uses interop to call into the .NET library that does the actual processing of the data. In that case, you'll have to deploy the .NET Core runtime to the Spark cluster and make sure that the referenced .NET libraries are .NET Standard 2.0 compliant.
-The other types of U-SQL UDOs will need to be rewritten using user-defined functions and aggregators and the semantically appropriate Spark DLS or SparkSQL expression. For example, a processor can be mapped to a SELECT of a variety of UDF invocations, packaged as a function that takes a dataframe as an argument and returns a dataframe.
+The other types of U-SQL UDOs will need to be rewritten using user-defined functions and aggregators and the semantically appropriate Spark DSL or SparkSQL expression. For example, a processor can be mapped to a SELECT of various UDF invocations, packaged as a function that takes a dataframe as an argument and returns a dataframe.
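A hedged sketch of that mapping in PySpark follows; the column names and cleanup logic are hypothetical.

```python
# Hedged sketch: a processor-style UDO expressed as a dataframe -> dataframe function.
# The column names and cleanup logic are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

def clean_customer_rows(df: DataFrame) -> DataFrame:
    """Row-level 'processor': a SELECT over column and UDF expressions."""
    return df.select(
        F.upper(F.col("customer_id")).alias("customer_id"),
        F.coalesce(F.col("country"), F.lit("UNKNOWN")).alias("country"),
        F.to_date(F.col("signup_date"), "yyyy-MM-dd").alias("signup_date"),
    )

raw_customers = spark.createDataFrame(
    [("abc1", None, "2023-01-15"), ("xyz2", "de", "2022-11-03")],
    ["customer_id", "country", "signup_date"])
clean_customer_rows(raw_customers).show()
```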
### Transform U-SQL's optional libraries
If you need to transform a script referencing the cognitive services libraries,
## Transform typed values
-Because U-SQL's type system is based on the .NET type system and Spark has its own type system, that is impacted by the host language binding, you will have to make sure that the types you are operating on are close and for certain types, the type ranges, precision and/or scale may be slightly different. Furthermore, U-SQL and Spark treat `null` values differently.
+Because U-SQL's type system is based on the .NET type system, and Spark has its own type system that's affected by the host language binding, you'll have to make sure that the types you're operating on are close. For certain types, the type ranges, precision, and/or scale may be slightly different. Furthermore, U-SQL and Spark treat `null` values differently.
### Data types
U-SQL's core language is transforming rowsets and is based on SQL. The following
- Set expressions `UNION`/`OUTER UNION`/`INTERSECT`/`EXCEPT`
-In addition, U-SQL provides a variety of SQL-based scalar expressions such as
+In addition, U-SQL provides various SQL-based scalar expressions such as
- `OVER` windowing expressions-- a variety of built-in aggregators and ranking functions (`SUM`, `FIRST` etc.)
+- various built-in aggregators and ranking functions (`SUM`, `FIRST` etc.)
- Some of the most familiar SQL scalar expressions: `CASE`, `LIKE`, (`NOT`) `IN`, `AND`, `OR` etc. Spark offers equivalent expressions in both its DSL and SparkSQL form for most of these expressions. Some of the expressions not supported natively in Spark will have to be rewritten using a combination of the native Spark expressions and semantically equivalent patterns. For example, `OUTER UNION` will have to be translated into the equivalent combination of projections and unions.
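A hedged PySpark sketch of that `OUTER UNION` rewrite is shown below; the dataframes and column names are hypothetical. On Spark 3.1 or later, `unionByName` with `allowMissingColumns=True` can replace the manual projection.

```python
# Hedged sketch: emulating U-SQL's OUTER UNION by aligning columns before a union.
# The dataframes and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
left = spark.createDataFrame([(1, "a")], ["id", "name"])
right = spark.createDataFrame([(2, 9.5)], ["id", "score"])

all_cols = sorted(set(left.columns) | set(right.columns))

def align(df):
    # Project every expected column, filling missing ones with NULL.
    return df.select([F.col(c) if c in df.columns else F.lit(None).alias(c)
                      for c in all_cols])

outer_union = align(left).unionByName(align(right))
# Spark 3.1+ shortcut: left.unionByName(right, allowMissingColumns=True)
outer_union.show()
```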
-Due to the different handling of NULL values, a U-SQL join will always match a row if both of the columns being compared contain a NULL value, while a join in Spark will not match such columns unless explicit null checks are added.
+Due to the different handling of NULL values, a U-SQL join will always match a row if both of the columns being compared contain a NULL value, while a join in Spark won't match such columns unless explicit null checks are added.
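A minimal PySpark sketch of closing that gap with the null-safe equality operator (the dataframes and column names are hypothetical):

```python
# Hedged sketch: making a Spark join treat NULL keys as equal, as a U-SQL join does.
# The dataframes and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
orders = spark.createDataFrame([(None, 10), ("c1", 20)], ["customer_id", "amount"])
customers = spark.createDataFrame([(None, "Unknown"), ("c1", "Contoso")],
                                  ["customer_id", "name"])

# A plain equality join drops rows whose join keys are NULL on both sides.
plain = orders.join(customers, orders.customer_id == customers.customer_id)

# eqNullSafe (SQL's <=>) also matches NULL to NULL, mirroring U-SQL behavior.
null_safe = orders.join(customers,
                        orders.customer_id.eqNullSafe(customers.customer_id))
null_safe.show()
```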
## Transform other U-SQL concepts
-U-SQL also offers a variety of other features and concepts, such as federated queries against SQL Server databases, parameters, scalar, and lambda expression variables, system variables, `OPTION` hints.
+U-SQL also offers various other features and concepts, such as federated queries against SQL Server databases, parameters, scalar and lambda expression variables, system variables, and `OPTION` hints.
### Federated Queries against SQL Server databases/external tables
-U-SQL provides data source and external tables as well as direct queries against Azure SQL Database. While Spark does not offer the same object abstractions, it provides [Spark connector for Azure SQL Database](/azure/azure-sql/database/spark-connector) that can be used to query SQL databases.
+U-SQL provides data source and external tables as well as direct queries against Azure SQL Database. While Spark doesn't offer the same object abstractions, it provides [Spark connector for Azure SQL Database](/azure/azure-sql/database/spark-connector) that can be used to query SQL databases.
### U-SQL parameters and variables
U-SQL offers several syntactic ways to provide hints to the query optimizer and
- an `OPTION` clause associated with the rowset expression to provide a data or plan hint - a join hint in the syntax of the join expression (for example, `BROADCASTLEFT`)
-Spark's cost-based query optimizer has its own capabilities to provide hints and tune the query performance. Please refer to the corresponding documentation.
+Spark's cost-based query optimizer has its own capabilities to provide hints and tune the query performance. Refer to the corresponding documentation.
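For example, a broadcast join hint, roughly comparable in intent to a U-SQL join hint, can be expressed in both the DSL and SparkSQL; the dataframes and view names below are hypothetical.

```python
# Hedged sketch: Spark join hints, expressed in the DSL and in SparkSQL.
# The dataframes and view names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()
facts = spark.createDataFrame([(1, 100.0)], ["dim_id", "value"])
dims = spark.createDataFrame([(1, "EU")], ["dim_id", "region"])

# DSL form: ask the optimizer to broadcast the smaller side of the join.
joined = facts.join(broadcast(dims), "dim_id")

# SparkSQL form: the same intent expressed as a query hint.
facts.createOrReplaceTempView("facts")
dims.createOrReplaceTempView("dims")
joined_sql = spark.sql(
    "SELECT /*+ BROADCAST(d) */ f.dim_id, f.value, d.region "
    "FROM facts f JOIN dims d ON f.dim_id = d.dim_id")
```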
## Next steps
data-lake-analytics Understand Spark Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-data-formats.md
Title: Understand Apache Spark data formats for Azure Data Lake Analytics U-SQL developers. description: This article describes Apache Spark concepts to help U-SQL developers understand differences between U-SQL and Spark data formats.-+ Previously updated : 01/31/2019 Last updated : 01/20/2022 # Understand differences between U-SQL and Spark data formats
In addition to moving your files, you'll also want to make your data, stored in
Data stored in files can be moved in various ways: - Write an [Azure Data Factory](../data-factory/introduction.md) pipeline to copy the data from [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) account to the [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) account.-- Write a Spark job that reads the data from the [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) account and writes it to the [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) account. Based on your use case, you may want to write it in a different format such as Parquet if you do not need to preserve the original file format.
+- Write a Spark job that reads the data from the [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) account and writes it to the [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) account. Based on your use case, you may want to write it in a different format such as Parquet if you don't need to preserve the original file format.
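For the second option, a minimal PySpark sketch is shown below. The account names, container, and paths are hypothetical, and the cluster is assumed to already have access configured for both storage accounts.

```python
# Hedged sketch: a Spark job that copies data from Data Lake Storage Gen1 to Gen2,
# rewriting it as Parquet. Account names, container, and paths are hypothetical,
# and the cluster is assumed to be configured with credentials for both accounts.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gen1-to-gen2-copy").getOrCreate()

source = "adl://examplegen1.azuredatalakestore.net/raw/events/"
target = "abfss://raw@examplegen2.dfs.core.windows.net/events-parquet/"

df = spark.read.option("header", "true").csv(source)  # original file format
df.write.mode("overwrite").parquet(target)            # rewritten as Parquet
```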
We recommend that you review the article [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md) ## Move data stored in U-SQL tables
-U-SQL tables are not understood by Spark. If you have data stored in U-SQL tables, you'll run a U-SQL job that extracts the table data and saves it in a format that Spark understands. The most appropriate format is to create a set of Parquet files following the Hive metastore's folder layout.
+U-SQL tables aren't understood by Spark. If you have data stored in U-SQL tables, you'll run a U-SQL job that extracts the table data and saves it in a format that Spark understands. The most appropriate format is to create a set of Parquet files following the Hive metastore's folder layout.
The output can be achieved in U-SQL with the built-in Parquet outputter and using the dynamic output partitioning with file sets to create the partition folders. [Process more files than ever and use Parquet](/archive/blogs/azuredatalake/process-more-files-than-ever-and-use-parquet-with-azure-data-lake-analytics) provides an example of how to create such Spark consumable data.
After this transformation, you copy the data as outlined in the chapter [Move da
Furthermore, if you're copying typed data (from tables), then Parquet and Spark may have different precision and scale for some of the typed values (for example, a float) and may treat null values differently. For example, U-SQL has the C# semantics for null values, while Spark has a three-valued logic for null values. - Data organization (partitioning)
- U-SQL tables provide two level partitioning. The outer level (`PARTITIONED BY`) is by value and maps mostly into the Hive/Spark partitioning scheme using folder hierarchies. You will need to ensure that the null values are mapped to the right folder. The inner level (`DISTRIBUTED BY`) in U-SQL offers 4 distribution schemes: round robin, range, hash, and direct hash.
- Hive/Spark tables only support value partitioning or hash partitioning, using a different hash function than U-SQL. When you output your U-SQL table data, you will probably only be able to map into the value partitioning for Spark and may need to do further tuning of your data layout depending on your final Spark queries.
+ U-SQL tables provide two level partitioning. The outer level (`PARTITIONED BY`) is by value and maps mostly into the Hive/Spark partitioning scheme using folder hierarchies. You'll need to ensure that the null values are mapped to the right folder. The inner level (`DISTRIBUTED BY`) in U-SQL offers four distribution schemes: round robin, range, hash, and direct hash.
+ Hive/Spark tables only support value partitioning or hash partitioning, using a different hash function than U-SQL. When you output your U-SQL table data, you'll probably only be able to map into the value partitioning for Spark and may need to do further tuning of your data layout depending on your final Spark queries.
## Next steps
data-lake-analytics Understand Spark For Usql Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-for-usql-developers.md
Title: Understand Apache Spark for Azure Data Lake Analytics U-SQL developers. description: This article describes Apache Spark concepts to help U-SQL developers understand the differences between U-SQL and Apache Spark.-+ Previously updated : 10/15/2019 Last updated : 01/20/2023 # Understand Apache Spark for U-SQL developers [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-Microsoft supports several Analytics services such as [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) and [Azure HDInsight](../hdinsight/hdinsight-overview.md) as well as Azure Data Lake Analytics. We hear from developers that they have a clear preference for open-source-solutions as they build analytics pipelines. To help U-SQL developers understand Apache Spark, and how you might transform your U-SQL scripts to Apache Spark, we've created this guidance.
+Microsoft supports several analytics services, such as [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks), [Azure HDInsight](../hdinsight/hdinsight-overview.md), and Azure Data Lake Analytics. We hear from developers that they have a clear preference for open-source solutions as they build analytics pipelines. To help U-SQL developers understand Apache Spark, and how you might transform your U-SQL scripts to Apache Spark, we've created this guidance.
-It includes a number of steps you can take, and several alternatives.
+It includes the steps you can take, and several alternatives.
## Steps to transform U-SQL to Apache Spark
It includes a number of steps you can take, and several alternatives.
If you use [Azure Data Factory](../data-factory/introduction.md) to orchestrate your Azure Data Lake Analytics scripts, you'll have to adjust them to orchestrate the new Spark programs. 2. Understand the differences between how U-SQL and Spark manage data
- If you want to move your data from [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) to [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md), you will have to copy both the file data and the catalog maintained data. Note that Azure Data Lake Analytics only supports Azure Data Lake Storage Gen1. See [Understand Spark data formats](understand-spark-data-formats.md)
+ If you want to move your data from [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) to [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md), you'll have to copy both the file data and the catalog maintained data. Azure Data Lake Analytics only supports Azure Data Lake Storage Gen1. See [Understand Spark data formats](understand-spark-data-formats.md)
3. Transform your U-SQL scripts to Spark
- Before transforming your U-SQL scripts, you will have to choose an analytics service. Some of the available compute services available are:
+ Before transforming your U-SQL scripts, you'll have to choose an analytics service. Some of the available compute services are:
- [Azure Data Factory DataFlow](../data-factory/concepts-data-flow-overview.md) Mapping data flows are visually designed data transformations that allow data engineers to develop a graphical data transformation logic without writing code. While not suited to execute complex user code, they can easily represent traditional SQL-like dataflow transformations - [Azure HDInsight Hive](../hdinsight/hadoop/apache-hadoop-using-apache-hive-as-an-etl-tool.md)
- Apache Hive on HDInsight is suited to Extract, Transform, and Load (ETL) operations. This means you are going to translate your U-SQL scripts to Apache Hive.
+ Apache Hive on HDInsight is suited to Extract, Transform, and Load (ETL) operations. This means you're going to translate your U-SQL scripts to Apache Hive.
- Apache Spark Engines such as [Azure HDInsight Spark](../hdinsight/spark/apache-spark-overview.md) or [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks)
- This means you are going to translate your U-SQL scripts to Spark. For more information, see [Understand Spark data formats](understand-spark-data-formats.md)
+ This means you're going to translate your U-SQL scripts to Spark. For more information, see [Understand Spark data formats](understand-spark-data-formats.md)
> [!CAUTION] > Both [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) and [Azure HDInsight Spark](../hdinsight/spark/apache-spark-overview.md) are cluster services and not serverless jobs like Azure Data Lake Analytics. You will have to consider how to provision the clusters to get the appropriate cost/performance ratio and how to manage their lifetime to minimize your costs. These services have different performance characteristics with user code written in .NET, so you will have to either write wrappers or rewrite your code in a supported language. For more information, see [Understand Spark data formats](understand-spark-data-formats.md), [Understand Apache Spark code concepts for U-SQL developers](understand-spark-code-concepts.md), [.Net for Apache Spark](https://dotnet.microsoft.com/apps/data/spark)
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md
Previously updated : 11/18/2022 Last updated : 01/13/2023 # Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
sudo mount -t nfs -o vers=2.1 10.126.76.138:/utsac1_BlockBlob /home/databoxubunt
Once you're connected to the Data Box shares, the next step is to copy data. Before you begin the data copy, review the following considerations: * Make sure that you copy the data to shares that correspond to the appropriate data format. For instance, copy the block blob data to the share for block blobs. Copy the VHDs to page blob. If the data format doesn't match the appropriate share type, then at a later step, the data upload to Azure will fail.
-* Always create a folder under the share for the files that you intend to copy and then copy the files to that folder. The folder created under block blob and page blob shares represents a container to which the data is uploaded as blobs. You cannot copy files directly to the *root* folder in the storage account.
+* Always create a folder under the share for the files that you intend to copy and then copy the files to that folder. The folder created under block blob and page blob shares represents a container to which the data is uploaded as blobs (see the sketch after this list). You can't copy files directly to the *root* folder in the storage account. The same behavior applies to Azure Files: under the Azure Files share, first-level entities are shares, and second-level entities are files.
* While copying data, make sure that the data size conforms to the size limits described in the [Azure storage account size limits](data-box-limits.md#azure-storage-account-size-limits). * If you want to preserve metadata (ACLs, timestamps, and file attributes) when transferring data to Azure Files, follow the guidance in [Preserving file ACLs, attributes, and timestamps with Azure Data Box](data-box-file-acls-preservation.md). * If the data being uploaded by Data Box is also being uploaded by another application outside Data Box at the same time, upload job failures and data corruption could result.
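For example, on a Linux host with the block blob share mounted, a minimal sketch of copying into a container-level folder rather than the share root might look like the following; the mount point, folder name, and file paths are hypothetical.

```python
# Hedged sketch: copy files into a folder under the mounted block blob share,
# because that first-level folder becomes the container in the storage account.
# The mount point, folder name, and file paths are hypothetical.
import os
import shutil

share_root = "/home/databoxubuntuhost/databox"              # mounted Data Box share
container_folder = os.path.join(share_root, "mycontainer")  # becomes the container

os.makedirs(container_folder, exist_ok=True)
shutil.copy2("/data/to-upload/dataset.csv", container_folder)  # uploaded as a blob
```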
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
Title: Driving your organization to remediate security issues with recommendation governance in Microsoft Defender for Cloud description: Learn how to assign owners and due dates to security recommendations and create rules to automatically assign owners and due dates -- Previously updated : 11/13/2022 Last updated : 01/23/2023 + # Drive your organization to remediate security recommendations with governance Security teams are responsible for improving the security posture of their organizations but they may not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
You can then review the progress of the tasks by subscription, recommendation, o
### Defining governance rules to automatically set the owner and due date of recommendations
-Governance rules can identify resources that require remediation according to specific recommendations or severities, and the rule assigns an owner and due date to make sure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date.
+Governance rules can identify resources that require remediation according to specific recommendations or severities. The rule assigns an owner and due date to ensure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date.
-The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1st and the remediation timeframe is 14 days, March 15th is the due date. You can apply a grace period so that the resources that are given a due date don't impact your secure score until they're overdue.
+The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1 and the remediation timeframe is 14 days, March 15 is the due date. You can apply a grace period so that the resources that are given a due date don't affect your secure score until they're overdue.
You can also set the owner of the resources that are affected by the specified recommendations. In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag.
To define a governance rule that assigns an owner and due date:
- **By resource tag** - Enter the resource tag on your resources that defines the resource owner. - **By email address** - Enter the email address of the owner to assign to the recommendations. 1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due.
-1. If you don't want the resources to impact your secure score until they're overdue, select **Apply grace period**.
+1. If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options. 1. Select **Create**.
If there are existing recommendations that match the definition of the governanc
> - Create and apply rules on multiple scopes at once using management scopes cross cloud. > - Check effective rules on selected scope using the scope filter.
-To view the effect rules on specific scope, use the ΓÇ£scopeΓÇ¥ filter and select a desired scope.
+To view the effect of rules on a specific scope, use the Scope filter to select a specific scope.
-Conflicting rules are applied in priority order. For example, rules on a management scope, (Azure management groups, AWS master accents and GCP organizations) take effect before rules on scopes (for example, Azure subscriptions, AWS accounts, or GCP projects).
+Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS accounts, and GCP organizations) take effect before rules on scopes (for example, Azure subscriptions, AWS accounts, or GCP projects).
## Manually assigning owners and due dates for recommendation remediation
-For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't impact your secure score unless they become overdue.
+For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't affect your secure score unless they become overdue.
To manually assign owners and due dates to recommendations:
1. For any resource that doesn't have an owner or due date, select the resources and select **Assign owner**. 1. Enter the email address of the owner that needs to make the changes that remediate the recommendation for those resources. 1. Select the date by which to remediate the recommendation for the resources.
-1. You can select **Apply grace period** to keep the resource from impacting the secure score until it's overdue.
+1. You can select **Apply grace period** to keep the resource from affecting the secure score until it's overdue.
1. Select **Save**. The recommendation is now shown as assigned and on time.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 01/19/2023 Last updated : 01/23/2023 # What's new in Microsoft Defender for Cloud?
Updates in January include:
- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview) - [Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts](#cleanup-of-deleted-azure-arc-machines-in-connected-aws-and-gcp-accounts) - [Allow continuous export to Event Hub behind a firewall](#allow-continuous-export-to-event-hubs-behind-a-firewall)
+- [The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-has-been-changed)
### New version of the recommendation to find missing system updates (Preview)
You can enable this as the alerts or recommendations are generated or you can de
Learn how to enable [continuous export to an Event Hub behind an Azure firewall](continuous-export.md#continuously-export-to-an-event-hub-behind-a-firewall).
+### The name of the Secure score control Protect your applications with Azure advanced networking solutions has been changed
+
+The secure score control `Protect your applications with Azure advanced networking solutions` has been changed to `Protect applications against DDoS attacks`.
+
+The updated name is reflected in Azure Resource Graph (ARG), the Secure Score Controls API, and the `Download CSV report`.
+ ## December 2022 Updates in December include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/18/2023- Last updated : 01/22/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-to-be-released-for-general-availability-ga) | January 2023 | | [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-to-be-deprecated) | January 2023 | | [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-is-set-to-be-deprecated) | January 2023 |
-| [The name of the Secure score control Protect your applications with Azure advanced networking solutions will be changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-will-be-changed) | January 2023 |
| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 | ### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated
The built-in policy [`[Preview]: Private endpoint should be configured for Key V
The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1) will also be replaced by this new policy in all standards displayed in the regulatory compliance dashboard.
-### The name of the Secure score control Protect your applications with Azure advanced networking solutions will be changed
-
-**Estimated date for change: January 2023**
-
-The secure score control `Protect your applications with Azure advanced networking solutions` will change to `Protect applications against DDoS attacks`.
-
-The updated name will be reflected on Azure Resource Graph (ARG), Secure Score Controls API and the `Download CSV report`.
- ### Deprecation and improvement of selected alerts for Windows and Linux Servers **Estimated date for change: April 2023**
digital-twins Resources Compare Original Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/resources-compare-original-release.md
-
-# Mandatory fields.
Title: Differences from original release-
-description: Understand what has changed in the new version of Azure Digital Twins
-- Previously updated : 8/24/2021---
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# What is the new Azure Digital Twins? How is it different from the original version (2018)?
-
-The first public preview of Azure Digital Twins was released in October of 2018. While the core concepts from that original version have carried through to the current service, many of the interfaces and implementation details have changed to make the service more flexible and accessible. These changes were motivated by customer feedback.
-
-> [!IMPORTANT]
-> In light of the new service's expanded capabilities, the original Azure Digital Twins service has been retired. As of January 2021, its APIs and associated data are no longer available.
-
-If you used the first version of Azure Digital Twins during the first public preview, use the information and best practices in this article to learn how to work with the current service, and take advantage of its features.
-
-## Differences by topic
-
-The chart below provides a side-by-side view of concepts that have changed between the original version of the service and the current service.
-
-| Topic | In original version | In current version |
-| | | | |
-| **Modeling**<br>*More flexible* | The original release was designed around smart spaces, so it came with a built-in vocabulary for buildings. | The current Azure Digital Twins is domain-agnostic. You can define your own custom vocabulary and custom models for your solution, to represent more kinds of environments in more flexible ways.<br><br>Learn more in [Custom models](concepts-models.md). |
-| **Topology**<br>*More flexible*| The original release supported a tree data structure, tailored to smart spaces. Digital twins were connected with hierarchical relationships. | With the current release, your digital twins can be connected into arbitrary graph topologies, organized however you want. This freedom gives you more flexibility to express the complex relationships of the real world.<br><br>Learn more in [Digital twins and the twin graph](concepts-twins-graph.md). |
-| **Compute**<br>*Richer, more flexible* | In the original release, logic for processing events and telemetry was defined in JavaScript user-defined functions (UDFs). Debugging with UDFs was limited. | The current release has an open compute model: you provide custom logic by attaching external compute resources like [Azure Functions](../azure-functions/functions-overview.md). This functionality lets you use a programming language of your choice, access custom code libraries without restriction, and take advantage of development and debugging resources that the external service may have.<br><br>To see an end-to-end scenario driven by data flow through Azure functions, see [Connect an end-to-end solution](tutorial-end-to-end.md). |
-| **Device management with IoT Hub**<br>*More accessible* | The original release managed devices with an instance of [IoT Hub](../iot-hub/about-iot-hub.md) that was internal to the Azure Digital Twins service. This integrated hub wasn't fully accessible to developers. | In the current release, you "bring your own" IoT hub, by attaching an independently created IoT Hub instance (along with any devices it already manages). This architecture gives you full access to IoT Hub's capabilities and puts you in control of device management.<br><br>Learn more in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md). |
-| **Security**<br>*More standard* | The original release had pre-defined roles that you could use to manage access to your instance. | The current release integrates with the same [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) back-end service that other Azure services use. This type of integration may make it simpler to authenticate between other Azure services in your solution, like IoT Hub, Azure Functions, Event Grid, and more.<br>With RBAC, you can still use pre-defined roles, or you can build and configure custom roles.<br><br>Learn more in [Security for Azure Digital Twins solutions](concepts-security.md). |
-| **Scalability**<br>*Greater* | The original release had scale limitations for devices, messages, graphs, and scale units. Only one instance of Azure Digital Twins was supported per subscription. | The current release relies on a new architecture with improved scalability, and has greater compute power. It also supports 10 instances per region, per subscription.<br><br>See [Azure Digital Twins service limits](reference-service-limits.md) for details of the limits in the current release. |
-
-## Service limits
-
-For a list of Azure Digital Twins limits, see [Azure Digital Twins service limits](reference-service-limits.md).
-
-## Next steps
-
-* Dive into working with the current release in [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-
-* Or, start reading about key concepts with [Custom models](concepts-models.md).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
To monitor database migrations in the Azure portal:
- You can't use an existing self-hosted integration runtime that was created in Azure Data Factory for database migrations with Database Migration Service. Initially, create the self-hosted integration runtime by using the Azure SQL Migration extension for Azure Data Studio. You can reuse that self-hosted integration runtime in future database migrations.
+- Azure Data Studio currently supports both Azure Active Directory (Azure AD)/Windows authentication and SQL logins for connecting to the source SQL Server instance. For the Azure SQL targets, only SQL logins are supported.
+ ## Pricing - Azure Database Migration Service is free to use with the Azure SQL Migration extension for Azure Data Studio. You can migrate multiple SQL Server databases by using Database Migration Service at no charge.
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-performance.md
Azure Firewall has two versions: Standard and Premium.
- Azure Firewall Standard
- Azure Firewall Standard has been generally available since September 2018. It is cloud native, highly available, with built-in auto scaling firewall-as-a-service. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level-filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
+ Azure Firewall Standard has been generally available since September 2018. It's cloud native, highly available, with built-in auto scaling firewall-as-a-service. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level-filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
- Azure Firewall Premium Azure Firewall Premium is a next generation firewall. It has capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the Firewall are TLS (Transport Layer Security) inspection and IDPS (Intrusion Detection and Prevention).
For more information about Azure Firewall, see [What is Azure Firewall?](overvie
## Performance testing
-Before you deploy Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It is recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as close as possible. This includes the network topology, and emulating the actual characteristics of the expected traffic through the firewall.
+Before you deploy Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It's recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as closely as possible. This includes the network topology and emulating the actual characteristics of the expected traffic through the firewall.
## Performance data
The following set of performance results demonstrates the maximal Azure Firewall
### Total throughput for initial firewall deployment
-The following throughput numbers are for an Azure Firewall deployment before auto-scale (out of the box deployment). Azure Firewall gradually scales when the average throughput or CPU consumption is at 60%. It starts to scale out when it reaches 60% of its maximum throughput. Scale out takes five to seven minutes.
+The following throughput numbers are for an Azure Firewall deployment before auto-scale (out of the box deployment). Azure Firewall gradually scales out when the average throughput or CPU consumption is at 60%. Scale out takes five to seven minutes. Azure Firewall gradually scales in when the average throughput or CPU consumption is below 20%.
When performance testing, ensure you test for at least 10 to 15 minutes, and start new connections to take advantage of newly created firewall nodes.
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
function ValidatePolicy {
exit(1) } if ($Policy.GetType().Name -ne "PSAzureFirewallPolicy") {
- Write-Host "Resource must be of type Microsoft.Network/firewallPolicies" -ForegroundColor Red
+ Write-Error "Resource must be of type Microsoft.Network/firewallPolicies"
exit(1) } if ($Policy.Sku.Tier -eq "Premium") {
- Write-Host "Policy is already premium"
+ Write-Host "Policy is already premium" -ForegroundColor Green
exit(1) } }
function TransformPolicyToPremium {
Name = (GetPolicyNewName -Policy $Policy) ResourceGroupName = $Policy.ResourceGroupName Location = $Policy.Location
- ThreatIntelMode = $Policy.ThreatIntelMode
BasePolicy = $Policy.BasePolicy.Id
- DnsSetting = $Policy.DnsSettings
+ ThreatIntelMode = $Policy.ThreatIntelMode
+ ThreatIntelWhitelist = $Policy.ThreatIntelWhitelist
+ PrivateRange = $Policy.PrivateRange
+ DnsSetting = $Policy.DnsSettings
+ SqlSetting = $Policy.SqlSetting
+ ExplicitProxy = $Policy.ExplicitProxy
+ DefaultProfile = $Policy.DefaultProfile
Tag = $Policy.Tag SkuTier = "Premium" }
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
In this article, you'll learn three ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) OSS tool
+* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool
* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool * Use $export and load data to Synapse using T-SQL ## Using the FHIR to Synapse Sync Agent OSS tool > [!Note]
-> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files. This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
-Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) for installation and usage instructions.
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) for installation and usage instructions.
## Using the FHIR to CDM pipeline generator OSS tool
Next, you can learn about how you can de-identify your FHIR data while exporting
>[!div class="nextstepaction"] >[Exporting de-identified data](de-identified-export.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
You'll find example requests for supported transactions in the [Postman collecti
## Preamble Sanitization
-The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
+The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This behavior ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
## Studies Service
Only transfer syntaxes with explicit Value Representations are accepted.
| `202 (Accepted)` | Some instances in the request have been stored but others have failed. | | `204 (No Content)` | No content was provided in the store transaction request. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. |
-| `401 (Unauthorized)` | The client isnn't authenticated. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. | | `406 (Not Acceptable)` | The specified `Accept` header isn't supported. | | `409 (Conflict)` | None of the instances in the store transaction request have been stored. | | `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
-| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
### Store response payload
Each dataset in the `FailedSOPSequence` will have the following elements (if the
| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. | | (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. | | (0008, 1197) | `FailureReason` | The reason code why this instance failed to store. |
+| (0074, 1048) | `FailedAttributesSequence` | The sequence of `ErrorComment` that includes the reason for each failed attribute. |
Each dataset in the `ReferencedSOPSequence` will have the following elements:
An example response with `Accept` header `application/dicom+json`:
| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` has already been stored. If you wish to update the contents, delete this instance first. | | `45071` | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
+#### Store warning reason codes
+| Code | Description |
+| :- | :- |
+| `45063` | A DICOM instance Data Set doesn't match SOP Class. The Studies Store Transaction (Section 10.5) observed that the Data Set didn't match the constraints of the SOP Class during storage of the instance. |
+
+### Store Error Codes
+
+| Code | Description |
+| :- | :- |
+| `100` | The provided instance attributes didn't meet the validation criteria. |
+
### Retrieve (WADO-RS)

This Retrieve Transaction offers support for retrieving stored studies, series, instances, and frames by reference.
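For orientation, here's a hedged sketch of retrieving a study over WADO-RS with Python's `requests`; the service URL, study UID, access token, and the exact `Accept` value are placeholders/assumptions rather than values taken from this statement.

```python
# Illustrative sketch only; <workspace>, <dicom-service>, the study UID, and the
# bearer token are placeholders you must replace with real values.
import requests

service_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
study_uid = "1.2.826.0.1.3680043.8.498.13230779778012324449356534479549187420"
headers = {
    # Assumed Accept value for retrieving instances in their stored transfer syntax.
    "Accept": 'multipart/related; type="application/dicom"; transfer-syntax=*',
    "Authorization": "Bearer <access-token>",
}

response = requests.get(f"{service_url}/studies/{study_uid}", headers=headers)
print(response.status_code, response.headers.get("Content-Type"))
```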
Cache validation is supported using the `ETag` mechanism. In the response to a m
| `403 (Forbidden)` | The user isn't authorized. |
| `404 (Not Found)` | The specified DICOM resource couldn't be found. |
| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
-| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
### Search (QIDO-RS)
The following parameters for each query are supported:
| Key | Support Value(s) | Allowed Count | Description |
| :-- | :-- | :-- | :-- |
| `{attributeID}=` | `{value}` | 0...N | Search for attribute/value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided. Refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server will default to using `all`. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server will default to using `all`. |
| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be in the range 1 <= x <= 200. Defaults to 100. |
| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response will be returned. |
| `fuzzymatching=` | `true` / `false` | 0..1 | If true, fuzzy matching is applied to the PatientName attribute. It will do a prefix word match of any name part inside the PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" won't match. |
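The following sketch combines several of these parameters in one QIDO-RS study search; the service URL and access token are placeholders, and the parameter values are only examples.

```python
# Illustrative sketch only; replace the service URL and token with real values.
import requests

service_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
params = {
    "PatientName": "John Doe",  # {attributeID}={value} match
    "fuzzymatching": "true",    # prefix word matching on PatientName
    "includefield": "all",      # return all supported attributes
    "limit": 50,                # cap the number of results (default is 100)
    "offset": 0,                # skip this many results
}
headers = {
    "Accept": "application/dicom+json",
    "Authorization": "Bearer <access-token>",
}

response = requests.get(f"{service_url}/studies", params=params, headers=headers)
print(response.status_code)
for dataset in (response.json() if response.content else []):
    # (0020,000D) StudyInstanceUID, if present in the returned attributes
    print(dataset.get("0020000D", {}).get("Value"))
```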
We support searching the following attributes and search types.
| `ReferringPhysicianName` | X | X | X |
| `StudyDate` | X | X | X |
| `StudyDescription` | X | X | X |
+| `ModalitiesInStudy` | X | X | X |
| `SeriesInstanceUID` | | X | X |
| `Modality` | | X | X |
| `PerformedProcedureStepStartDate` | | X | X |
We support the following matching types.
#### Attribute ID
-Tags can be encoded in a number of ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in several ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example |
| :-- | :-- |
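As a small illustration (assuming the keyword and group/element-number encodings listed in the table), the same PatientName filter can be written either way:

```python
# Two equivalent query-string encodings of the same attribute (placeholder value).
keyword_query = "PatientName=Joe"  # DICOM keyword encoding
tag_query = "00100010=Joe"         # group + element number encoding of (0010,0010)
```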
Along with those below attributes are returned:
* If the target resource is `All Series`, then `Study` level attributes are also returned.
* If the target resource is `All Instances`, then `Study` and `Series` level attributes are also returned.
* If the target resource is `Study's Instances`, then `Series` level attributes are also returned.
+* `NumberOfStudyRelatedInstances` aggregated attribute is supported in `Study` level `includeField`.
+* `NumberOfSeriesRelatedInstances` aggregated attribute is supported in `Series` level `includeField`.
### Search response codes
The query API returns one of the following status codes in the response:
| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. |
| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. |
-| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
### Additional notes
There are no restrictions on the request's `Accept` header, `Content-Type` heade
| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. |
| `404 (Not Found)` | When the specified series wasn't found within a study or the specified instance wasn't found within the series. |
-| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
### Delete response payload
If not specified in the URI, the payload dataset must contain the Workitem in th
The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
-There are a number of requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+There are several requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
Notes on dataset attributes:
Notes on dataset attributes:
| `403 (Forbidden)` | The user isn't authorized. |
| `409 (Conflict)` | The Workitem already exists. |
| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
-|`503 (Service Unavailable)`| The service is unavailable or busy. Please try again later.|
+|`503 (Service Unavailable)`| The service is unavailable or busy. Try again later.|
#### Create Response Payload
There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/curr
* `CANCELED`
* `COMPLETED`
-This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service does not implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
+This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service doesn't implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
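To make the claim step concrete, here's a hedged sketch of a Change Workitem State request; the route, UIDs, and token are placeholders/assumptions (see the transaction table that follows), and the payload carries the Transaction UID (0008,1195) and Procedure Step State (0074,1000) data elements in DICOM JSON form.

```python
# Illustrative sketch only; the service URL, workitem UID, transaction UID,
# /state route, and bearer token are placeholders/assumptions, not values
# taken from this conformance statement.
import requests

service_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
workitem_uid = "1.2.826.0.1.3680043.8.498.11111111111111111111111111111111"
transaction_uid = "1.2.826.0.1.3680043.8.498.22222222222222222222222222222222"

payload = {
    "00081195": {"vr": "UI", "Value": [transaction_uid]},   # Transaction UID
    "00741000": {"vr": "CS", "Value": ["IN PROGRESS"]},     # Procedure Step State
}
headers = {
    "Accept": "application/dicom+json",
    "Content-Type": "application/dicom+json",
    "Authorization": "Bearer <access-token>",
}

response = requests.put(
    f"{service_url}/workitems/{workitem_uid}/state", json=payload, headers=headers
)
print(response.status_code)
```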
| Method | Path | Description |
| :-- | :-- | :-- |
The request payload shall contain the Change UPS State Data Elements. These data
| Code | Description |
| :-- | :-- |
| `200 (OK)` | Workitem Instance was successfully retrieved. |
-|`400 (Bad Request)` |The request cannot be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect|
+|`400 (Bad Request)` |The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect|
| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. |
| `404 (Not Found)` | The Target Workitem wasn't found. |
The response will be an array of `0...N` DICOM datasets with the following attri
* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2
* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met
* All other Workitem attributes passed as match parameters
-* All other Workitem attributes passed as includefield parameter values
+* All other Workitem attributes passed as `includefield` parameter values
#### Search Response Codes
The query API will return one of the following status codes in the response:
| `400 (Bad Request)` | There was a problem with the request. For example, invalid Query Parameter syntax. The Response body contains details of the failure. |
| `401 (Unauthorized)` | The client isn't authenticated. |
| `403 (Forbidden)` | The user isn't authorized. |
-|`503 (Service Unavailable)` | The service is unavailable or busy. Please try again later.|
+|`503 (Service Unavailable)` | The service is unavailable or busy. Try again later.|
#### Additional Notes
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT
This solution enables you to query the entire set of FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. Consider this solution if you want to access all of your FHIR data in near real time and want to defer custom transformation to downstream systems.
-Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) for installation and usage instructions.
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) for installation and usage instructions.
## Using the FHIR to CDM pipeline generator OSS tool
internet-peering How To Exchange Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/how-to-exchange-route-server-portal.md
Title: Peering Connection for Exchange partners with route server by using the Portal-
-description: Create or modify an Exchange peering with route server by using the Azure portal
+ Title: Create or modify an Exchange peering with Route Server - Azure portal
+description: Create or modify an Exchange peering with Route Server using the Azure portal
Previously updated : 5/19/2020 Last updated : 01/23/2023 +
-# Create or modify an Exchange peering with route server in Azure portal
+# Create or modify an Exchange peering with Route Server using the Azure portal
-This article describes how to create a Microsoft Exchange peering with a route server by using the Azure portal. This article also shows how to check the status of the resource, update it, or delete and deprovision it.
+This article describes how to create a Microsoft Exchange peering with a route server using the Azure portal. This article also shows how to check the status of the resource, update it, or delete and deprovision it.
## Before you begin
As an Internet Exchange Provider, you can create an exchange peering request by
* For Peering type, select **Direct**
* For Microsoft network, select **AS8075 with exchange route server**.
* Select SKU as **Basic Free**. Don't select premium free as it's reserved for special applications.
- * Select the **Metro** location where you want to setup peering.
+ * Select the **Metro** location where you want to set up peering.
1. Under **Peering Connections**, select **Create new**
As an Internet Exchange Provider, you can create an exchange peering request by
* Maximum advertised IPv4 prefix can be up to 20000.
* Use for Peering Service is disabled by default. It can be enabled once the exchange provider has signed a Peering Service Agreement with Microsoft.
-1. Upon completion, click **Save**.
+1. Upon completion, select **Save**.
-1. Under Create a peering, you will see validation passed. Once validation passed, click **Create**
+1. Under Create a peering, you'll see that validation passed. Once validation passes, select **Create**
> [!div class="mx-imgBorder"] > ![Validation of settings](./media/setup-exchange-conf-tab-validation.png)
As an Internet Exchange Provider, you can create an exchange peering request by
> [!div class="mx-imgBorder"] > ![Screenshot shows the Register an A S N pane with Name and A S N text boxes.](./media/setup-exchange-register-new-asn.png)
-1. Under Register an ASN, select a Name, populate the customer ASN, and click Save.
+1. Under Register an ASN, select a Name, populate the customer ASN, and select Save.
-1. Under Registered ASNs, there will be an associated Prefix Key assigned to each ASN. As an exchange provider, you will need to provide this Prefix Key to your customer so they can register Peering Service under their subscription.
+1. Under Registered ASNs, there will be an associated Prefix Key assigned to each ASN. As an exchange provider, you'll need to provide this Prefix Key to your customer so they can register Peering Service under their subscription.
> [!div class="mx-imgBorder"] > ![Screenshot shows the Registered A S Ns pane with prefix keys.](./media/setup-exchange-register-asn-prefixkey.png)
internet-peering Howto Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-direct-portal.md
Title: Create or modify a Direct peering by using the Azure portal-
-description: Create or modify a Direct peering by using the Azure portal
+ Title: Create or modify a Direct peering - Azure portal
+description: Create or modify a Direct peering using the Azure portal
Previously updated : 5/19/2020 Last updated : 01/23/2023 +
-# Create or modify a Direct peering by using the Azure portal
+# Create or modify a Direct peering using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-direct-portal.md)
+> - [PowerShell](howto-direct-powershell.md)
This article describes how to create a Microsoft Direct peering for an Internet Service Provider or Internet Exchange Provider by using the Azure portal. This article also shows how to check the status of the resource, update it, or delete and de-provision it.
As an Internet Service Provider or Internet Exchange Provider, you can create a
3. For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
+ >[!NOTE]
+ >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
+ 4. Name corresponds to the resource name and can be anything you choose. 5. Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
internet-peering Howto Direct Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-direct-powershell.md
Title: Create or modify a Direct peering by using PowerShell-
-description: Create or modify a Direct peering by using PowerShell
+ Title: Create or modify a Direct peering - PowerShell
+description: Create or modify a Direct peering using PowerShell
Previously updated : 11/27/2019 Last updated : 01/23/2023 -+
-# Create or modify a Direct peering by using PowerShell
+# Create or modify a Direct peering using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-direct-portal.md)
+> - [PowerShell](howto-direct-powershell.md)
This article describes how to create a Microsoft Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model. This article also shows you how to check the status of the resource, update it, or delete and deprovision it.
-If you prefer, you can complete this guide by using the Azure [portal](howto-direct-portal.md).
+If you prefer, you can complete this guide by using the [Azure portal](howto-direct-portal.md).
## Before you begin+ * Review the [prerequisites](prerequisites.md) and the [Direct peering walkthrough](walkthrough-direct-all.md) before you begin configuration. * If you already have Direct peering connections with Microsoft that aren't converted to Azure resources, see [Convert a legacy Direct peering to an Azure resource by using PowerShell](howto-legacy-direct-powershell.md).
internet-peering Howto Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-portal.md
Title: Create or modify an Exchange peering by using the Azure portal-
-description: Create or modify an Exchange peering by using the Azure portal
+ Title: Create or modify an Exchange peering - Azure portal
+description: Create or modify an Exchange peering using the Azure portal
Previously updated : 5/2/2020 Last updated : 01/23/2023 +
-# Create or modify an Exchange peering by using the Azure portal
+# Create or modify an Exchange peering using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-exchange-portal.md)
+> - [PowerShell](howto-exchange-powershell.md)
This article describes how to create a Microsoft Exchange peering by using the Azure portal. This article also shows how to check the status of the resource, update it, or delete and deprovision it.
As an Internet Exchange Provider, you can create an exchange peering request by
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
+ >[!NOTE]
+ >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
+ * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
internet-peering Howto Exchange Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-exchange-powershell.md
Title: Create or modify an Exchange peering by using PowerShell-
-description: Create or modify an Exchange peering by using PowerShell
+ Title: Create or modify an Exchange peering - PowerShell
+description: Create or modify an Exchange peering using PowerShell
Previously updated : 11/27/2019 Last updated : 01/23/2023 -+
-# Create or modify an Exchange peering by using PowerShell
+# Create or modify an Exchange peering using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-exchange-portal.md)
+> - [PowerShell](howto-exchange-powershell.md)
This article describes how to create a Microsoft Exchange peering by using PowerShell cmdlets and the Resource Manager deployment model. This article also shows you how to check the status of the resource, update it, or delete and deprovision it.
-If you prefer, you can complete this guide by using the Azure [portal](howto-exchange-portal.md).
+If you prefer, you can complete this guide by using the [Azure portal](howto-exchange-portal.md).
## Before you begin * Review the [prerequisites](prerequisites.md) and the [Exchange peering walkthrough](walkthrough-exchange-all.md) before you begin configuration.
internet-peering Howto Legacy Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-portal.md
 Title: Convert a legacy Direct peering to an Azure resource by using the Azure portal-
-description: Convert a legacy Direct peering to an Azure resource by using the Azure portal
+ Title: Convert a legacy Direct peering to an Azure resource - Azure portal
+description: Convert a legacy Direct peering to an Azure resource using the Azure portal
Previously updated : 11/27/2019 Last updated : 01/23/2023 +
-# Convert a legacy Direct peering to an Azure resource by using the Azure portal
+# Convert a legacy Direct peering to an Azure resource using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-legacy-direct-portal.md)
+> - [PowerShell](howto-legacy-direct-powershell.md)
This article describes how to convert an existing legacy Direct peering to an Azure resource by using the Azure portal. If you prefer, you can complete this guide by using [PowerShell](howto-legacy-direct-powershell.md). ## Before you begin
-* Review the [prerequisites](prerequisites.md) and the [Direct peering walkthrough](walkthrough-direct-all.md) before you begin configuration.
+* Review the [prerequisites](prerequisites.md) and the [Direct peering walkthrough](walkthrough-direct-all.md) before you begin configuration.
## Convert a legacy Direct peering to an Azure resource
As an Internet Service Provider, you can convert legacy direct peering connectio
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
+ >[!NOTE]
+ >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
+ * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
As an Internet Service Provider, you can convert legacy direct peering connectio
>[!NOTE] >The region where a resource group resides is independent of the location where you want to create peering with Microsoft. But it's a best practice to organize your peering resources within resource groups that reside in the closest Azure regions. For example, for peerings in Ashburn, you can create a resource group in East US or East US2.
-* Select your ASN in the **PeerASN** box.
+* Select your ASN in the **Peer ASN** box.
>[!IMPORTANT]
->You can only choose an ASN with ValidationState as Approved before you submit a peering request. If you just submitted your PeerAsn request, wait for 12 hours or so for ASN association to be approved. If the ASN you select is pending validation, you'll see an error message. If you don't see the ASN you need to choose, check that you selected the correct subscription. If so, check if you have already created PeerAsn by using **[Associate Peer ASN to Azure subscription](https://go.microsoft.com/fwlink/?linkid=2129592)**.
+>You can only choose an ASN with ValidationState as Approved before you submit a peering request. If you just submitted your Peer ASN request, wait for 12 hours or so for ASN association to be approved. If the ASN you select is pending validation, you'll see an error message. If you don't see the ASN you need to choose, check that you selected the correct subscription. If so, check if you have already created Peer ASN by using **[Associate Peer ASN to Azure subscription](https://go.microsoft.com/fwlink/?linkid=2129592)**.
#### Launch the resource and configure basic settings [!INCLUDE [direct-peering-basic](./includes/direct-portal-basic.md)]
internet-peering Howto Legacy Direct Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-powershell.md
Title: Convert a legacy Direct peering to an Azure resource by using PowerShell-
-description: Convert a legacy Direct peering to an Azure resource by using PowerShell
+ Title: Convert a legacy Direct peering to an Azure resource - PowerShell
+description: Convert a legacy Direct peering to an Azure resource using PowerShell
Previously updated : 11/27/2019 Last updated : 01/23/2023 -+
-# Convert a legacy Direct peering to an Azure resource by using PowerShell
+# Convert a legacy Direct peering to an Azure resource using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-legacy-direct-portal.md)
+> - [PowerShell](howto-legacy-direct-powershell.md)
This article describes how to convert an existing legacy Direct peering to an Azure resource by using PowerShell cmdlets.
-If you prefer, you can complete this guide by using the Azure [portal](howto-legacy-direct-portal.md).
+If you prefer, you can complete this guide by using the [Azure portal](howto-legacy-direct-portal.md).
## Before you begin * Review the [prerequisites](prerequisites.md) and the [Direct peering walkthrough](walkthrough-direct-all.md) before you begin configuration.
internet-peering Howto Legacy Exchange Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-portal.md
 Title: Convert a legacy Exchange peering to an Azure resource by using the Azure portal-
-description: Convert a legacy Exchange peering to an Azure resource by using the Azure portal
+ Title: Convert a legacy Exchange peering to an Azure resource - Azure portal
+description: Convert a legacy Exchange peering to an Azure resource using the Azure portal
Previously updated : 5/21/2020 Last updated : 01/23/2023 +
-# Convert a legacy Exchange peering to an Azure resource by using the Azure portal
+# Convert a legacy Exchange peering to an Azure resource using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-legacy-exchange-portal.md)
+> - [PowerShell](howto-legacy-exchange-powershell.md)
This article describes how to convert an existing legacy Exchange peering to an Azure resource by using the Azure portal.
As an Internet Exchange Provider, you can create an exchange peering request by
* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example.
+ >[!NOTE]
+ >Once a subscription and resource group have been selected for the peering resource, it cannot be moved to another subscription or resource group.
+ * Name corresponds to the resource name and can be anything you choose. * Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside.
internet-peering Howto Legacy Exchange Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-exchange-powershell.md
Title: Convert a legacy Exchange peering to an Azure resource by using PowerShell-
-description: Convert a legacy Exchange peering to an Azure resource by using PowerShell
+ Title: Convert a legacy Exchange peering to an Azure resource - PowerShell
+description: Convert a legacy Exchange peering to an Azure resource using PowerShell
Previously updated : 12/15/2020 Last updated : 01/23/2023 -+
-# Convert a legacy Exchange peering to an Azure resource by using PowerShell
+# Convert a legacy Exchange peering to an Azure resource using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-legacy-exchange-portal.md)
+> - [PowerShell](howto-legacy-exchange-powershell.md)
This article describes how to convert an existing legacy Exchange peering to an Azure resource by using PowerShell cmdlets.
-If you prefer, you can complete this guide by using the Azure [portal](howto-legacy-exchange-portal.md).
+If you prefer, you can complete this guide by using the [Azure portal](howto-legacy-exchange-portal.md).
## Before you begin * Review the [prerequisites](prerequisites.md) and the [Exchange peering walkthrough](walkthrough-exchange-all.md) before you begin configuration.
internet-peering Howto Peering Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-portal.md
Title: Enable Azure Peering Service on a Direct peering by using the Azure portal-
-description: Enable Azure Peering Service on a Direct peering by using the Azure portal
+ Title: Enable Azure Peering Service on a Direct peering - Azure portal
+description: Enable Azure Peering Service on a Direct peering using the Azure portal
Previously updated : 3/18/2020 Last updated : 01/23/2023 +
-# Enable Azure Peering Service on a Direct peering by using the Azure portal
+# Enable Azure Peering Service on a Direct peering using the Azure portal
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-peering-service-portal.md)
+> - [PowerShell](howto-peering-service-powershell.md)
This article describes how to enable [Azure Peering Service](../peering-service/about.md) on a Direct peering by using the Azure portal.
internet-peering Howto Peering Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-powershell.md
Title: Enable Azure Peering Service on a Direct peering by using PowerShell-
-description: Enable Azure Peering Service on a Direct peering by using PowerShell
+ Title: Enable Azure Peering Service on a Direct peering - PowerShell
+description: Enable Azure Peering Service on a Direct peering using PowerShell
Previously updated : 11/27/2019 Last updated : 01/23/2023 -+
-# Enable Azure Peering Service on a Direct peering by using PowerShell
+# Enable Azure Peering Service on a Direct peering using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-peering-service-portal.md)
+> - [PowerShell](howto-peering-service-powershell.md)
This article describes how to enable [Azure Peering Service](../peering-service/about.md) on a Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model.
-If you prefer, you can complete this guide by using the Azure [portal](howto-peering-service-portal.md).
+If you prefer, you can complete this guide by using the [Azure portal](howto-peering-service-portal.md).
## Before you begin * Review the [prerequisites](prerequisites.md) before you begin configuration.
internet-peering Howto Subscription Association Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-portal.md
Title: Associate peer ASN to Azure subscription using the portal-
-description: Associate peer ASN to Azure subscription using the portal
+ Title: Associate peer ASN to Azure subscription - Azure portal
+description: Associate peer ASN to Azure subscription using the Azure portal
Previously updated : 5/18/2020 Last updated : 01/23/2023 +
-# Associate peer ASN to Azure subscription using the portal
+# Associate peer ASN to Azure subscription using the Azure portal
-As an Internet Service Provider or Internet Exchange Provider, before you submit a peering request, you should first associate your ASN with an Azure subscription using the steps below.
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-subscription-association-portal.md)
+> - [PowerShell](howto-subscription-association-powershell.md)
+
+As an Internet Service Provider or Internet Exchange Provider, before you submit a peering request, you should first associate your ASN with an Azure subscription by following the steps in this article.
If you prefer, you can complete this guide using the [PowerShell](howto-subscription-association-powershell.md).
If you prefer, you can complete this guide using the [PowerShell](howto-subscrip
[!INCLUDE [Account](./includes/account-portal.md)] ### Register for peering resource provider
-Register for peering resource provider in your subscription by following the steps below. If you do not execute this, then Azure resources required to set up peering are not accessible.
+Register the peering resource provider in your subscription by following these steps. If you don't register the peering resource provider, the Azure resources required to set up peering aren't accessible.
-1. Click on **Subscriptions** on the top left corner of the portal. If you don't see it, click on **More services** and search for it.
+1. Select **Subscriptions** on the top left corner of the portal. If you don't see it, select **More services** and search for it.
> [!div class="mx-imgBorder"] > ![Open subscriptions](./media/rp-subscriptions-open.png)
-1. Click on the subscription you want to use for peering.
+1. Select the subscription you want to use for peering.
> [!div class="mx-imgBorder"] > ![Launch subscription](./media/rp-subscriptions-launch.png)
-1. Once the subscription opens, on the left, click on **Resource providers**. Then, in the right pane, search for *peering* in the search window, or use the scroll bar to find **Microsoft.Peering** and look at the **Status**. If the status is ***Registered***, skip the steps below and proceed to section **Create PeerAsn**. If the status is ***NotRegistered***, select **Microsoft.Peering** and click on **Register**.
+1. Once the subscription opens, select **Resource providers**. Then, in the right pane, search for *peering* in the search window, or use the scroll bar to find **Microsoft.Peering** and look at the **Status**. If the status is ***Registered***, skip the following steps and proceed to **Create PeerAsn**. If the status is ***NotRegistered***, select **Microsoft.Peering** and select **Register**.
> [!div class="mx-imgBorder"] > ![Registration start](./media/rp-register-start.png)
Register for peering resource provider in your subscription by following the ste
> [!div class="mx-imgBorder"] > ![Registration in-progress](./media/rp-register-progress.png)
-1. Wait for a min or so for it to complete registration. Then, click on **Refresh** and verify that the status is ***Registered***.
+1. Wait a minute or so for the registration to complete. Then, select **Refresh** and verify that the status is ***Registered***.
> [!div class="mx-imgBorder"] > ![Registration completed](./media/rp-register-completed.png)
Register for peering resource provider in your subscription by following the ste
### Create PeerAsn

As an Internet Service Provider or Internet Exchange Provider, you can create a new PeerAsn resource for associating an Autonomous System Number (ASN) with an Azure subscription on the [Associate a Peer ASN page](https://go.microsoft.com/fwlink/?linkid=2129592). You can associate multiple ASNs to a subscription by creating a **PeerAsn** for each ASN you need to associate.
-1. On the **Associate a Peer ASN** page, under **Basics** tab, fill out the fields as shown below.
+1. On the **Associate a Peer ASN** page, under the **Basics** tab, fill out the fields as follows:
> [!div class="mx-imgBorder"] > ![PeerAsn Basics Tab](./media/peerasn-basics-tab.png) * **Name** corresponds to resource name and can be anything you choose.
- * Choose the **Subscription** that you need to associate the ASN with.
- * **Peer name** corresponds to your company's name and needs to be as close as possible to your PeeringDB profile. Note that value supports only characters a-z, A-Z, and space
+ * Select the **Subscription** that you need to associate the ASN with.
+ * **Peer name** corresponds to your company's name and needs to be as close as possible to your PeeringDB profile.
* Enter your ASN in the **Peer ASN** field.
- * Click on **Create new** and enter **EMAIL ADDRESS** and **PHONE NUMBER** for your Network Operations Center (NOC)
-1. Then, click on **Review + create** and observe that portal runs basic validation of the information you entered. This is displayed in a ribbon on the top, as *Running final validation...*.
+ * Select **Create new** and enter **EMAIL ADDRESS** and **PHONE NUMBER** for your Network Operations Center (NOC)
+1. Then, select **Review + create** and observe that the portal runs basic validation of the information you entered.
> [!div class="mx-imgBorder"] > ![Screenshot shows the Associate a Peer A S N Basics tab.](./media/peerasn-review-tab-validation.png)
-1. Once the message in the ribbon turns to *Validation Passed*, verify your information and submit the request by clicking **Create**. If the validation doesn't pass, then click on **Previous** and repeat the steps above to modify your request and ensure the values you enter have no errors.
+1. Once the message in the ribbon changes to *Validation Passed*, verify your information and submit the request by selecting **Create**. If the validation doesn't pass, select **Previous** and repeat the previous steps to modify your request and ensure the values you enter have no errors.
> [!div class="mx-imgBorder"] > ![Screenshot shows the Associate a Peer A S N Basics tab with Validation passed.](./media/peerasn-review-tab.png)
-1. After you submit the request, wait for it to complete deployment. If deployment fails, contact [Microsoft peering](mailto:peering@microsoft.com). A successful deployment will appear as below.
+1. After you submit the request, wait for the deployment to complete. If deployment fails, contact [Microsoft peering](mailto:peering@microsoft.com). A successful deployment will appear as follows:
> [!div class="mx-imgBorder"] > ![PeerAsn Success](./media/peerasn-success.png) ### View status of a PeerAsn
-Once PeerAsn resource is deployed successfully, you will need to wait for Microsoft to approve the association request. It may take up to 12 hours for approval. Once approved, you will receive a notification to the email address entered in the above section.
+Once the PeerAsn resource is deployed successfully, you'll need to wait for Microsoft to approve the association request. It may take up to 12 hours for approval. Once approved, you'll receive a notification at the email address entered in the previous section.
> [!IMPORTANT] > Wait for the ValidationState to turn "Approved" before submitting a peering request. It may take up to 12 hours for this approval. ## Modify PeerAsn
-Modifying PeerAsn is not currently supported. If you need to modify, contact [Microsoft peering](mailto:peering@microsoft.com).
+Modifying PeerAsn isn't currently supported. If you need to modify, contact [Microsoft peering](mailto:peering@microsoft.com).
## Delete PeerAsn
-Deleting a PeerAsn is not currently supported. If you need to delete PeerAsn, contact [Microsoft peering](mailto:peering@microsoft.com).
+Deleting a PeerAsn isn't currently supported. If you need to delete PeerAsn, contact [Microsoft peering](mailto:peering@microsoft.com).
## Next steps
internet-peering Howto Subscription Association Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-powershell.md
Title: Associate peer ASN to Azure subscription using PowerShell-
+ Title: Associate peer ASN to Azure subscription - PowerShell
description: Associate peer ASN to Azure subscription using PowerShell Previously updated : 12/15/2020 Last updated : 01/23/2023 -+ # Associate peer ASN to Azure subscription using PowerShell
+> [!div class="op_single_selector"]
+> - [Azure portal](howto-subscription-association-portal.md)
+> - [PowerShell](howto-subscription-association-powershell.md)
+ Before you submit a peering request, you should first associate your ASN with an Azure subscription by following the steps below.
-If you prefer, you can complete this guide using the [portal](howto-subscription-association-portal.md).
+If you prefer, you can complete this guide using the [Azure portal](howto-subscription-association-portal.md).
### Working with Azure PowerShell [!INCLUDE [CloudShell](./includes/cloudshell-powershell-about.md)]
If you prefer, you can complete this guide using the [portal](howto-subscription
[!INCLUDE [Account](./includes/account-powershell.md)] ### Register for peering resource provider
-Register for peering resource provider in your subscription using the command below. If you do not execute this, then Azure resources required to set up peering are not accessible.
+Register the peering resource provider in your subscription using the following command. If you don't register the provider, the Azure resources required to set up peering aren't accessible.
```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.Peering
Set-PeerAsn -Name Contoso_1234 -Email "newemail@test.com" -Phone "1800-000-0000"
``` ## Delete PeerAsn
-Deleting a PeerASN is not currently supported. If you need to delete PeerASN, contact [Microsoft peering](mailto:peering@microsoft.com).
+Deleting a PeerASN isn't currently supported. If you need to delete PeerASN, contact [Microsoft peering](mailto:peering@microsoft.com).
## Next steps
internet-peering Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/prerequisites.md
Title: Prerequisites to set up peering with Microsoft- description: Prerequisites to set up peering with Microsoft Previously updated : 12/15/2020 Last updated : 01/23/2023 + # Prerequisites to set up peering with Microsoft
Ensure the prerequisites below are met before you request for a new peering or c
## Azure related prerequisites * **Microsoft Azure account:**
-If you don't have a Microsoft Azure account, create a [Microsoft Azure account](https://azure.microsoft.com/free). A valid and active Microsoft Azure subscription is required to set up peering, as the peerings are modeled as resources within Azure subscriptions. It is important to note that:
- * The Azure resource types used to set up peering are always-free Azure products, i.e., you are not charged for creating an Azure account or creating a subscription or accessing the Azure resources **PeerAsn** and **Peering** to set up peering. This is not to be confused with peering agreement for Direct peering between you and Microsoft, the terms for which are explicitly discussed with our peering team. Contact [Microsoft peering](mailto:peering@microsoft.com) if any questions in this regard.
- * You can use the same Azure subscription to access other Azure products or cloud services which may be free or paid. When you access a paid product you will incur charges.
- * If you are creating a new Azure account and/or subscription, you may be eligible for free Azure credit during a trial period which you may utilize to try Azure Cloud services. If interested, visit [Microsoft Azure account](https://azure.microsoft.com/free) for more info.
+If you don't have a Microsoft Azure account, create a [Microsoft Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). A valid and active Microsoft Azure subscription is required to set up peering, as the peerings are modeled as resources within Azure subscriptions. It's important to note that:
+ * The Azure resource types used to set up peering are always-free Azure products, so you aren't charged for creating an Azure account or creating a subscription or accessing the Azure resources **PeerAsn** and **Peering** to set up peering. This isn't to be confused with peering agreement for Direct peering between you and Microsoft, the terms for which are explicitly discussed with our peering team. Contact [Microsoft peering](mailto:peering@microsoft.com) if any questions in this regard.
+ * You can use the same Azure subscription to access other Azure products or cloud services, which may be free or paid. When you access a paid product, you'll incur charges.
+ * If you're creating a new Azure account or subscription, you may be eligible for free Azure credit during a trial period that you can use to try Azure cloud services. If interested, visit [Microsoft Azure account](https://azure.microsoft.com/free) for more info.
* **Associate Peer ASN:** Before requesting peering, first associate your ASN and contact info with your subscription. Follow the instructions in [Associate Peer ASN to Azure Subscription](howto-subscription-association-powershell.md).
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
To enable secure connections, every IoT Edge parent device in a gateway scenario
The output of list with correct ownership and permission is similar to the following: ```Output
- azureUser@vm-h2hnm5j5uxk2a:/var/aziot$ sudo ls -Rla /var/aziot
+ azureUser@vm:/var/aziot$ sudo ls -Rla /var/aziot
/var/aziot: total 16 drwxr-xr-x 4 root root 4096 Dec 14 00:16 . drwxr-xr-x 15 root root 4096 Dec 14 00:15 ..
- drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
- drwx 2 aziotks aziotks 4096 Jan 14 00:35 secrets
-
+ drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
+ drwx 2 aziotks aziotks 4096 Jan 23 17:23 secrets
+ /var/aziot/certs: total 20
- drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 .
+ drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 .
drwxr-xr-x 4 root root 4096 Dec 14 00:16 .. -rw-r--r-- 1 aziotcs aziotcs 1984 Jan 14 00:24 azure-iot-test-only.root.ca.cert.pem -rw-r--r-- 1 aziotcs aziotcs 5887 Jan 14 00:27 iot-edge-device-ca-gateway-full-chain.cert.pem
-
+ /var/aziot/secrets:
- total 20
- drwx 2 aziotks aziotks 4096 Jan 14 00:35 .
+ total 16
+ drwx 2 aziotks aziotks 4096 Jan 23 17:23 .
drwxr-xr-x 4 root root 4096 Dec 14 00:16 .. -rw- 1 aziotks aziotks 3326 Jan 14 00:29 azure-iot-test-only.root.ca.key.pem -rw- 1 aziotks aziotks 3243 Jan 14 00:28 iot-edge-device-ca-gateway.key.pem
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
sudo ls -Rla /var/aziot
The output of list with correct ownership and permission is similar to the following: ```Output
-azureUser@vm-h2hnm5j5uxk2a:/var/aziot$ sudo ls -Rla /var/aziot
+azureUser@vm:/var/aziot$ sudo ls -Rla /var/aziot
/var/aziot: total 16 drwxr-xr-x 4 root root 4096 Dec 14 00:16 . drwxr-xr-x 15 root root 4096 Dec 14 00:15 ..
-drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
-drwx 2 aziotks aziotks 4096 Jan 14 00:35 secrets
+drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
+drwx 2 aziotks aziotks 4096 Jan 23 17:23 secrets
/var/aziot/certs: total 20
-drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 .
+drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 .
drwxr-xr-x 4 root root 4096 Dec 14 00:16 .. -rw-r--r-- 1 aziotcs aziotcs 1984 Jan 14 00:24 azure-iot-test-only.root.ca.cert.pem--rw-r--r-- 1 aziotcs aziotcs 5887 Jan 14 00:27 iot-device-devicename-full-chain.cert.pem
+-rw-r--r-- 1 aziotcs aziotcs 5887 Jan 14 00:27 iot-edge-device-ca-devicename-full-chain.cert.pem
/var/aziot/secrets:
-total 20
-drwx 2 aziotks aziotks 4096 Jan 14 00:35 .
+total 16
+drwx 2 aziotks aziotks 4096 Jan 23 17:23 .
drwxr-xr-x 4 root root 4096 Dec 14 00:16 .. -rw- 1 aziotks aziotks 3326 Jan 14 00:29 azure-iot-test-only.root.ca.key.pem--rw- 1 aziotks aziotks 3243 Jan 14 00:28 iot-device-devicename.key.pem
+-rw- 1 aziotks aziotks 3243 Jan 14 00:28 iot-edge-device-ca-devicename.key.pem
``` ## Manage trusted root CA (trust bundle)
iot-hub How To Collect Device Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-collect-device-logs.md
+
+ Title: Collect device debug logs
+
+description: To troubleshoot device issues, it's sometimes useful to collect low-level debug logs from the devices. This article shows how to use the device SDKs to generate debug logs.
++ Last updated : 01/20/2023+++
+zone_pivot_groups: programming-languages-set-twenty-seven
+
+#- id: programming-languages-set-twenty-seven
+## Owner: dobett
+# Title: Programming languages
+# prompt: Choose a programming language
+# pivots:
+# - id: programming-language-ansi-c
+# Title: C
+# - id: programming-language-csharp
+# Title: C#
+# - id: programming-language-java
+# Title: Java
+# - id: programming-language-javascript
+# Title: JavaScript
+# - id: programming-language-python
+# Title: Python
+# - id: programming-language-embedded-c
+# Title: Embedded C
+
+#Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
++
+# How to collect debug logs from your Azure IoT devices
+
+To troubleshoot device issues, it's sometimes useful to collect low-level debug logs from the devices. This article shows how to capture debug logs from the device SDKs. The steps outlined in this article assume you have either direct or remote access to the device.
+
+> [!CAUTION]
+> If you're sharing logs with a support engineer or adding them to a GitHub issue, be sure to remove any confidential information such as connection strings.
+
+## Capture trace logs
++
+To capture trace data from the Azure IoT Hub client connection, you use the client `logtrace` option.
+
+You can set the option by using either the convenience layer or low-level layer:
+
+```c
+// Convenience layer for device client
+IOTHUB_CLIENT_RESULT IoTHubDeviceClient_SetOption(IOTHUB_DEVICE_CLIENT_HANDLE iotHubClientHandle, const char* optionName, const void* value);
+
+// Lower layer for device client
+IOTHUB_CLIENT_RESULT IoTHubDeviceClient_LL_SetOption(IOTHUB_DEVICE_CLIENT_LL_HANDLE iotHubClientHandle, const char* optionName, const void* value);
+```
+
+The following example from the [pnp_temperature_controller.c](https://github.com/Azure/azure-iot-sdk-c/blob/main/iothub_client/samples/pnp/pnp_temperature_controller/pnp_temperature_controller.c) sample shows how to enable trace capture by using the lower layer:
+
+```c
+static bool g_hubClientTraceEnabled = true;
+
+...
+
+else if ((iothubClientResult = IoTHubDeviceClient_LL_SetOption(deviceClient, OPTION_LOG_TRACE, &g_hubClientTraceEnabled)) != IOTHUB_CLIENT_OK)
+{
+ LogError("Unable to set logging option, error=%d", iothubClientResult);
+ result = false;
+}
+```
+
+The trace output is written to `stdout`.
+
+To learn more about capturing and viewing trace data from the C SDK, see [IoT Hub device and module client options](https://github.com/Azure/azure-iot-sdk-c/blob/main/doc/Iothub_sdk_options.md#iot-hub-device-and-module-client-options).
+++
+### Capture trace data on Windows
+
+On Windows, the Azure IoT SDK for .NET exports trace data by using Event Tracing for Windows (ETW). The SDK repository includes [PowerShell scripts to start and stop a capture](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/tools/CaptureLogs).
+
+Run the following scripts in an elevated PowerShell prompt on the device. The *iot_providers.txt* file lists the [GUIDs for the IoT SDK providers](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/tools/CaptureLogs#azure-iot-sdk-providers). To start capturing trace data in a file called *iot.etl*:
+
+```powershell
+.\iot_startlog.ps1 -Output iot.etl -ProviderFile .\iot_providers.txt -TraceName IotTrace
+```
+
+To stop the capture:
+
+```powershell
+ .\iot_stoplog.ps1 -TraceName IotTrace
+```
+
+To learn more about capturing and viewing trace data from the .NET SDK, see [Capturing Traces](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/tools/CaptureLogs/readme.md).
+
+### Capture trace data on Linux
+
+On Linux, you can use the **dotnet-trace** tool to capture trace data. To install the tool, run the following command:
+
+```bash
+dotnet tool install --global dotnet-trace
+```
+
+Before you can collect a trace, you need the process ID of the device client application. To list the processes on the device, run the following command:
+
+```bash
+dotnet-trace ps
+```
+
+The following example output includes the **TemperatureController** device client process with process ID **24987**:
+
+```bash
+24772 dotnet /usr/share/dotnet/dotnet dotnet run
+25206 dotnet /usr/share/dotnet/dotnet dotnet trace ps
+24987 TemperatureController /bin/Debug/net6.0/TemperatureController
+```
+
+To capture trace data from this process to a file called *device.nettrace*, run the following command:
+
+```bash
+dotnet-trace collect --process-id 24987 --output device.nettrace --providers Microsoft-Azure-Devices-Device-Client
+```
+
+The `providers` argument is a comma-separated list of event providers. The following list shows the Azure IoT SDK providers:
+
+- Microsoft-Azure-Devices-Device-Client
+- Microsoft-Azure-Devices-Service-Client
+- Microsoft-Azure-Devices-Provisioning-Client
+- Microsoft-Azure-Devices-Provisioning-Transport-Amqp
+- Microsoft-Azure-Devices-Provisioning-Transport-Http
+- Microsoft-Azure-Devices-Provisioning-Transport-Mqtt
+- Microsoft-Azure-Devices-Security-Tpm
+
+To learn more about capturing and viewing trace data from the .NET SDK, see [Capturing Traces](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/tools/CaptureLogs/readme.md).
+++
+The Azure IoT SDK for Java exports trace data by using [SLF4j](http://www.slf4j.org/faq.html). The samples included in the SDK configure SLF4j by using a property file: *src/main/resources/log4j2.properties*. The property file in each sample configures logging to the console:
+
+```properties
+status = error
+name = Log4j2PropertiesConfig
+
+appenders = console
+
+appender.console.type = Console
+appender.console.name = LogToConsole
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d %p (%t) [%c] - %m%n
+
+rootLogger.level = debug
+rootLogger.appenderRefs = stdout
+rootLogger.appenderRef.stdout.ref = LogToConsole
+```
+
+To log just debug messages from the SDK to a file, you can use the following configuration:
+
+```properties
+status = error
+name = Log4j2PropertiesConfig
+
+# Log file location - choose a suitable path for your OS
+property.filePath = c/temp/logs
+
+appenders = console,file
+
+appender.console.type = Console
+appender.console.name = LogToConsole
+appender.console.layout.type = PatternLayout
+appender.console.layout.pattern = %d %p (%t) [%c] - %m%n
+
+appender.file.type = File
+appender.file.name = LogToFile
+appender.file.fileName = ${filePath}/device.log
+appender.file.layout.type = PatternLayout
+appender.file.layout.pattern = %d %p (%t) [%c] - %m%n
+
+loggers = file
+logger.file.name = com.microsoft.azure.sdk.iot
+logger.file.level = debug
+logger.file.appenderRefs = logfile
+logger.file.appenderRef.logfile.ref = LogToFile
+
+rootLogger.level = debug
+rootLogger.appenderRefs = stdout
+rootLogger.appenderRef.stdout.ref = LogToConsole
+```
+
+To learn more about capturing and viewing trace data from the Java SDK, see [Azure IoT SDK logging](https://github.com/Azure/azure-iot-sdk-java#azure-iot-sdk-logging).
+++
+The Azure IoT SDK for Node.js uses the [debug](https://github.com/visionmedia/debug) library to capture trace logs. You control the trace by using the `DEBUG` environment variable.
+
+To capture trace information from the SDK and the low-level MQTT library, set the following environment variable before you run your device code:
+
+```bash
+export DEBUG=azure*,mqtt*
+```
+
+> [!TIP]
+> If you're using the AMQP protocol, use `rhea*` to capture trace information from the low-level library.
+
+To capture just the trace data to a file called *trace.log*, use a command such as:
+
+```bash
+node pnp_temperature_controller.js 2> trace.log
+```
+
+To learn more about capturing and viewing trace data from the Node.js SDK, see [Troubleshooting guide - devices](https://github.com/Azure/azure-iot-sdk-node/wiki/Troubleshooting-Guide-Devices).
+++
+The Azure IoT SDK for Python uses the [logging](https://docs.python.org/3/library/logging.html) module to capture trace logs. You control the trace by using a logging configuration file. If you're using one of the samples in the SDK, you may need to modify the code to load a logging configuration from a file:
+
+Replace the following line:
+
+```python
+logging.basicConfig(level=logging.ERROR)
+```
+
+With this line:
+
+```python
+logging.config.fileConfig('logging.conf')
+```
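+
+Note that `fileConfig` lives in the `logging.config` submodule, which isn't imported by `import logging` alone, so make sure the sample imports it. A minimal sketch:
+
+```python
+import logging.config
+
+# Load logger, handler, and formatter definitions from the configuration file.
+logging.config.fileConfig('logging.conf')
+```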
+
+Create a file called *logging.conf*. The following example captures debug information from all modules with a prefix `azure.iot.device` in the file:
+
+```conf
+[loggers]
+keys=root,azure
+
+[handlers]
+keys=consoleHandler,fileHandler
+
+[formatters]
+keys=simpleFormatter
+
+[logger_root]
+level=ERROR
+handlers=consoleHandler
+
+[logger_azure]
+level=DEBUG
+handlers=fileHandler
+qualname=azure.iot.device
+propagate=0
+
+[handler_consoleHandler]
+class=StreamHandler
+level=DEBUG
+formatter=simpleFormatter
+args=(sys.stdout,)
+
+[handler_fileHandler]
+class=FileHandler
+level=DEBUG
+formatter=simpleFormatter
+args=('device.log', 'w')
+
+[formatter_simpleFormatter]
+format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
+```
+
+To learn more about capturing and viewing trace data from the Python SDK, see [Configure logging in the Azure libraries for Python](/azure/developer/python/sdk/azure-sdk-logging).
+++
+To capture trace information from the [Azure SDK for Embedded C](https://github.com/Azure/azure-sdk-for-c) library for embedded IoT devices, add a callback function to your device code that handles the trace messages. For example, your callback function could write to the console and save the messages to a file.
+
+The following example shows how you could modify the [paho_iot_hub_sas_telemetry_sample.c](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/samples/iot/paho_iot_hub_sas_telemetry_sample.c) to capture trace information and write it to the console:
+
+```c
+#include <azure/core/az_log.h>
+
+...
+
+static void write_log_message(az_log_classification, az_span);
+
+...
+
+int main(void)
+{
+ az_log_set_message_callback(write_log_message);
+
+ ...
+}
+
+static void write_log_message(az_log_classification classification, az_span message)
+{
+ (void)classification;
+ printf("TRACE:\t\t%.*s\n", az_span_size(message), az_span_ptr(message));
+}
+```
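+
+Because the callback receives each message as an `az_span`, you can route the output wherever you need it. The following variant is an illustrative sketch, not part of the sample, that appends each trace message to a local file named *trace.log*:
+
+```c
+#include <stdio.h>
+
+#include <azure/core/az_log.h>
+#include <azure/core/az_span.h>
+
+// Illustrative callback: append each SDK trace message to trace.log.
+static void write_log_message_to_file(az_log_classification classification, az_span message)
+{
+  (void)classification;
+
+  FILE* log_file = fopen("trace.log", "a");
+  if (log_file != NULL)
+  {
+    fprintf(log_file, "TRACE:\t\t%.*s\n", az_span_size(message), az_span_ptr(message));
+    fclose(log_file);
+  }
+}
+```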
+
+To learn more about capturing and filtering trace data in the Embedded C SDK, see [Logging SDK operations](https://github.com/Azure/azure-sdk-for-c/tree/main/sdk/docs/core#logging-sdk-operations).
++
+## Next steps
+
+If you need more help, you can contact the Azure experts on the [Microsoft Q&A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
Title: Quickstart for Azure Key Vault Certificate client library - Java description: Learn about the Azure Key Vault Certificate client library for Java with the steps in this quickstart. -+ Last updated 11/14/2022
ms.devlang: java
# Quickstart: Azure Key Vault Certificate client library for Java (Certificates)+ Get started with the Azure Key Vault Certificate client library for Java. Follow the steps below to install the package and try out example code for basic tasks. Additional resources:
-* [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates)
-* [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
-* [Product documentation](index.yml)
-* [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates/src/samples/java/com/azure/security/keyvault/certificates)
+- [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates)
+- [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
+- [Product documentation](index.yml)
+- [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-certificates/src/samples/java/com/azure/security/keyvault/certificates)
## Prerequisites+ - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above - [Apache Maven](https://maven.apache.org)
Additional resources:
This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window. ## Setting up+ This quickstart uses the Azure Identity library with Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/java/api/overview/azure/identity-readme). ### Sign in to Azure+ 1. Run the `login` command.
- ```azurecli-interactive
- az login
- ```
+ ```azurecli-interactive
+ az login
+ ```
If the CLI can open your default browser, it will do so and load an Azure sign-in page. Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal.
-2. Sign in with your account credentials in the browser.
+1. Sign in with your account credentials in the browser.
### Create a new Java console app+ In a console window, use the `mvn` command to create a new Java console app with the name `akv-certificates-java`. ```console
cd akv-certificates-java
``` ### Install the package+ Open the *pom.xml* file in your text editor. Add the following dependency elements to the group of dependencies. ```xml
Open the *pom.xml* file in your text editor. Add the following dependency elemen
``` ### Create a resource group and key vault+ [!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-rg-kv-creation.md)] #### Grant access to your key vault+ Create an access policy for your key vault that grants certificate permissions to your user account. ```azurecli
az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --cert
``` #### Set environment variables+ This application is using your key vault name as an environment variable called `KEY_VAULT_NAME`. Windows+ ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ````+ Windows PowerShell+ ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` macOS or Linux+ ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ``` ## Object model+ The Azure Key Vault Certificate client library for Java allows you to manage certificates. The [Code examples](#code-examples) section shows how to create a client, create a certificate, retrieve a certificate, and delete a certificate. The entire console app is [below](#sample-code). ## Code examples+ ### Add directives+ Add the following directives to the top of your code: ```java
import com.azure.security.keyvault.certificates.models.KeyVaultCertificateWithPo
``` ### Authenticate and create a client
-In this quickstart, a logged in user is used to authenticate to Key Vault, which is preferred method for local development. For applications deployed to Azure, a Managed Identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
-In the example below, the name of your key vault is expanded to the key vault URI, in the format "https://\<your-key-vault-name\>.vault.azure.net". This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
```java String keyVaultName = System.getenv("KEY_VAULT_NAME");
CertificateClient certificateClient = new CertificateClientBuilder()
``` ### Create a certificate+ Now that your application is authenticated, you can create a certificate in your key vault using the `certificateClient.beginCreateCertificate` method. This requires a name for the certificate and a certificate policy -- we've assigned the value "myCertificate" to the `certificateName` variable in this sample and use a default policy. Certificate creation is a long-running operation, for which you can poll its progress or wait for it to complete.
KeyVaultCertificate createdCertificate = certificatePoller.getFinalResult();
``` ### Retrieve a certificate+ You can now retrieve the previously created certificate with the `certificateClient.getCertificate` method. ```java
KeyVaultCertificate retrievedCertificate = certificateClient.getCertificate(cert
You can now access the details of the retrieved certificate with operations like `retrievedCertificate.getName` and `retrievedCertificate.getProperties`, as well as its contents with `retrievedCertificate.getCer`. ### Delete a certificate+ Finally, let's delete the certificate from your key vault with the `certificateClient.beginDeleteCertificate` method, which is also a long-running operation. ```java
deletionPoller.waitForCompletion();
``` ## Clean up resources+ When no longer needed, you can use the Azure CLI or Azure PowerShell to remove your key vault and the corresponding resource group. ```azurecli
Remove-AzResourceGroup -Name "myResourceGroup"
``` ## Sample code+ ```java package com.keyvault.certificates.quickstart;
public class App {
System.out.printf("key vault name = %s and kv uri = %s \n", keyVaultName, keyVaultUri); CertificateClient certificateClient = new CertificateClientBuilder()
- .vaultUrl(keyVaultUri)
- .credential(new DefaultAzureCredentialBuilder().build())
- .buildClient();
+ .vaultUrl(keyVaultUri)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
String certificateName = "myCertificate"; System.out.print("Creating a certificate in " + keyVaultName + " called '" + certificateName + " ... "); SyncPoller<CertificateOperation, KeyVaultCertificateWithPolicy> certificatePoller =
- certificateClient.beginCreateCertificate(certificateName, CertificatePolicy.getDefault());
+ certificateClient.beginCreateCertificate(certificateName, CertificatePolicy.getDefault());
certificatePoller.waitForCompletion(); System.out.print("done.");
public class App {
``` ## Next steps+ In this quickstart, you created a key vault, created a certificate, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below. - Read an [Overview of Azure Key Vault](../general/overview.md)
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
Title: Quickstart - Azure Key Vault Key client library for Java description: Provides a quickstart for the Azure Key Vault Keys client library for Java. -+ Last updated 01/04/2023
Get started with the Azure Key Vault Key client library for Java. Follow these s
Additional resources:
-* [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-keys)
-* [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
-* [Product documentation](index.yml)
-* [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-keys/src/samples/java/com/azure/security/keyvault/keys)
+- [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-keys)
+- [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
+- [Product documentation](index.yml)
+- [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-keys/src/samples/java/com/azure/security/keyvault/keys)
## Prerequisites+ - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above - [Apache Maven](https://maven.apache.org)
Additional resources:
This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window. ## Setting up+ This quickstart uses the Azure Identity library with Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/java/api/overview/azure/identity-readme). ### Sign in to Azure+ 1. Run the `login` command.
- ```azurecli-interactive
- az login
- ```
+ ```azurecli-interactive
+ az login
+ ```
If the CLI can open your default browser, it will do so and load an Azure sign-in page. Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal.
-2. Sign in with your account credentials in the browser.
+1. Sign in with your account credentials in the browser.
### Create a new Java console app+ In a console window, use the `mvn` command to create a new Java console app with the name `akv-keys-java`. ```console
cd akv-keys-java
``` ### Install the package+ Open the *pom.xml* file in your text editor. Add the following dependency elements to the group of dependencies. ```xml
Open the *pom.xml* file in your text editor. Add the following dependency elemen
``` ### Create a resource group and key vault+ [!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-rg-kv-creation.md)] #### Grant access to your key vault+ Create an access policy for your key vault that grants key permissions to your user account. ```azurecli
az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --key-
``` #### Set environment variables+ This application is using your key vault name as an environment variable called `KEY_VAULT_NAME`. Windows+ ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ````+ Windows PowerShell+ ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` macOS or Linux+ ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ``` ## Object model+ The Azure Key Vault Key client library for Java allows you to manage keys. The [Code examples](#code-examples) section shows how to create a client, create a key, retrieve a key, and delete a key. The entire console app is supplied in [Sample code](#sample-code). ## Code examples+ ### Add directives+ Add the following directives to the top of your code: ```java
import com.azure.security.keyvault.keys.models.KeyVaultKey;
### Authenticate and create a client
-In this quickstart, a logged in user is used to authenticate to Key Vault, which is preferred method for local development. For applications deployed to Azure, a Managed Identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) class is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
-In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
```java String keyVaultName = System.getenv("KEY_VAULT_NAME");
KeyClient keyClient = new KeyClientBuilder()
``` ### Create a key+ Now that your application is authenticated, you can create a key in your key vault using the `keyClient.createKey` method. This requires a name for the key and a key type. We've assigned the value "myKey" to the `keyName` variable and use an RSA `KeyType` in this sample. ```java
az keyvault key show --vault-name <your-unique-key-vault-name> --name myKey
``` ### Retrieve a key+ You can now retrieve the previously created key with the `keyClient.getKey` method. ```java
KeyVaultKey retrievedKey = keyClient.getKey(keyName);
You can now access the details of the retrieved key with operations like `retrievedKey.getProperties`, `retrievedKey.getKeyOperations`, etc. ### Delete a key+ Finally, let's delete the key from your key vault with the `keyClient.beginDeleteKey` method. Key deletion is a long running operation, for which you can poll its progress or wait for it to complete.
az keyvault key show --vault-name <your-unique-key-vault-name> --name myKey
``` ## Clean up resources+ When no longer needed, you can use the Azure CLI or Azure PowerShell to remove your key vault and the corresponding resource group. ```azurecli
Remove-AzResourceGroup -Name "myResourceGroup"
``` ## Sample code+ ```java package com.keyvault.keys.quickstart;
public class App {
System.out.printf("key vault name = %s and key vault URI = %s \n", keyVaultName, keyVaultUri); KeyClient keyClient = new KeyClientBuilder()
- .vaultUrl(keyVaultUri)
- .credential(new DefaultAzureCredentialBuilder().build())
- .buildClient();
+ .vaultUrl(keyVaultUri)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
String keyName = "myKey";
public class App {
``` ## Next steps+ In this quickstart, you created a key vault, created a key, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles. - Read an [Overview of Azure Key Vault](../general/overview.md)
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Title: Quickstart - Azure Key Vault Secret client library for Java description: Provides a quickstart for the Azure Key Vault Secret client library for Java. -+ Last updated 01/11/2023
Get started with the Azure Key Vault Secret client library for Java. Follow thes
Additional resources:
-* [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-secrets)
-* [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
-* [Product documentation](index.yml)
-* [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-secrets/src/samples/java/com/azure/security/keyvault/secrets)
+- [Source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-secrets)
+- [API reference documentation](https://azure.github.io/azure-sdk-for-java/keyvault.html)
+- [Product documentation](index.yml)
+- [Samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/keyvault/azure-security-keyvault-secrets/src/samples/java/com/azure/security/keyvault/secrets)
## Prerequisites+ - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above - [Apache Maven](https://maven.apache.org)
Additional resources:
This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window. ## Setting up+ This quickstart uses the Azure Identity library with Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/java/api/overview/azure/identity-readme). ### Sign in to Azure+ 1. Run the `login` command.
- ```azurecli-interactive
- az login
- ```
+ ```azurecli-interactive
+ az login
+ ```
If the CLI can open your default browser, it will do so and load an Azure sign-in page. Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal.
-2. Sign in with your account credentials in the browser.
+1. Sign in with your account credentials in the browser.
### Create a new Java console app+ In a console window, use the `mvn` command to create a new Java console app with the name `akv-secrets-java`. ```console
cd akv-secrets-java
``` ### Install the package+ Open the *pom.xml* file in your text editor. Add the following dependency elements to the group of dependencies. ```xml
Open the *pom.xml* file in your text editor. Add the following dependency elemen
``` ### Create a resource group and key vault+ [!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-rg-kv-creation.md)] #### Grant access to your key vault+ Create an access policy for your key vault that grants secret permissions to your user account. ```azurecli
az keyvault set-policy --name <your-key-vault-name> --upn user@domain.com --secr
``` #### Set environment variables+ This application is using your key vault name as an environment variable called `KEY_VAULT_NAME`. Windows+ ```cmd set KEY_VAULT_NAME=<your-key-vault-name> ````+ Windows PowerShell+ ```powershell $Env:KEY_VAULT_NAME="<your-key-vault-name>" ``` macOS or Linux+ ```cmd export KEY_VAULT_NAME=<your-key-vault-name> ``` ## Object model+ The Azure Key Vault Secret client library for Java allows you to manage secrets. The [Code examples](#code-examples) section shows how to create a client, set a secret, retrieve a secret, and delete a secret. ## Code examples ### Add directives+ Add the following directives to the top of your code: ```java
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
### Authenticate and create a client
-In this quickstart, a logged in user is used to authenticate to Key Vault, which is preferred method for local development. For applications deployed to Azure, a Managed Identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) class is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
-In this example, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name\>.vault.azure.net`. This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
```java String keyVaultName = System.getenv("KEY_VAULT_NAME");
SecretClient secretClient = new SecretClientBuilder()
``` ### Save a secret+ Now that your application is authenticated, you can put a secret into your key vault using the `secretClient.setSecret` method. This requires a name for the secret; we've assigned the value "mySecret" to the `secretName` variable in this sample. ```java
az keyvault secret show --vault-name <your-unique-key-vault-name> --name mySecre
``` ### Retrieve a secret+ You can now retrieve the previously set secret with the `secretClient.getSecret` method. ```java
KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName);
You can now access the value of the retrieved secret with `retrievedSecret.getValue()`. ### Delete a secret+ Finally, let's delete the secret from your key vault with the `secretClient.beginDeleteSecret` method. Secret deletion is a long running operation, for which you can poll its progress or wait for it to complete.
az keyvault secret show --vault-name <your-unique-key-vault-name> --name mySecre
``` ## Clean up resources+ When no longer needed, you can use the Azure CLI or Azure PowerShell to remove your key vault and the corresponding resource group. ```azurecli
Remove-AzResourceGroup -Name "myResourceGroup"
``` ## Sample code+ ```java package com.keyvault.secrets.quickstart;
public class App {
.credential(new DefaultAzureCredentialBuilder().build()) .buildClient();
- Console con = System.console();
+ Console con = System.console();
String secretName = "mySecret"; System.out.println("Please provide the value of your secret > ");
-
+ String secretValue = con.readLine(); System.out.print("Creating a secret in " + keyVaultName + " called '" + secretName + "' with value '" + secretValue + "' ... ");
public class App {
System.out.println("done."); System.out.println("Forgetting your secret.");
-
+ secretValue = ""; System.out.println("Your secret's value is '" + secretValue + "'.");
public class App {
``` ## Next steps+ In this quickstart, you created a key vault, stored a secret, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles. - Read an [Overview of Azure Key Vault](../general/overview.md)
lab-services Troubleshoot Access Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-access-lab-vm.md
Last updated 12/05/2022
In this article, you learn about the different approaches for troubleshooting lab VMs. Understand how each approach affects your lab environment and user data on the lab VM. There can be different reasons why you're unable to connect to a lab VM in Azure Lab Services, or why you're unable to complete a course. For example, the underlying VM is experiencing issues, your organization's firewall settings have changed, or a software change occurred in the lab VM operating system.
+## Prerequisites
+
+- To change settings for the lab plan, your Azure account needs the Owner or Contributor Azure RBAC role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+
+- To redeploy or reset a lab VM, you must either be the lab user that is assigned to the VM, or your Azure account needs the Owner, Contributor, Lab Creator, Lab Contributor, or Lab Operator role. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+ ## Symptoms To use and access a lab VM, you connect to it by using Remote Desktop (RDP) or Secure Shell (SSH). You may experience difficulties accessing your lab VM:
As students use a lab VM to advance through a course, they might get stuck at sp
Learn how to [set up a new lab](./tutorial-setup-lab.md#create-a-lab) and how to [create and manage templates](./how-to-create-manage-template.md).
-## Contact us for help
+## Advanced troubleshooting
-If you have questions or need help, [create a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot), or ask [Azure community support](/answers/topics/azure-labservices.html).
## Next steps
lab-services Troubleshoot Lab Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-lab-creation.md
+
+ Title: Troubleshoot lab creation
+
+description: Learn how to resolve common issues with creating a lab in Azure Lab Services.
+++++ Last updated : 01/19/2023++
+# Troubleshoot lab creation in Azure Lab Services
+
+In this article, you learn how to resolve common issues with creating a lab in Azure Lab Services. The options that are available to a lab creator for creating a lab on a lab plan depend on the lab plan configuration settings. For example, in the lab plan you can specify which virtual machine images or sizes are available.
+
+## Prerequisites
+
+- To change settings for the lab plan, your Azure account needs the Owner or Contributor Azure RBAC role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
+
+## Virtual machine image is not available
+
+On the lab plan, you can configure the list of available VM images for creating a lab:
+
+1. Select VM images from the Azure Marketplace
+1. Select custom VM images from an Azure compute gallery
+
+If a VM image isn't available, verify the following potential causes.
+
+### Virtual machine image is not enabled
+
+In the lab plan configuration, you can enable or disable specific VM images for both Marketplace images and Azure Compute Gallery images. To enable images, see [how to specify Azure Marketplace images](specify-marketplace-images.md) or [how to enable an image in an Azure compute gallery](./how-to-attach-detach-shared-image-gallery.md#enable-and-disable-images).
+
+### Azure compute gallery is not connected to the lab plan
+
+To use custom VM images, you have to connect an Azure compute gallery to your lab plan. [Verify if the compute gallery is attached to the lab plan](./how-to-attach-detach-shared-image-gallery.md), or [save a custom image to your compute gallery](./approaches-for-custom-image-creation.md).
+
+### Virtual machine image is not in the same location as the lab plan
+
+To use a custom VM image from a compute gallery, the image has to be replicated in the same location as the lab plan. You can configure replication in the compute gallery. Learn more about how to [store and share images in an Azure compute gallery](/azure/virtual-machines/shared-image-galleries).
+
+### Virtual machine image size is too large or uses multiple disks
+
+Azure Lab Services doesn't support VM image sizes that are larger than 127 GB, or images with multiple disks.
+
+## Virtual machine size is not available
+
+On the lab plan, you can configure which VM sizes are available for creating a lab. In addition, your Azure subscription has a quota for the number of CPU or GPU cores that are available.
+
+If a VM size isn't available, verify the following potential causes.
+
+### VM size is restricted on the lab plan
+
+On the lab plan, you can set a policy to restrict which VM SKU sizes are available for creating labs. For example, you can prevent labs from using GPU-powered virtual machines. Learn how you can [configure VM size restrictions for creating labs](./how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md).
+
+### Quota limit is reached
+
+Your Azure subscription has limits (quota) on the number of cores you use. The quota is based on the specific VM size and Azure region. To learn more, see [capacity limits in Azure Lab Services](./capacity-limits.md).
+
+If you've reached the limit of VM cores of a specific VM size, or if the quota is granted for a different region than the region of the lab plan, the VM size isn't available for creating labs.
+
+[Determine your current VM core usage and the quota](./how-to-determine-your-quota-usage.md) for your Azure subscription.
+
+Learn how you can [request a VM core limit increase](capacity-limits.md#request-a-limit-increase).
+
+>[!TIP]
+> You can also run a script to query for lab quotas across all your regions. For more information, see the [PowerShell Quota script](https://aka.ms/azlabs/scripts/quota-powershell).
+
+## Azure region or location is not available
+
+When you create a lab, you need to select an Azure region where the lab will be hosted. On the lab plan, you can configure which Azure regions are available for creating labs.
+
+If an Azure region isn't available, verify the following potential causes.
+
+### Azure region is not enabled on the lab plan
+
+On the lab plan, you can enable one or multiple regions for creating labs. Learn how you can [configure Azure regions for creating labs](./create-and-configure-labs-admin.md).
+
+### Lab plan and lab are in a different region than the virtual network
+
+When your lab plan uses advanced networking, the lab plan and all labs must be in the same region as the virtual network. For more information, see [Use Azure Lab Services advanced networking](how-to-connect-vnet-injection.md).
+
+## Advanced troubleshooting
++
+## Next steps
+
+For more information about setting up and managing labs, see:
+
+- [Manage lab plans](how-to-manage-lab-plans.md)
+- [Lab setup guide](setup-guide.md)
lab-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot.md
- Title: Troubleshooting lab creation
-description: This guide helps to fix common issues you might experience when using Azure Lab Services to create labs.
- Previously updated : 07/14/2022--
-# Troubleshooting lab creation in Azure Lab Services
-
-This article provides several common reasons why an educator might not be able to create a lab successfully and what to do to resolve the issue.
-
-## You can't see a virtual machine image
-
-Possible issues:
--- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](./how-to-attach-detach-shared-image-gallery.md).--- The image is not enabled by the administrator. This applies to both Marketplace images and Azure Compute Gallery images. To enable images, see [Specify marketplace images for labs](specify-marketplace-images.md).--- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).--- Image sizes greater than 127GB or with multiple disks are not supported.-
-## The preferred virtual machine size is not available
-
-Possible issues:
--- A quota is not yet requested or you need to request more quota. To request quota, see [Request a limit increase](capacity-limits.md#request-a-limit-increase).--- A quota is granted in a location other than what is enabled for the selected lab plan. For more information, see [Request a limit increase](capacity-limits.md#request-a-limit-increase).-
->[!NOTE]
-> You can run a script to query for lab quotas across all your regions. For more information, see the [PowerShell Quota script](https://aka.ms/azlabs/scripts/quota-powershell).
-
-## You don't see multiple regions/locations to choose from
-
-Possible issues:
--- The administrator only enabled one region for the lab plan. To specify regions, see [Configure regions for labs](create-and-configure-labs-admin.md).--- Lab plan uses advanced networking. The lab plan and all labs must be in the same region as the network. For more information, see [Use advanced networking](how-to-connect-vnet-injection.md).-
-## Next steps
-
-For more information about setting up and managing labs, see:
--- [Manage lab plans](how-to-manage-lab-plans.md) -- [Lab setup guide](setup-guide.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 11/03/2022 Last updated : 01/23/2023 # Limits and configuration reference for Azure Logic Apps
If your workflow uses [managed connectors](../connectors/managed.md), such as th
* [Adjust communication settings for the on-premises data gateway](/data-integration/gateway/service-gateway-communication) * [Configure proxy settings for the on-premises data gateway](/data-integration/gateway/service-gateway-proxy)
-> [!IMPORTANT]
-> If you're using [Microsoft Azure operated by 21Vianet](/azure/china/), managed connectors and custom connectors don't have reserved or fixed IP addresses.
-> So, you can't set up firewall rules for logic apps that use these connectors in this cloud. For the Azure Logic Apps service IPs, review the
-> [documentation version for Azure operated by 21Vianet](https://docs.azure.cn/en-us/logic-apps/logic-apps-limits-and-config#firewall-ip-configuration).
- <a name="ip-setup-considerations"></a> ### Firewall IP configuration considerations
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Title: Connect to SAP
-description: Connect to SAP resources from workflows in Azure Logic Apps.
+description: Connect to an SAP server from a workflow in Azure Logic Apps.
ms.suite: integration Previously updated : 08/22/2022 Last updated : 01/23/2023 tags: connectors
tags: connectors
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-This article explains how you can access your SAP resources from Azure Logic Apps using the [SAP connector](/connectors/sap/).
+This how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the [SAP connector](/connectors/sap/).
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A logic app workflow from which you want to access your SAP resources. If you're new to Azure Logic Apps, review the [Azure Logic Apps overview](logic-apps-overview.md) and the [quickstart for creating your first logic app workflow in the Azure portal](quickstart-create-first-logic-app-workflow.md).
+* The logic app workflow from where you want to access your SAP server.
- * If you've used a previous version of the SAP connector that has been deprecated, you must [migrate to the current connector](#migrate-to-current-connector) before you can connect to your SAP server.
+ * If you're using a deprecated version of the SAP connector, you have to [migrate to the current connector](#migrate-to-current-connector) before you can connect to your SAP server.
* If you're running your logic app workflow in multi-tenant Azure, review the [multi-tenant prerequisites](#multi-tenant-azure-prerequisites). * If you're running your logic app workflow in a Premium-level [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), review the [ISE prerequisites](#ise-prerequisites).
-* An [SAP Application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP Message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps. For information about the SAP servers that support this connector, review [SAP compatibility](#sap-compatibility).
+* The [SAP Application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP Message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps.
- > [!IMPORTANT]
- > Make sure that you set up your SAP server and user account to allow using RFC. For more information, which includes the supported
- > user account types and the minimum required authorization for each action type (RFC, BAPI, IDOC), review the following SAP note:
- > [460089 - Minimum authorization profiles for external RFC programs](https://launchpad.support.sap.com/#/notes/460089).
- >
- > * For RFC actions, the user account additionally needs access to function modules `RFC_GROUP_SEARCH` and `DD_LANGU_TO_ISOLA`.
- > * For BAPI actions, the user account also needs access to the following function modules: `BAPI_TRANSACTION_COMMIT`,
- > `BAPI_TRANSACTION_ROLLBACK`, `RPY_BOR_TREE_INIT`, `SWO_QUERY_METHODS` and `SWO_QUERY_API_METHODS`.
- > * For IDOC actions, the user account also needs access to the following function modules: `IDOCTYPES_LIST_WITH_MESSAGES`,
- > `IDOCTYPES_FOR_MESTYPE_READ`, `INBOUND_IDOCS_FOR_TID`, `OUTBOUND_IDOCS_FOR_TID`, `GET_STATUS_FROM_IDOCNR`, and `IDOC_RECORD_READ`.
- > * For the **Read Table** action, the user account also needs access to *either* following function module:
- > `RFC BBP_RFC_READ_TABLE` or `RFC_READ_TABLE`.
+ For information about the SAP servers that support this connector, review [SAP compatibility](#sap-compatibility).
+
+* Set up your SAP server and user account to allow using RFC.
+
+ For more information, which includes the supported user account types and the minimum required authorization for each action type (RFC, BAPI, IDOC), review the following SAP note: [460089 - Minimum authorization profiles for external RFC programs](https://launchpad.support.sap.com/#/notes/460089).
+
+* Your SAP user account needs access to the `RFC_METADATA` function group and the respective function modules for the following operations:
-* Message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](#actions) you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](#send-flat-file-idocs).
+ | Operations | Access to function modules |
+ ||-|
+ | RFC actions | `RFC_GROUP_SEARCH` and `DD_LANGU_TO_ISOLA` |
+ | BAPI actions | `BAPI_TRANSACTION_COMMIT`, `BAPI_TRANSACTION_ROLLBACK`, `RPY_BOR_TREE_INIT`, `SWO_QUERY_METHODS`, and `SWO_QUERY_API_METHODS` |
+ | IDOC actions | `IDOCTYPES_LIST_WITH_MESSAGES`, `IDOCTYPES_FOR_MESTYPE_READ`, `INBOUND_IDOCS_FOR_TID`, `OUTBOUND_IDOCS_FOR_TID`, `GET_STATUS_FROM_IDOCNR`, and `IDOC_RECORD_READ` |
+ | **Read Table** action | Either `RFC BBP_RFC_READ_TABLE` or `RFC_READ_TABLE` |
+ | Grant strict minimum access to SAP server for your SAP connection | `RFC_METADATA_GET` and `RFC_METADATA_GET_TIMESTAMP` |
-* If you want to use the **When a message is received from SAP** trigger, you must also do the following tasks:
+* To use the **When a message is received from SAP** trigger, complete the following tasks:
- * Set up your SAP gateway security permissions or Access Control List (ACL). In the **secinfo** and **reginfo** files, which are visible in the Gateway Monitor dialog box, T-Code SMGW, follow **Goto > Expert Functions > External Security > Maintenance of ACL Files**. The following permission setting is required:
+ * Set up your SAP gateway security permissions or Access Control List (ACL). In the **Gateway Monitor** (T-Code SMGW) dialog box, which shows the **secinfo** and **reginfo** files, open the **Goto** menu, and select **Expert Functions** > **External Security** > **Maintenance of ACL Files**.
+
+ The following permission setting is required:
`P TP=LOGICAPP HOST=<on-premises-gateway-server-IP-address> ACCESS=*`
This article explains how you can access your SAP resources from Azure Logic App
`P TP=<trading-partner-identifier-(program-name)-or-*-for-all-partners> HOST=<comma-separated-list-with-external-host-IP-or-network-names-that-can-register-the-program> ACCESS=<*-for-all-permissions-or-a-comma-separated-list-of-permissions>`
- If you don't configure the SAP gateway security permissions, you might receive this error:
+ If you don't configure the SAP gateway security permissions, you might receive the following error:
`Registration of tp Microsoft.PowerBI.EnterpriseGateway from host <host-name> not allowed`
This article explains how you can access your SAP resources from Azure Logic App
* Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, review the [SAP help topic for setting up gateway logging](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.31.25/en-US/48b2a710ca1c3079e10000000a42189b.html).
- * In the **Configuration of RFC Connections** (T-Code SM59) dialog box, create an RFC connection with the **TCP/IP** type. The **Activation Type** must be **Registered Server Program**. Set the RFC connection's **Communication Type with Target System** value to **Unicode**.
+ * In the **Configuration of RFC Connections** (T-Code SM59) dialog box, create an RFC connection with the **TCP/IP** type. Make sure that the **Activation Type** is set to **Registered Server Program**. Set the RFC connection's **Communication Type with Target System** value to **Unicode**.
* If you use this SAP trigger with the **IDOC Format** parameter set to **FlatFile** along with the [Flat File Decode action](logic-apps-enterprise-integration-flatfile.md), you have to use the `early_terminate_optional_fields` property in your flat file schema by setting the value to `true`. This requirement is necessary because the flat file IDoc data record that's sent by SAP on the tRFC call `IDOC_INBOUND_ASYNCHRONOUS` isn't padded to the full SDATA field length. Azure Logic Apps provides the flat file IDoc original data without padding as received from SAP. Also, when you combine this SAP trigger with the Flat File Decode action, the schema that's provided to the action must match. > [!NOTE]
+ >
> This SAP trigger uses the same URI location to both renew and unsubscribe from a webhook subscription. The renewal > operation uses the HTTP `PATCH` method, while the unsubscribe operation uses the HTTP `DELETE` method. This behavior > might make a renewal operation appear as an unsubscribe operation in your trigger's history, but the operation is > still a renewal because the trigger uses `PATCH` as the HTTP method, not `DELETE`.
+* The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](#actions) you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](#send-flat-file-idocs).
+ ### SAP compatibility The SAP connector is compatible with the following types of SAP systems:
Next, create an action to send your IDoc message to SAP when your [Request trigg
![Screenshot that shows how to create SAP Message server connection.](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
- In SAP, the Logon Group is maintained by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317).
+ In the SAP server, the Logon Group is maintained by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317).
By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
To send IDocs from SAP to your logic app workflow, you need the following minimu
1. On the **Technical Settings** tab, for **Activation Type**, select **Registered Server Program**.
- 1. For your **Program ID**, enter a value. In SAP, your logic app workflow's trigger is registered by using this identifier.
+ 1. For your **Program ID**, enter a value. In the SAP server, your logic app workflow's trigger is registered by using this identifier.
> [!IMPORTANT] > The SAP **Program ID** is case-sensitive. Make sure you consistently use the same case format for your **Program ID**
For production environments, you must create two partner profiles. The first pro
1. Select **Standard Outbound Processing**.
-1. To start outbound IDoc processing, select **Continue**. When processing finishes, the **IDoc sent to SAP system or external program** message appears.
+1. To start outbound IDoc processing, select **Continue**. When the tool finishes processing, the **IDoc sent to SAP system or external program** message appears.
1. To check for processing errors, use the **sm58** transaction code (T-Code) with the **/n** prefix.
The SAP connection parameters for a logic app workflow don't have a language pro
### Confirm transaction explicitly
-When you send transactions to SAP from Logic Apps, this exchange happens in two steps as described in the SAP document, [Transactional RFC Server Programs](https://help.sap.com/doc/saphelp_nwpi71/7.1/22/042ad7488911d189490000e829fbbd/content.htm?no_cache=true). By default, the **Send to SAP** action handles both the steps for the function transfer and for the transaction confirmation in a single call. The SAP connector gives you the option to decouple these steps. You can send an IDoc and rather than automatically confirm the transaction, you can use the explicit **\[IDOC] Confirm transaction ID** action.
+When you send transactions to SAP from Azure Logic Apps, this exchange happens in two steps as described in the SAP document, [Transactional RFC Server Programs](https://help.sap.com/doc/saphelp_nwpi71/7.1/22/042ad7488911d189490000e829fbbd/content.htm?no_cache=true). By default, the **Send to SAP** action handles both the steps for the function transfer and for the transaction confirmation in a single call. The SAP connector gives you the option to decouple these steps. You can send an IDoc and rather than automatically confirm the transaction, you can use the explicit **\[IDOC] Confirm transaction ID** action.
-This capability to decouple the transaction ID confirmation is useful when you don't want to duplicate transactions in SAP, for example, in scenarios where failures might happen due to causes such as network issues. By confirming the transaction ID separately, the transaction is only completed one time in your SAP system.
+This capability to decouple the transaction ID confirmation is useful when you don't want to duplicate transactions in SAP, for example, in scenarios where failures might happen due to causes such as network issues. When the **Send to SAP** action separately confirms the transaction ID, the SAP system completes the transaction only once.
Here's an example that shows this pattern:
-1. Create a blank logic app and add the Request trigger.
+1. Create a blank logic app workflow, and add the Request trigger.
1. From the SAP connector, add the **\[IDOC] Send document to SAP** action. Provide the details for the IDoc that you send to your SAP system.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Identify the ML problem** to be solved: classification, forecasting, regression, computer vision or NLP.
-1. **Choose whether you want to a code-first experience or a no-code studio web experience**: Users who prefer a code-first experience can use the [AzureML SDKv2](how-to-configure-auto-train.md) or the [AzureML CLIv2](how-to-train-cli.md). Get started with [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). Users who prefer a limited/no-code experience can use the [web interface](how-to-use-automated-ml-for-ml-models.md) in Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
+1. **Choose whether you want a code-first experience or a no-code studio web experience**: Users who prefer a code-first experience can use the [AzureML SDKv2](how-to-configure-auto-train.md) or the [AzureML CLIv2](how-to-train-cli.md). Get started with [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). Users who prefer a limited/no-code experience can use the [web interface](how-to-use-automated-ml-for-ml-models.md) in Azure Machine Learning studio at [https://ml.azure.com](https://ml.azure.com/). Get started with [Tutorial: Create a classification model with automated ML in Azure Machine Learning](tutorial-first-experiment-automated-ml.md).
1. **Specify the source of the labeled training data**: You can bring your data to AzureML in [many different ways](concept-data.md).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
You can [Add RStudio or Posit Workbench (formerly RStudio Workbench)](how-to-cre
|ONNX packages|`keras2onnx`</br>`onnx`</br>`onnxconverter-common`</br>`skl2onnx`</br>`onnxmltools`| |Azure Machine Learning Python samples||
+Python packages are all installed in the **Python 3.8 - AzureML** environment. The compute instance has Ubuntu 20.04 as the base OS.
+Python packages are all installed in the **Python 3.8 - AzureML** environment. Compute instance has Ubuntu 20.04 as the base OS.
## Accessing files
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
The following table highlights the key differences between managed online endpoi
| **Out-of-box logging** | [Azure Logs and Log Analytics at endpoint level](how-to-deploy-managed-online-endpoints.md#optional-integrate-with-log-analytics) | Unsupported | | **Application Insights** | Supported | Supported | | **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |
-| **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported |
+| **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) | Supported |
| **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level | | **Mirrored traffic** | [Supported](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported | | **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) |
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
In Visual Studio, you can do the same clone operation. The following screenshot
![Screenshot of Visual Studio with the GitHub connection displayed](./media/vm-do-ten-things/VSGit.png)
-You can find more information on using Git to work with your GitHub repository from resources available on github.com. The [cheat sheet](https://services.github.com/on-demand/downloads/github-git-cheat-sheet.pdf) is a useful reference.
+You can find more information on using Git to work with your GitHub repository from resources available on github.com. The [cheat sheet](https://training.github.com/downloads/github-git-cheat-sheet/) is a useful reference.
## Access Azure data and analytics services ### Azure Blob storage
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
See the following sample YAML files for each NLP task.
See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-text-classification-multiclass-task-sentiment.ipynb)
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-text-classification-multilabel-task-paper-cat.ipynb)
+https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-multilabel-paper-cat.ipynb)
* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb)
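As a lighter-weight companion to the notebooks, the following sketch shows a multi-class text classification job with the AzureML Python SDK v2. The data asset names, target column, and GPU compute target are illustrative assumptions; NLP tasks expect both training and validation MLTable assets.

```python
# Sketch of an AutoML NLP multi-class text classification job (azure-ai-ml SDK v2).
# Workspace details and data asset names are placeholders.
from azure.ai.ml import MLClient, Input, automl
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

text_classification_job = automl.text_classification(
    compute="gpu-cluster",                  # GPU compute is assumed for NLP training
    experiment_name="automl-nlp-multiclass",
    training_data=Input(type="mltable", path="azureml:sentiment-train:1"),
    validation_data=Input(type="mltable", path="azureml:sentiment-valid:1"),
    target_column_name="Sentiment",
    primary_metric="accuracy",
)
text_classification_job.set_limits(timeout_minutes=120)

returned_job = ml_client.jobs.create_or_update(text_classification_job)
print(returned_job.studio_url)
```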
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-fairness-aml.md
To compare multiple models and see how their fairness assessments differ, you ca
## Upload unmitigated and mitigated fairness insights
-You can use Fairlearn's [mitigation algorithms](https://fairlearn.org/main/user_guide/mitigation.html), compare their generated mitigated model(s) to the original unmitigated model, and navigate the performance/fairness trade-offs among compared models.
+You can use Fairlearn's [mitigation algorithms](https://fairlearn.org/main/user_guide/mitigation/index.html), compare their generated mitigated model(s) to the original unmitigated model, and navigate the performance/fairness trade-offs among compared models.
-To see an example that demonstrates the use of the [Grid Search](https://fairlearn.org/main/user_guide/mitigation.html#grid-search) mitigation algorithm (which creates a collection of mitigated models with different fairness and performance trade offs) check out this [sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/contrib/fairness/fairlearn-azureml-mitigation.ipynb).
+To see an example that demonstrates the use of the [Grid Search](https://fairlearn.org/main/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) mitigation algorithm (which creates a collection of mitigated models with different fairness and performance trade offs) check out this [sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/contrib/fairness/fairlearn-azureml-mitigation.ipynb).
Uploading multiple models' fairness insights in a single Run allows for comparison of models with respect to fairness and performance. You can click on any of the models displayed in the model comparison chart to see the detailed fairness insights of the particular model.
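To make the Grid Search step concrete, here's a small sketch assuming a scikit-learn classifier and a single synthetic sensitive feature; the dataset and names are illustrative only and aren't from the sample notebook.

```python
# Sketch: Fairlearn GridSearch mitigation over a synthetic binary classification task.
from fairlearn.reductions import GridSearch, DemographicParity
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
import numpy as np

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
sensitive = np.random.RandomState(0).choice(["groupA", "groupB"], size=500)

# GridSearch trains a family of models that trade off accuracy against demographic parity.
sweep = GridSearch(
    estimator=LogisticRegression(solver="liblinear"),
    constraints=DemographicParity(),
    grid_size=10,
)
sweep.fit(X, y, sensitive_features=sensitive)

# Each mitigated candidate can then be compared against the unmitigated model, for
# example before uploading the fairness insights of all models into a single run.
mitigated_models = sweep.predictors_
print(len(mitigated_models), "candidate models produced")
```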
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
__Allow__ outbound traffic over __TCP port 443__ to the following FQDNs. Replace
* `<region>.batch.azure.com` * `<region>.service.batch.com`
-* `*.blob.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
-* `*.queue.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
-* `*.table.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
> [!IMPORTANT] > If you use one firewall for multiple Azure services, having outbound storage rules impacts other services. In this case, limit the source IP of the outbound storage rule to the address space of the subnet that contains your compute instance and compute cluster resources. This limits the rule to the compute resources in the subnet.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In the `az ml compute create` command, replace the following values:
* `AmlCompute` or `ComputeInstance`: Specifying `AmlCompute` creates a *compute cluster*. `ComputeInstance` creates a *compute instance*. ```azurecli
-az ml compute create --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute or ComputeInstance --enable-node-public-ip false
+# create a compute cluster with no public IP
+az ml compute create --name cpu-cluster --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute --set enable_node_public_ip=False
+
+# create a compute instance with no public IP
+az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type ComputeInstance --set enable_node_public_ip=False
``` # [Python](#tab/python)
In the `az ml compute create` command, replace the following values:
* `AmlCompute` or `ComputeInstance`: Specifying `AmlCompute` creates a *compute cluster*. `ComputeInstance` creates a *compute instance*. ```azurecli
-az ml compute create --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute or ComputeInstance
+# create a compute cluster with a public IP
+az ml compute create --name cpu-cluster --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute
+
+# create a compute instance with a public IP
+az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type ComputeInstance
``` # [Python](#tab/python)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace show` command to show the workspace information. The v1 extension does not return this information. ```azurecli-interactive
- az ml workspace show -w yourworkspacename -g resourcegroupname --query 'container_registry'
+ az ml workspace show -n yourworkspacename -g resourcegroupname --query 'container_registry'
``` This command returns a value similar to `"/subscriptions/{GUID}/resourceGroups/{resourcegroupname}/providers/Microsoft.ContainerRegistry/registries/{ACRname}"`. The last part of the string is the name of the Azure Container Registry for the workspace.
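If a script needs only the registry name, the final path segment of the returned resource ID can be split off; a minimal sketch using the placeholder value shown above:

```python
# Extract the Azure Container Registry name from a full resource ID.
acr_id = (
    "/subscriptions/{GUID}/resourceGroups/{resourcegroupname}"
    "/providers/Microsoft.ContainerRegistry/registries/{ACRname}"
)

# Azure resource IDs are '/'-delimited; the registry name is the last segment.
acr_name = acr_id.rstrip("/").split("/")[-1]
print(acr_name)
```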
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
This step deploys the training pipeline to the Azure Machine Learning workspace
> Make sure you understand the [Architectural Patterns](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) of the solution accelerator before you checkout the MLOps v2 repo and deploy the infrastructure. In examples you'll use the [classical ML project type](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2#classical-machine-learning-architecture). ### Run Azure infrastructure pipeline
-1. Go to the first repo you imported in the previous section, `mlops-v2-ado-demo`, select the **config-infra-dev.yml** file.
+1. Go to the first repo you imported in the previous section, `mlops-v2-ado-demo`. Make sure you have the `main` branch selected and then select the **config-infra-dev.yml** file.
![Screenshot of Repo in ADO.](./media/how-to-setup-mlops-azureml/ADO-repo.png)
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
The following configurations are supported:
| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 | | AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.3 | 1.11.0 |
+> [!NOTE]
+> Currently, due to underlying CUDA and cluster incompatibilities, only AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu (CUDA 11.3) can be used on the [NC series](../virtual-machines/nc-series.md).
+ ### PyTorch **Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
machine-learning Tutorial Create Secure Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-template.md
To run the Bicep template, use the following commands from the `machine-learning
-1. To run the template, use the following command:
-
- # [Azure CLI](#tab/cli)
+1. To run the template, use the following command. Replace `prefix` with a unique prefix; the prefix is used when creating the Azure resources required for Azure Machine Learning. Replace `securepassword` with a secure password for the jump box login account (`azureadmin` in the examples below):
> [!TIP]
- > The `prefix` must be 5 or less characters.
+ > The `prefix` must be 5 or less characters. It can't be entirely numeric or contain the following characters: `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`.
+
+ # [Azure CLI](#tab/cli)
```azurecli az deployment group create \ --resource-group exampleRG \ --template-file main.bicep \ --parameters \
- prefix=myprefix \
+ prefix=prefix \
dsvmJumpboxUsername=azureadmin \ dsvmJumpboxPassword=securepassword ``` # [Azure PowerShell](#tab/ps1)
- > [!TIP]
- > The `prefix` must be 5 or less characters.
- ```azurepowershell $dsvmPassword = ConvertTo-SecureString "mysecurepassword" -AsPlainText -Force New-AzResourceGroupDeployment -ResourceGroupName exampleRG ` -TemplateFile ./main.bicep `
- -prefix "myprefix" `
+ -prefix "prefix" `
-dsvmJumpboxUsername "azureadmin" ` -dsvmJumpboxPassword $dsvmPassword ```
After the template completes, use the following steps to connect to the DSVM:
1. From the DSVM desktop, start __Microsoft Edge__ and enter `https://ml.azure.com` as the address. Sign in to your Azure subscription, and then select the workspace created by the template. The studio for your workspace is displayed.
+## Troubleshooting
+
+### Error: Windows computer name cannot be more than 15 characters long, be entirely numeric, or contain the following characters
+
+This error can occur when the name for the DSVM jump box is greater than 15 characters or includes one of the following characters: `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`.
+
+When using the Bicep template, the jump box name is generated programmatically using the prefix value provided to the template. To make sure the name does not exceed 15 characters or contain any invalid characters, use a prefix that is 5 characters or less and do not use any of the following characters in the prefix: `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`.
+
+When using the Terraform template, the jump box name is passed using the `dsvm_name` parameter. To avoid this error, use a name that is not greater than 15 characters and does not use any of the following characters as part of the name: `~ ! @ # $ % ^ & * ( ) = + _ [ ] { } \ | ; : . ' " , < > / ?`.
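A hypothetical helper that mirrors these constraints can catch an invalid prefix or jump box name before you run the template; the function below simply restates the rules above and isn't part of the template itself.

```python
# Sketch: validate a Bicep template prefix against the documented constraints.
INVALID_CHARS = set("~!@#$%^&*()=+_[]{}\\|;:.'\",<>/?")

def is_valid_prefix(prefix: str) -> bool:
    """Return True if the prefix is 5 characters or fewer, not entirely numeric,
    and free of the special characters listed in the troubleshooting note."""
    if not prefix or len(prefix) > 5:
        return False
    if prefix.isdigit():          # entirely numeric names are rejected
        return False
    return not any(ch in INVALID_CHARS for ch in prefix)

print(is_valid_prefix("mlsec"))   # True
print(is_valid_prefix("my_ml"))   # False - underscore isn't allowed
```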
+ ## Next steps > [!IMPORTANT]
mysql Tutorial Logic Apps With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-logic-apps-with-mysql.md
Last updated 12/15/2022
[!INCLUDE [logic-apps-sku-consumption](../../../includes/logic-apps-sku-consumption.md)]
-This quickstart shows how to create an automated workflow using Azure Logic Apps with Azure database for MySQL Flexible Server.
+This quickstart shows how to create an automated workflow using Azure Logic Apps with the Azure Database for MySQL connector (Preview).
## Prerequisites
mysql Tutorial Power Automate With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md
Power Automate is a service that helps you create automated workflows between yo
- Connect to more than 500 data sources or any publicly available API - Perform CRUD (create, read, update, delete) operations on data
-In this quickstart shows how to create an automated workflow usingPower automate flow with [Azure database for MySQL connector](/connectors/azuremysql/).
+This quickstart shows how to create an automated workflow using a Power Automate flow with the [Azure Database for MySQL connector (Preview)](/connectors/azuremysql/).
## Prerequisites
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
Title: Azure Network Watcher | Microsoft Docs
+ Title: Azure Network Watcher
description: Learn about Azure Network Watcher's monitoring, diagnostics, metrics, and logging capabilities for resources in a virtual network. -
-# Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
- ms.assetid: 14bc2266-99e3-42a2-8d19-bd7257fec35e Previously updated : 10/11/2022 Last updated : 01/23/2023 -+
+# Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
# What is Azure Network Watcher?
-Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.
+Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including virtual machines (VMs), virtual networks (VNets), application gateways, load balancers, etc.
> [!Note]
-> It is not intended for and will not work for PaaS monitoring or Web analytics.
+> Network Watcher isn't intended for and will not work for PaaS monitoring or Web analytics.
-For information about analyzing traffic from a network security group, see [Network Security Group](network-watcher-nsg-flow-logging-overview.md) and [Traffic Analytics](traffic-analytics.md).
+For information about analyzing traffic from a network security group, see [Network security group flow logging](network-watcher-nsg-flow-logging-overview.md) and [Traffic analytics](traffic-analytics.md).
## Monitoring
Connection monitor also provides the minimum, average, and maximum latency obser
As resources are added to a virtual network, it can become difficult to understand what resources are in a virtual network and how they relate to each other. The *topology* capability enables you to generate a visual diagram of the resources in a virtual network and the relationships between the resources. The following image shows an example topology diagram for a virtual network that has three subnets, two VMs, network interfaces, public IP addresses, network security groups, route tables, and the relationships between the resources:
-![Topology view](./media/network-watcher-monitoring-overview/topology.png)
- You can download an editable version of the picture in SVG format. Learn more about [topology view](view-network-topology.md). ## Diagnostics
Advanced filtering options and fine-tuned controls, such as the ability to set t
### Diagnose problems with an Azure Virtual network gateway and connections
-Virtual network gateways provide connectivity between on-premises resources and Azure virtual networks. Monitoring gateways and their connections are critical to ensuring communication are not broken. The *VPN diagnostics* capability provides the ability to diagnose gateways and connections. VPN diagnostics diagnoses the health of the gateway, or gateway connection, and informs you whether a gateway and gateway connections are available. If the gateway or connection is not available, VPN diagnostics tells you why, so you can resolve the problem. Learn more about VPN diagnostics by completing the [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md) tutorial.
+Virtual network gateways provide connectivity between on-premises resources and Azure virtual networks. Monitoring gateways and their connections is critical to ensuring communication isn't broken. The *VPN diagnostics* capability provides the ability to diagnose gateways and connections. VPN diagnostics diagnoses the health of the gateway, or gateway connection, and informs you whether a gateway and gateway connections are available. If the gateway or connection isn't available, VPN diagnostics tells you why, so you can resolve the problem. Learn more about VPN diagnostics by completing the [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md) tutorial.
### Determine relative latencies between Azure regions and internet service providers
The effective security rules for a network interface are a combination of all se
## Metrics
-There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits) to the number of network resources that you can create within an Azure subscription and region. If you meet the limits, you're unable to create more resources within the subscription or region. The *network subscription limit* capability provides a summary of how many of each network resource you have deployed in a subscription and region, and what the limit is for the resource. The following picture shows the partial output for network resources deployed in the East US region for an example subscription:
+There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/network-watcher/toc.json#azure-resource-manager-virtual-networking-limits) to the number of network resources that you can create within an Azure subscription and region. If you meet the limits, you're unable to create more resources within the subscription or region. The *Usage + quotas* capability provides a summary of how many of each network resource you've deployed in a subscription and region, and what the limit is for the resource. The following picture shows the partial output for network resources deployed in the East US region for an example subscription:
-![Subscription limits](./media/network-watcher-monitoring-overview/subscription-limit.png)
The information is helpful when planning future resource deployments.
The information is helpful when planning future resource deployments.
Network security groups (NSG) allow or deny inbound or outbound traffic to a network interface in a VM. The *NSG flow log* capability allows you to log the source and destination IP address, port, protocol, and whether traffic was allowed or denied by an NSG. You can analyze logs using a variety of tools, such as Power BI and the *traffic analytics* capability. Traffic analytics provides rich visualizations of data written to NSG flow logs. The following picture shows some of the information and visualizations that traffic analytics presents from NSG flow log data:
-![Traffic analytics](./media/network-watcher-monitoring-overview/traffic-analytics.png)
-Learn more about NSG flow logs by completing the [Log network traffic to and from a virtual machine](network-watcher-nsg-flow-logging-portal.md) tutorial and how to implement [traffic analytics](traffic-analytics.md).
+To learn more about NSG flow logs, see [Tutorial: Log network traffic to and from a virtual machine](network-watcher-nsg-flow-logging-portal.md) and [traffic analytics](traffic-analytics.md).
### View diagnostic logs for network resources
You can enable diagnostic logging for Azure networking resources such as network
## Network Watcher automatic enablement
-When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There is no impact on your resources or associated charge for automatically enabling Network Watcher. For more information, see [Network Watcher create](network-watcher-create.md).
+When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your virtual network's region. There's no impact on your resources or associated charge for automatically enabling Network Watcher. For more information, see [Network Watcher create](network-watcher-create.md).
## Next steps
-* You now have an overview of Azure Network Watcher. To get started using Network Watcher, diagnose a common communication problem to and from a virtual machine using IP flow verify. To learn how, see the [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md) quickstart.
-
-* [Learn module: Introduction to Azure Network Watcher](/training/modules/intro-to-azure-network-watcher).
+- [Quickstart: Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
+- [Tutorial: Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md).
+- [Tutorial: Monitor network communication between two virtual machines](connection-monitor.md).
+- [Learn module: Introduction to Azure Network Watcher](/training/modules/intro-to-azure-network-watcher).
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
Title: Azure network round-trip latency statistics | Microsoft Docs
+ Title: Azure network round-trip latency statistics
description: Learn about round-trip latency statistics between Azure regions. Previously updated : 06/08/2021 Last updated : 06/30/2022 - + # Azure network round-trip latency statistics Azure continuously monitors the latency (speed) of core areas of its network using internal monitoring tools as well as measurements collected by [ThousandEyes](https://thousandeyes.com), a third-party synthetic monitoring service.
The monthly Percentile P50 round trip times between Azure regions for the past 3
:::image type="content" source="media/azure-network-latency/azure-network-latency-thmb-july-2022.png" alt-text="Chart of the inter-region latency statistics as of June 30, 2022." lightbox="media/azure-network-latency/azure-network-latency-july-2022.png":::
-> [IMPORTANT!}
+> [!IMPORTANT]
> Monthly latency numbers across Azure regions do not change regularly. Given this, you can expect an update of this table every 6 to 9 months outside of the addition of new regions. When new regions come online, we will update this document as soon as data is available. ## Next steps Learn about [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).++
networking Create Zero Trust Network Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/create-zero-trust-network-web-apps.md
Other Azure services that will be deployed and configured and not explicitly lis
First up, you create a resource group to store all of the created resources. > [!IMPORTANT]
-> You may use your own resource group name and desired region. For this how-to, we will deploy all resources to the same resource group, **myResourceGroup**, deploy all resources to the **East US** Azure region. Use these and your Azure subscription as the default settings throughout the article.
+> You may use your own resource group name and desired region. For this how-to, we will deploy all resources to the same resource group, **myResourceGroup**, and deploy all resources to the **East US** Azure region. Use these and your Azure subscription as the default settings throughout the article.
+>
> Creating all your resources in the same resource group is good practice for keeping track of resources used, and makes it easier to clean up a demonstration or non-production environment. 1. From the **Azure portal**, search for and select **Resource groups**.
In this step, you'll deploy [Azure Key Vault](../key-vault/general/overview.md)
1. From the Azure portal menu, or from the **Home** page, select **Create a resource**. 1. In the **Search** box, enter **Key Vault** and select **Key Vault** from the results. 1. In the **Key Vault** creation page, select **Create**.
-1. In the **Create key vault** page, enter or select these settings:
+1. In the **Create key vault** page, enter or select these settings along with default values:
| Setting | Value | | | |
- | Subscription | Select a subscription. |
- | Resource Group | Select the **myResourceGroup** resource group. |
| Name | Enter a unique name for your key vault. This example will use **myKeyVaultZT**. |
- | Location | Select **East US**. |
| Pricing tier | Select **Standard**. | | Days to retain deleted vaults | Enter **7**. | | Purge Protection | Select **Disable purge protection (allow key vault and objects to be purged during retention period)**. |
In this task, you'll upload your trusted wildcard certificate for your public do
1. Navigate to the previously created key vault, **myKeyVaultZT**. 1. In the **Key Vault** page, select **Certificates** under **Objects**. 1. On the **Certificates** page, select **+ Generate/Import**
-1. On the **Create a certificate** page, specify the following settings:
+1. On the **Create a certificate** page, enter or select these settings along with default values:
| Setting | Value | | | |
In this task, you'll upload your trusted wildcard certificate for your public do
1. Select **Create**. ### Deploy a user-assigned managed identity+ You'll create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) and give that identity access to the Azure Key Vault. The application gateway and Azure Firewall will then use this identity to retrieve the certificate from the vault. 1. In the **Search** box, enter and select **Managed identities**. 1. Select **+ Create**.
-1. On the **Basics** tab, enter the following settings:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | Region | Select **East US** |
- | Name | Select **myManagedIDappGW** |
-
+1. On the **Basics** tab, select **myManagedIDappGW** for **Name**.
1. Select **Review + Create** and select **Create**. #### Assign access to Key Vault for the managed identity
You'll deploy a hub and spoke architecture for your web application. The hub net
1. Select **+ Create a resource** in the upper left-hand corner of the portal. 1. In the search box, enter **Virtual Network**. Select **Virtual Network** in the search results. 1. In the **Virtual Network** page, select **Create**.
-1. In **Create virtual network**, enter or select the following settings on the **Basics** tab:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | **myResourceGroup** |
- | **Instance details** | |
- | Name | Enter **myHubVNet**. |
- | Region | Select **(US) East US**. |
+1. In **Create virtual network**, on the **Basics** tab, enter **hub-vnet** for **Name**.
-1. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page and enter in the following settings:
+1. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page and enter these settings along with default values:
| Setting | Value | |--|-|
You'll deploy a hub and spoke architecture for your web application. The hub net
| Setting | Value | |--|-|
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | **myResourceGroup** |
| **Instance details** | |
- | Name | **mySpokeVNet** |
- | Region | **(US) East US** |
+ | Name | **spoke-vnet** |
| IPv4 address space | **172.16.0.0/16** | | Select **+ Add subnet** | | | Subnet name | Enter **AppGwSubnet**. |
You'll deploy a hub and spoke architecture for your web application. The hub net
| Select **Add**. | | 1. Select the **Review + create** > **Create**.
-1. Navigate to the **myHubVNet** that you previously created.
+1. Navigate to the **hub-vnet** that you previously created.
1. From the **Hub virtual network** page, select **Peerings** from under **Settings**. 1. In the **Peerings** page, select **+ Add**.
-1. In the **Add peering** page, enter theIn This virtual network section, specify the following settings:
+1. In the **Add peering** page, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll deploy a hub and spoke architecture for your web application. The hub net
| Virtual network deployment model | **Resource manager** | | I know my resource ID | Keep default of **Unselected** | | Subscription | Select your subscription |
- | Virtual network | **mySpokeVNet** |
+ | Virtual network | **spoke-vnet** |
| Traffic to remote virtual network | **Allow** | | Traffic forwarded from remote virtual network | **Allow** | | Virtual network gateway or Route Server | **None** |
To securely access the web app, a fully qualified DNS name must be configured in
1. Select **+ Create a resource** in the upper left-hand corner of the portal. 1. In the Search box, enter **DNS Zone**. 1. From the results list, select **DNS Zone** > **Create**.
-1. On the **Basics** tab, enter the following settings:
-
- | Setting | Value |
- |--| - |
- | Subscription | Select your subscription |
- | Resource Group | **myResourceGroup** |
- | Name | Enter your domain name |
+1. On the **Basics** tab, enter your domain name.
1. Select **Review + Create** > **Create**. >[!NOTE]
To securely access the web app, a fully qualified DNS name must be configured in
You'll deploy [Azure App Service](../app-service/overview.md) for hosting the secured web application. 1. In the search bar, type **App Services**. Under Services, select **App Services**. 1. In the **App Services** page, select **+ Create**.
-1. In the **Create Web App** page, enter or select the following on the **Basics** tab:
+1. In the **Create Web App** page, enter or select these settings along with default values on the **Basics** tab:
| Setting | Value | |--| - |
- | **Project Details** | |
- | Resource group | Select **myResourceGroup**. |
| **Instance Details** | | | Name | Enter a globally unique name for your web app. For example, **myWebAppZT1**. | | Publish | Select **Code** | | Runtime stack | Select **.NET 6 (LTS)**. | | Operating System | Select **Windows** |
- | Region | Select **East US**. |
| **Pricing Plans** | | | Windows Plan (East US) | Select **Create new** and enter **zt-asp** for the name. | | Pricing plan | Leave default of **Standard S1** or select another plan from the menu. |
You'll deploy [Azure App Service](../app-service/overview.md).for hosting the se
1. From the **App Service** page, select **Networking** from under **Settings**. 1. In the **Inbound Traffic** section, select **Private endpoints**. 1. In the **Private endpoint connection** page, select **+ Add** > **Express**.
-1. In the **Add Private Endpoint** pane, enter or select the following settings:
+1. In the **Add Private Endpoint** pane, enter or select these settings along with default values:
| Setting | Value | |--| - | | Name | **pe-appservice** |
- | Subscription | Select your subscription |
- | Virtual network | **mySpokeVNet** |
+ | Virtual network | **spoke-vnet** |
| Subnet | **App1** | 1. Select **OK**.
You'll deploy an application gateway and the edge ingress solution for the app t
1. In the search bar, type **application gateways**. Under Services, select **Application gateways**. 1. In the **Load balancing | Application gateway** page, select **+ Create**.
-1. On the **Basics** tab, enter these settings for the following application gateway settings:
+1. On the **Basics** tab, enter or select these settings along with default values:
| Setting | Value | |--| - |
- | **Project details** | |
- | Resource group | Select **myResourceGroup**.|
| **Instance details** | | | Application gateway name | Enter **myAppGateway**. |
- | Region | Select **East US**. |
| Tier | Select **WAF v2**. | | Enable autoscaling | Select **No**. | | Instance count | Enter **1**. |
You'll deploy an application gateway and the edge ingress solution for the app t
| HTTP2 | Select **Disabled**. | | WAF Policy | Select **Create new**. <br/> Enter **myWAFpolicy** for the WAF policy name and select **Ok**.</br>| | **Configure virtual network** | |
- | Virtual network| Select **mySpokeVNet**. |
+ | Virtual network| Select **spoke-vnet**. |
| Subnet | Select **AppGwSubnet (172.16.0.0/24)**. | 1. Select **Next: Frontends >** and configure the Frontends with the following settings:
You'll deploy an application gateway and the edge ingress solution for the app t
| Setting | Value | |--| - |
- | Rule name: **myRouteRule1** |
- | Priority: **100** |
-1. Under the **Listener** tab, enter the following settings:
+ | Rule name: Enter **myRouteRule1** |
+ | Priority: Enter **100** |
+
+1. Under the **Listener** tab, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll deploy an application gateway and the edge ingress solution for the app t
> [!NOTE] > The FQDN used for **Host name** must match the DNS record that you will create in a later step. If necessary, you can come back to the Listener configuration and change this to the DNS record that you create.
-1. Select the **Backend targets** tab and enter the following settings:
+1. Select the **Backend targets** tab.
+1. On the **Backend targets** tab, enter or select these settings along with default values:
| Setting | Value | |--| - |
Now, you'll add a custom health probe for your backend pool.
1. Navigate to the previously created application gateway. 1. From the gateway, select **Health probes** under **Settings**. 1. On the Health probe, select **+ Add**.
-1. On the Add health probe page, specify the following settings:
+1. On the Add health probe page, enter or select these settings along with default values:
| Setting | Value | |--| - |
Now, you'll add a custom health probe for your backend pool.
1. To retrieve the Public IP address for your application gateway, navigate to the **Overview** page of the application gateway and copy the **Frontend Public IP Address** listed. 1. Navigate to the DNS zone that you previously created. 1. From the DNS zone, select **+ Record set**.
-1. On the **Add record set** pane, enter or select the following settings:
+1. On the **Add record set** pane, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll deploy Azure Firewall to perform packet inspection between the applicatio
1. In the Azure portal, search for and select **Firewalls**. 1. On the Firewalls page, select **+ Create**.
-1. On the **Create a firewall** page, specify the following settings:
+1. On the **Create a firewall** page, enter or select these settings along with default values:
| Setting | Value | |-|-
- | Subscription | Select your subscription. |
- | Resource group | Enter **myResourceGroup**. |
| Name | Enter **myFirewall**. |
- | Region | Select **East US**. |
| Availability zone | Select **None**. | | Firewall tier | Select **Premium**. | | Firewall policy | Select **Add new**.| | **Create a new Firewall Policy** | | | Policy name | Enter **myFirewalPolicy**. |
- | Region | Select **East US**. |
| Policy tier | Select **Premium** and select **OK**. | | Choose a virtual network | Select **Use existing**. |
- | Virtual network | Select **myHubVNet**. |
+ | Virtual network | Select **hub-vnet**. |
| Public IP address | Select **Add new**. <br/> Enter **myFirewallpip** and select **OK**.</br> | 1. Select **Review + create** and then select **Create**. This deployment can take up to 30 minutes to complete.
In this task, you'll configure the firewall policy used for packet inspection.
1. In the **Firewall Policy** page, select the **IDPS** under **Settings**. 1. On the **IDPS** page, select **Alert and deny** and then select **Apply**. Wait for the firewall policy to complete updating before proceeding to the next step. 1. Select **TLS inspection** under **Settings**
-1. On the TLS inspection page, select **Enabled** and then specify the following settings:
+1. On the TLS inspection page, select **Enabled**. Then enter or select these settings along with default values:
| Setting | Value | |--|--|
In this task, you'll configure the firewall policy used for packet inspection.
1. Select **Apply**. 1. From the firewall policy, select **Network rules**. 1. In the **Network rules** page, select **Add a rule collection**.
-1. In the **Add a rule collection** page, enter or select the following settings:
+1. In the **Add a rule collection** page, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll create a route table with user-defined route force traffic all App Servic
1. Type **route tables** in the search. Under Services, select **Route tables**. 1. In the Route tables page, select **+ Create**.
-1. On the Create Route table page, specify the following settings:
+1. On the Create Route table page, enter or select these settings along with default values:
+
| Setting | Value | |--| - |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | Region | Select **East US**. |
| Name | Enter **myRTspoke2hub**. | | Propagate gateway routes | Select **Yes** |+ 1. Select **Review + Create**, and then select **Create**. 1. Navigate back to the Route Tables page and then select **+ Create**.
-1. On the **Create Route** table page, specify the following settings:
+1. On the **Create Route** table page, enter or select these settings along with default values:
+
| Setting | Value | |--| - |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | Region | Select **East US**. |
| Name | Enter **myRTapp2web**. | | Propagate gateway routes | Select **Yes** |+ 1. Select **Review + Create**, and then select **Create**. ### Configuring route tables 1. Navigate to the **myRTspoke2hub** route table. 1. From the Route Table, select the **Routes** page under **Settings** and select **+ Add**.
-1. On the **Add Route** pane, specify the following settings:
+1. On the **Add Route** pane, enter or select these settings along with default values:
+ | Setting | Value | |--| - | | Route name | Enter **ToAppService**. |
You'll create a route table with user-defined route force traffic all App Servic
| Destination IP addresses/CIDR ranges | Enter **172.16.1.0/24**. | | Next hop type | Select **Virtual appliance**. | | Next hop address | The private IP address of the Azure Firewall. For example, **192.168.100.4**. |+ 1. Select **Add**. 1. From the Route table, select **Subnets** under **Settings** and select **+ Associate**.
-1. On the **Associate subnet** pane, select the **mySpokeVNet** virtual network, and then select the **AppGwSubnet** subnet.
+1. On the **Associate subnet** pane, select the **spoke-vnet** virtual network, and then select the **AppGwSubnet** subnet.
1. Select **OK**. 1. After the association appears, select the link to the **AppGwSubnet** association. 1. In the **Network policy for private endpoints** section, select **Route Tables** and select **Save**. 1. Navigate to the **myRTapp2web** route table. 1. From the **Route Table** page, select **Routes** under **Settings**.
-1. In the Add Route pane, specify the following settings:
+1. In the Add Route pane, enter or select these settings along with default values:
+
| Setting | Value | |--| - | | Route name | Enter **ToAppGW**. |
You'll create a route table with user-defined route force traffic all App Servic
| Destination IP addresses/CIDR ranges | Enter **172.16.0.0/24**. | | Next hop type | Select **Virtual appliance**. | | Next hop address | Enter the private IP address of the Azure Firewall. For example, **192.168.100.4**. |+ 1. Select **Add**. 1. Select the **Subnets** page under settings, and select **+ Associate**.
-1. On the **Associate subnet** pane, select the **mySpokeVNet** virtual network, and then select the **App1** subnet.
+1. On the **Associate subnet** pane, select the **spoke-vnet** virtual network, and then select the **App1** subnet.
1. Select **OK**. 1. Repeat this process for another subnet by selecting **+ Associate**.
-1. Select the **mySpokeVNet** virtual network, and then select the **AppGwSubnet** subnet. Select **OK**.
+1. Select the **spoke-vnet** virtual network, and then select the **AppGwSubnet** subnet. Select **OK**.
1. After the association appears, select the link to the **App1** association. 1. In the **Network policy for private endpoints** section, select **Network security groups** and **Route Tables**, and then select **Save**. - ### Test again At this point, you should be able to connect to the App Service through the application gateway. Navigate to the URL of the DNS record that you created to validate that it resolves to the application gateway and that the default App Service page is displayed. If the page loads with an error, check the **Backend Health** page of the gateway for any errors relating to the backend pool, and then check your backend settings. Also verify that you have the routes configured correctly.
You'll deploy network security groups to prevent other subnets from accessing th
1. From the Azure portal, search for and select **Network security groups**. 1. In the **Network security groups** page, select **Create**.
-1. On the Basics tab, enter or select the following settings:
-
- | Setting | Value |
- |--| - |
- | Subscription | Select your subscription. |
- | Resource group | Enter **myResourceGroup**. |
- | Name | Enter **nsg-app1**. |
- | Region | Select **East US**. |
-
+1. On the Basics tab, enter **nsg-app1** in **Name**.
1. Select **Review + Create** and then select **Create**. 1. Navigate to the newly deployed network security group. 1. In the **network security group** page, select **Inbound security rules** under **Settings**. 1. From **Inbound security rules** page, select **Add**.
-1. On the **Add inbound security rule** pane, enter or select the following settings:
+1. On the **Add inbound security rule** pane, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll deploy network security groups to prevent other subnets from accessing th
1. Select **Add**. 1. From **Inbound security rules** page, select **Add**.
-1. On the **Add inbound security rule** pane, enter or select the following settings:
+1. On the **Add inbound security rule** pane, enter or select these settings along with default values:
| Setting | Value | |--| - |
You'll deploy network security groups to prevent other subnets from accessing th
1. Select **Add**. 1. From the **Network security group** page, select **Subnets** under **Settings**. 1. On the **Subnets** page, select **Associate**.
-1. In the **Associate subnet** pane, select the **mySpokeVNet** virtual network.
+1. In the **Associate subnet** pane, select the **spoke-vnet** virtual network.
1. In the Subnet drop-down, select the **App1** subnet. 1. Select **OK**. ## Clean Up
-You'll clean up your environment by deleting the resource group containing all resources, **myResourceGroup**. Due to soft delete rules, Azure Key Vault may not be deleted. Learn how to delete [Azure Key Vault]
+You'll clean up your environment by deleting the resource group containing all resources, **myResourceGroup**.
## Next steps
openshift Cluster Administration Cluster Admin Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-cluster-admin-role.md
- Title: Azure Red Hat OpenShift cluster administrator role | Microsoft Docs
-description: Assignment and usage of the Azure Red Hat OpenShift cluster administrator role
----- Previously updated : 09/25/2019
-#Customer intent: As a developer, I need to understand how to administer an Azure Red Hat cluster by using the administrative role
--
-# Azure Red Hat OpenShift customer administrator role
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-You're the cluster administrator of an Azure Red Hat OpenShift cluster. Your account has increased permissions and access to all user-created projects.
-
-When your account has the customer-admin-cluster authorization role bound to it, it can automatically manage a project.
-
-> [!Note]
-> The customer-admin-cluster cluster role is not the same as the cluster-admin cluster role.
-
-For example, you can execute actions associated with a set of verbs (`create`) to operate on a set of resource names (`templates`). To view the details of these roles and their sets of verbs and resources, run the following command:
-
-`$ oc get clusterroles customer-admin-cluster -o yaml`
-
-The verb names don't necessarily all map directly to `oc` commands. They equate more generally to the types of CLI operations that you can perform.
-
-For example, having the `list` verb means that you can display a list of all objects of a resource name (`oc get`). The `get` verb means that you can display the details of a specific object if you know its name (`oc describe`).
-
-## Configure the customer administrator role
-
-You can configure the customer-admin-cluster cluster role only during cluster creation by providing the flag `--customer-admin-group-id`. This field is not currently configurable in the Azure portal. To learn how to configure Azure Active Directory and the Administrators group, see [Azure Active Directory integration for Azure Red Hat OpenShift](howto-aad-app-configuration.md).
-
-## Confirm membership in the customer administrator role
-
-To confirm your membership in the customer admin group, try the OpenShift CLI commands `oc get nodes` or `oc projects`. `oc get nodes` will show a list of nodes if you have the customer-admin-cluster role, and a permission error if you only have the customer-admin-project role. `oc projects` will show all projects in the cluster as opposed to just the projects you are working in.
-
-To further explore roles and permissions in your cluster, you can use the [`oc policy who-can <verb> <resource>`](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_rbac.html#managing-role-bindings) command.
-
-## Next steps
-
-Configure the customer-admin-cluster cluster role:
-> [!div class="nextstepaction"]
-> [Azure Active Directory integration for Azure Red Hat OpenShift](howto-aad-app-configuration.md)
openshift Cluster Administration Security Context Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-security-context-constraints.md
- Title: Manage security context constraints in Azure Red Hat OpenShift | Microsoft Docs
-description: Security context constraints for Azure Red Hat OpenShift cluster administrators
----- Previously updated : 09/25/2019
-#Customer intent: As a developer, I need to understand how to manage security context constraints.
-
-# Manage security context constraints in Azure Red Hat OpenShift
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-Security context constraints (SCCs) allow cluster administrators to control permissions for pods. To learn more about this API type, see the [architecture documentation for SCCs](https://docs.openshift.com/container-platform/3.11/architecture/additional_concepts/authorization.html). You can manage SCCs in your instance as normal API objects by using the CLI.
-
-## List security context constraints
-
-To get a current list of SCCs, use this command:
-
-```bash
-$ oc get scc
-
-NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
-hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim secret]
-hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim secret]
-hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
-nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
-privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
-restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
-```
-
-## Examine an object for security context constraints
-
-To examine a particular SCC, use `oc get`, `oc describe`, or `oc edit`. For example, to examine the **restricted** SCC, use this command:
-```bash
-$ oc describe scc restricted
-Name: restricted
-Priority: <none>
-Access:
- Users: <none>
- Groups: system:authenticated
-Settings:
- Allow Privileged: false
- Default Add Capabilities: <none>
- Required Drop Capabilities: KILL,MKNOD,SYS_CHROOT,SETUID,SETGID
- Allowed Capabilities: <none>
- Allowed Seccomp Profiles: <none>
- Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret
- Allow Host Network: false
- Allow Host Ports: false
- Allow Host PID: false
- Allow Host IPC: false
- Read Only Root Filesystem: false
- Run As User Strategy: MustRunAsRange
- UID: <none>
- UID Range Min: <none>
- UID Range Max: <none>
- SELinux Context Strategy: MustRunAs
- User: <none>
- Role: <none>
- Type: <none>
- Level: <none>
- FSGroup Strategy: MustRunAs
- Ranges: <none>
- Supplemental Groups Strategy: RunAsAny
- Ranges: <none>
-```
-## Next steps
-> [!div class="nextstepaction"]
-> [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md)
openshift Howto Aad App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-aad-app-configuration.md
- Title: Azure Active Directory integration for Azure Red Hat OpenShift
-description: Learn how to create an Azure AD security group and user for testing apps on your Microsoft Azure Red Hat OpenShift cluster.
---- Previously updated : 05/13/2019--
-# Azure Active Directory integration for Azure Red Hat OpenShift
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-If you haven't already created an Azure Active Directory (Azure AD) tenant, follow the directions in [Create an Azure AD tenant for Azure Red Hat OpenShift](howto-create-tenant.md) before continuing with these instructions.
-
-Microsoft Azure Red Hat OpenShift needs permissions to perform tasks on behalf of your cluster. If your organization doesn't already have an Azure AD user, Azure AD security group, or an Azure AD app registration to use as the service principal, follow these instructions to create them.
-
-## Create a new Azure Active Directory user
-
-In the [Azure portal](https://portal.azure.com), ensure that your tenant appears under your user name in the top right of the portal:
-
-![Screenshot of portal with tenant listed in top right](./media/howto-create-tenant/tenant-callout.png)
-If the wrong tenant is displayed, click your user name in the top right, then click **Switch Directory**, and select the correct tenant from the **All Directories** list.
-
-Create a new Azure Active Directory 'Owner' user to sign in to your Azure Red Hat OpenShift cluster.
-
-1. Go to the [Users-All users](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers) blade.
-2. Click **+New user** to open the **User** pane.
-3. Enter a **Name** for this user.
-4. Create a **User name** based on the name of the tenant you created, with `.onmicrosoft.com` appended at the end. For example, `yourUserName@yourTenantName.onmicrosoft.com`. Write down this user name. You'll need it to sign in to your cluster.
-5. Click **Directory role** to open the directory role pane, and select **Owner** and then click **Ok** at the bottom of the pane.
-6. In the **User** pane, click **Show Password** and record the temporary password. After you sign in the first time, you'll be prompted to reset it.
-7. At the bottom of the pane, click **Create** to create the user.
-
-## Create an Azure AD security group
-
-To grant cluster admin access, the memberships in an Azure AD security group are synced into the OpenShift group "osa-customer-admins". If not specified, no cluster admin access will be granted.
-
-1. Open the [Azure Active Directory groups](https://portal.azure.com/#blade/Microsoft_AAD_IAM/GroupsManagementMenuBlade/AllGroups) blade.
-2. Click **+New Group**.
-3. Provide a group name and description.
-4. Set **Group type** to **Security**.
-5. Set **Membership type** to **Assigned**.
-
- Add the Azure AD user that you created in the earlier step to this security group.
-
-6. Click **Members** to open the **Select members** pane.
-7. In the members list, select the Azure AD user that you created above.
-8. At the bottom of the portal, click on **Select** and then **Create** to create the security group.
-
- Write down the Group ID value.
-
-9. When the group is created, you will see it in the list of all groups. Click on the new group.
-10. On the page that appears, copy down the **Object ID**. We will refer to this value as `GROUPID` in the [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial.
-
-> [!IMPORTANT]
-> To sync this group with the osa-customer-admins OpenShift group, create the cluster by using the Azure CLI. The Azure portal currently lacks a field to set this group.
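If you prefer scripting this step, the following is a rough sketch only; the legacy `az openshift create` command and its `--customer-admin-group-id` flag are assumptions here and should be verified against your CLI version before use.

```bash
# Assumed example: pass the security group's Object ID (GROUPID) at creation
# time so its members are synced into the osa-customer-admins OpenShift group.
GROUPID=<object-id-of-the-security-group>

az openshift create \
  --resource-group myResourceGroup \
  --name myCluster \
  --customer-admin-group-id $GROUPID
```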
-
-## Create an Azure AD app registration
-
-If your organization doesn't already have an Azure Active Directory (Azure AD) app registration to use as a service principal, follow these instructions to create one.
-
-1. Open the [App registrations blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredAppsPreview) and click **+New registration**.
-2. In the **Register an application** pane, enter a name for your application registration.
-3. Ensure that under **Supported account types**, **Accounts in this organizational directory only** is selected. This is the most secure choice.
-4. We will add a redirect URI later once we know the URI of the cluster. Click the **Register** button to create the Azure AD application registration.
-5. On the page that appears, copy down the **Application (client) ID**. We will refer to this value as `APPID` in the [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial.
-
-![Screenshot of app object page](./media/howto-create-tenant/get-app-id.png)
-
-### Create a client secret
-
-Generate a client secret for authenticating your app to Azure Active Directory.
-
-1. In the **Manage** section of the app registrations page, click **Certificates & secrets**.
-2. On the **Certificates & secrets** pane, click **+New client secret**. The **Add a client secret** pane appears.
-3. Provide a **Description**.
-4. Set **Expires** to the duration you prefer, for example **In 2 Years**.
-5. Click **Add** and the key value will appear in the **Client secrets** section of the page.
-6. Copy down the key value. We will refer to this value as `SECRET` in the [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial.
-
-![Screenshot of the certificates and secrets pane](./media/howto-create-tenant/create-key.png)
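The portal steps above can also be approximated from the CLI. The commands below are a hedged sketch, not part of this article's procedure; the registration name `myAroAppRegistration` is a placeholder, and the output fields should be verified against your Azure CLI version.

```bash
# Create the app registration and capture its application (client) ID (APPID).
APPID=$(az ad app create --display-name myAroAppRegistration --query appId -o tsv)

# Generate a client secret; the returned password is the SECRET value used in
# the cluster creation tutorial.
SECRET=$(az ad app credential reset --id $APPID --query password -o tsv)

echo "APPID=$APPID"
```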
-
-For more information about Azure Application Objects, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
-
-For details on creating a new Azure AD application, see [Register an app with the Azure Active Directory v1.0 endpoint](../active-directory/develop/quickstart-register-app.md).
-
-## Add API permissions
-
-[//]: # (Do not change to Microsoft Graph. It does not work with Microsoft Graph.)
-1. In the **Manage** section, click **API permissions**.
-2. Click **Add permission**, select **Azure Active Directory Graph**, and then select **Delegated permissions**.
-> [!NOTE]
-> Make sure you selected the "Azure Active Directory Graph" and not the "Microsoft Graph" tile.
-
-3. Expand **User** on the list below and enable the **User.Read** permission. If **User.Read** is enabled by default, ensure that it is the **Azure Active Directory Graph** permission **User.Read**.
-4. Scroll up and select **Application permissions**.
-5. Expand **Directory** on the list below and enable **Directory.ReadAll**.
-6. Click **Add permissions** to accept the changes.
-7. The API permissions panel should now show both *User.Read* and *Directory.ReadAll*. Note the warning in the **Admin consent required** column next to *Directory.ReadAll*.
-8. If you are the *Azure Subscription Administrator*, click **Grant admin consent for *Subscription Name*** below. If you are not the *Azure Subscription Administrator*, request the consent from your administrator.
-
-![Screenshot of the API permissions panel. User.Read and Directory.ReadAll permissions added, admin consent required for Directory.ReadAll](./media/howto-aad-app-configuration/permissions-required.png)
-
-> [!IMPORTANT]
-> Synchronization of the cluster administrators group will work only after consent has been granted. You will see a green circle with a checkmark and a message "Granted for *Subscription Name*" in the *Admin consent required* column.
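The same permissions can also be added and consented to from the CLI. The snippet below is a sketch only: the permission GUIDs are the commonly published identifiers for the Azure Active Directory Graph API (not Microsoft Graph) and should be treated as assumptions to verify before use.

```bash
# Azure Active Directory Graph resource application ID (well-known value).
AAD_GRAPH=00000002-0000-0000-c000-000000000000

# Add the delegated User.Read scope and the Directory.ReadAll application role.
az ad app permission add --id $APPID --api $AAD_GRAPH \
  --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
                    5778995a-e1bf-45b8-affa-663a9f3f4d04=Role

# Grant admin consent (requires an administrator with sufficient rights).
az ad app permission admin-consent --id $APPID
```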
-
-For details on managing administrators and other roles, see [Add or change Azure subscription administrators](../cost-management-billing/manage/add-change-subscription-administrator.md).
-
-## Resources
-
-* [Applications and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md)
-* [Quickstart: Register an app with the Azure Active Directory v1.0 endpoint](../active-directory/develop/quickstart-register-app.md)
-
-## Next steps
-
-If you've met all the [Azure Red Hat OpenShift prerequisites](howto-setup-environment.md), you're ready to create your first cluster!
-
-Try the tutorial:
-> [!div class="nextstepaction"]
-> [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md)
openshift Howto Create A Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-restore.md
In this article, you restored an application in an Azure Red Hat OpenShift 4 cluster.
Advance to the next article to learn about Azure Red Hat OpenShift 4 supported resources.
-* [Azure Red Hat OpenShift v4 supported resources](supported-resources.md)
+* [Azure Red Hat OpenShift v4 supported resources](support-policies-v4.md#supported-virtual-machine-sizes)
openshift Howto Create Private Cluster 3X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-3x.md
- Title: Create a private cluster with Azure Red Hat OpenShift 3.11
-description: Learn how to create a private cluster with Azure Red Hat OpenShift 3.11 and about the benefits of private clusters.
- Previously updated : 06/02/2022
-keywords: aro, openshift, private cluster, red hat
-#Customer intent: As a customer, I want to create a private cluster on ARO OpenShift.
--
-# Create a private cluster with Azure Red Hat OpenShift 3.11
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [contact us](mailto:arofeedback@microsoft.com).
-
-Private clusters provide the following benefits:
-
-* Private clusters don't expose cluster control plane components, such as the API servers, on a public IP address.
-* The virtual network of a private cluster is configurable by customers. You can set up networking to allow peering with other virtual networks, including ExpressRoute environments. You can also configure custom DNS on the virtual network to integrate with internal services.
-
-## Before you begin
-
-The fields in the following configuration snippet are new and must be included in your cluster configuration. `managementSubnetCidr` must be within the cluster virtual network and is used by Azure to manage the cluster.
-
-```yaml
-properties:
- networkProfile:
- managementSubnetCidr: 10.0.1.0/24
- masterPoolProfile:
- apiProperties:
- privateApiServer: true
-```
-
-A private cluster can be deployed using the sample scripts provided below. Once the cluster is deployed, run the `cluster get` command and view the `properties.FQDN` property to determine the private IP address of the OpenShift API server.
-
-The cluster virtual network is created with permissions so that you can modify it. You can set up networking to access the virtual network, such as ExpressRoute, VPN, and virtual network peering.
-
-If you change the DNS nameservers on the cluster virtual network, issue an update on the cluster with the `properties.RefreshCluster` property set to `true` so that the virtual machines can be reimaged. This update allows them to pick up the new nameservers.
-
-## Sample configuration scripts
-
-Use the sample scripts in this section to set up and deploy your private cluster.
-
-### Environment
-
-Fill in the environment variables below using your own values.
-
-> [!NOTE]
-> The location must be set to `eastus2` because this is currently the only supported location for private clusters.
-
-``` bash
-export CLUSTER_NAME=
-export LOCATION=eastus2
-export TOKEN=$(az account get-access-token --query 'accessToken' -o tsv)
-export SUBID=
-export TENANT_ID=
-export ADMIN_GROUP=
-export CLIENT_ID=
-export SECRET=
-```
-
-### private-cluster.json
-
-This sample is a cluster configuration with private cluster enabled. It uses the environment variables defined above.
-
-```json
-{
- "location": "$LOCATION",
- "name": "$CLUSTER_NAME",
- "properties": {
- "openShiftVersion": "v3.11",
- "networkProfile": {
- "vnetCIDR": "10.0.0.0/8",
- "managementSubnetCIDR" : "10.0.1.0/24"
- },
- "authProfile": {
- "identityProviders": [
- {
- "name": "Azure AD",
- "provider": {
- "kind": "AADIdentityProvider",
- "clientId": "$CLIENT_ID",
- "secret": "$SECRET",
- "tenantId": "$TENANT_ID",
- "customerAdminGroupID": "$ADMIN_GROUP"
- }
- }
- ]
- },
- "masterPoolProfile": {
- "name": "master",
- "count": 3,
- "vmSize": "Standard_D4s_v3",
- "osType": "Linux",
- "subnetCIDR": "10.0.0.0/24",
- "apiProperties": {
- "privateApiServer": true
- }
- },
- "agentPoolProfiles": [
- {
- "role": "compute",
- "name": "compute",
- "count": 1,
- "vmSize": "Standard_D4s_v3",
- "osType": "Linux",
- "subnetCIDR": "10.0.0.0/24"
- },
- {
- "role": "infra",
- "name": "infra",
- "count": 3,
- "vmSize": "Standard_D4s_v3",
- "osType": "Linux",
- "subnetCIDR": "10.0.0.0/24"
- }
- ],
- "routerProfiles": [
- {
- "name": "default"
- }
- ]
- }
-}
-```
-
-## Deploy a private cluster
-
-After configuring your private cluster with the sample scripts above, run the following command to deploy your private cluster.
-
-``` bash
-az group create --name $CLUSTER_NAME --location $LOCATION
-cat private-cluster.json | envsubst | curl -v -X PUT \
--H 'Content-Type: application/json; charset=utf-8' \
--H 'Authorization: Bearer '$TOKEN'' -d @- \
- https://management.azure.com/subscriptions/$SUBID/resourceGroups/$CLUSTER_NAME/providers/Microsoft.ContainerService/openShiftManagedClusters/$CLUSTER_NAME?api-version=2019-10-27-preview
-```
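Once the PUT returns, one way to read the cluster back and find the private API server FQDN mentioned earlier is sketched below; it assumes the same preview API version and that `jq` is installed, and the exact casing of the FQDN property may vary.

```bash
# Read the cluster resource and print the API server FQDN.
curl -s -H 'Authorization: Bearer '$TOKEN'' \
  "https://management.azure.com/subscriptions/$SUBID/resourceGroups/$CLUSTER_NAME/providers/Microsoft.ContainerService/openShiftManagedClusters/$CLUSTER_NAME?api-version=2019-10-27-preview" \
  | jq -r '.properties.fqdn'
```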
-
-## Next steps
-
-To learn about how to access the OpenShift console, see [Web Console Walkthrough](https://docs.openshift.com/container-platform/3.11/getting_started/developers_console.html).
openshift Howto Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-tenant.md
- Title: Create an Azure AD tenant for Azure Red Hat OpenShift
-description: Here's how to create an Azure Active Directory (Azure AD) tenant to host your Microsoft Azure Red Hat OpenShift cluster.
- Previously updated : 05/13/2019
-# Create an Azure AD tenant for Azure Red Hat OpenShift
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-Microsoft Azure Red Hat OpenShift requires an [Azure Active Directory (Azure AD)](../active-directory/develop/quickstart-create-new-tenant.md) tenant in which to create your cluster. A *tenant* is a dedicated instance of Azure AD that an organization or app developer receives when they create a relationship with Microsoft by signing up for Azure, Microsoft Intune, or Microsoft 365. Each Azure AD tenant is distinct and separate from other Azure AD tenants and has its own work and school identities and app registrations.
-
-If you don't already have an Azure AD tenant, follow these instructions to create one.
-
-## Create a new Azure AD tenant
-
-To create a tenant:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) using the account you wish to associate with your Azure Red Hat OpenShift cluster.
-2. Open the [Azure Active Directory blade](https://portal.azure.com/#create/Microsoft.AzureActiveDirectory) to create a new tenant (also known as a new *Azure Active Directory*).
-3. Provide an **Organization name**.
-4. Provide an **Initial domain name**. This will have *onmicrosoft.com* appended to it. You can reuse the value for *Organization name* here.
-5. Choose a country or region where the tenant will be created.
-6. Click **Create**.
-7. After your Azure AD tenant is created, select the **Click here to manage your new directory** link. Your new tenant name should be displayed in the upper-right of the Azure portal:
-
- ![Screenshot of the portal showing the tenant name in the upper-right][tenantcallout]
-
-8. Make note of the *tenant ID* so you can later specify where to create your Azure Red Hat OpenShift cluster. In the portal, you should now see the Azure Active Directory overview blade for your new tenant. Select **Properties** and copy the value for your **Directory ID**. We will refer to this value as `TENANT` in the [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial.
-
-[tenantcallout]: ./media/howto-create-tenant/tenant-callout.png
-
-## Resources
-
-Check out [Azure Active Directory documentation](../active-directory/index.yml) for more info on [Azure AD tenants](../active-directory/develop/quickstart-create-new-tenant.md).
-
-## Next steps
-
-Learn how to create a service principal, generate a client secret and authentication callback URL, and create a new Active Directory user for testing apps on your Azure Red Hat OpenShift cluster.
-
-[Create an Azure AD app object and user](howto-aad-app-configuration.md)
openshift Howto Deploy Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-prometheus.md
- Title: Deploy Prometheus instance in Azure Red Hat OpenShift cluster
-description: Create a Prometheus instance in an Azure Red Hat OpenShift cluster to monitor your application's metrics.
- Previously updated : 06/17/2019
-keywords: prometheus, aro, openshift, metrics, red hat
--
-# Deploy a standalone Prometheus instance in an Azure Red Hat OpenShift cluster
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-This article describes how to configure a standalone Prometheus instance that uses service discovery in an Azure Red Hat OpenShift cluster.
-
-> [!NOTE]
-> Customer admin access to Azure Red Hat OpenShift cluster isn't required.
-
-Target setup:
-
-- One project (prometheus-project), which contains Prometheus and Alertmanager.
-- Two projects (app-project1 and app-project2), which contain the applications to monitor.
-
-You'll prepare some Prometheus config files locally. Create a new folder to store them. Config files are stored in the cluster as secrets, in case secret tokens are added later to the cluster.
-
-## Sign in to the cluster by using the OC tool
-
-1. Open a web browser, and then go to the web console of your cluster (https://openshift.*random-id*.*region*.azmosa.io).
-2. Sign in with your Azure credentials.
-3. Select your username in the upper-right corner, and then select **Copy Login Command**.
-4. Paste the copied login command into the terminal that you'll use.
-
-> [!NOTE]
-> To see if you're signed in to the correct cluster, run the `oc whoami -c` command.
-
-## Prepare the projects
-
-To create the projects, run the following commands:
-```
-oc new-project prometheus-project
-oc new-project app-project1
-oc new-project app-project2
-```
--
-> [!NOTE]
-> You can either use the `-n` or `--namespace` parameter, or select an active project by running the `oc project` command.
-
-## Prepare the Prometheus configuration file
-Create a prometheus.yml file by entering the following content:
-```
-global:
- scrape_interval: 30s
- evaluation_interval: 5s
-
-scrape_configs:
- - job_name: prom-sd
- scrape_interval: 30s
- scrape_timeout: 10s
- metrics_path: /metrics
- scheme: http
- kubernetes_sd_configs:
- - api_server: null
- role: endpoints
- namespaces:
- names:
- - prometheus-project
- - app-project1
- - app-project2
-```
-Create a secret called prom by running the following command:
-```
-oc create secret generic prom --from-file=prometheus.yml -n prometheus-project
-```
-
-The prometheus.yml file is a basic Prometheus configuration file. It sets the intervals and configures auto discovery in three projects (prometheus-project, app-project1, app-project2). In the previous configuration file, the auto-discovered endpoints are scraped over HTTP without authentication.
-
-For more information about scraping endpoints, see [Prometheus scrape config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).
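If you later edit prometheus.yml, for example to add authentication for the scraped endpoints, a possible way to update the stored secret in place (a sketch using standard `oc` behavior) is:

```bash
# Regenerate the secret manifest from the edited file and replace it.
oc create secret generic prom --from-file=prometheus.yml \
  -n prometheus-project --dry-run -o yaml | oc replace -n prometheus-project -f -
```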
--
-## Prepare the Alertmanager config file
-Create an alertmanager.yml file by entering the following content:
-```
-global:
- resolve_timeout: 5m
-route:
- group_wait: 30s
- group_interval: 5m
- repeat_interval: 12h
- receiver: default
- routes:
- - match:
- alertname: DeadMansSwitch
- repeat_interval: 5m
- receiver: deadmansswitch
-receivers:
-- name: default
-- name: deadmansswitch
-```
-Create a secret called prom-alerts by running the following command:
-```
-oc create secret generic prom-alerts --from-file=alertmanager.yml -n prometheus-project
-```
-
-The alertmanager.yml file is the Alertmanager configuration file.
-
-> [!NOTE]
-> To verify the two previous steps, run the `oc get secret -n prometheus-project` command.
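To double-check exactly what was stored, the secret data can be decoded back out; this is a sketch that assumes the file key is named prometheus.yml.

```bash
# Print the Prometheus configuration stored in the secret.
oc get secret prom -n prometheus-project \
  -o jsonpath='{.data.prometheus\.yml}' | base64 -d
```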
-
-## Start Prometheus and Alertmanager
-Go to [openshift/origin repository](https://github.com/openshift/origin/tree/release-3.11/examples/prometheus) and download the [prometheus-standalone.yaml](
-https://raw.githubusercontent.com/openshift/origin/release-3.11/examples/prometheus/prometheus-standalone.yaml) template. Apply the template to prometheus-project by entering the following configuration:
-```
-oc process -f https://raw.githubusercontent.com/openshift/origin/release-3.11/examples/prometheus/prometheus-standalone.yaml | oc apply -f - -n prometheus-project
-```
-The prometheus-standalone.yaml file is an OpenShift template. It will create a Prometheus instance with oauth-proxy in front of it and an Alertmanager instance, also secured with oauth-proxy. In this template, oauth-proxy is configured to allow any user who can "get" the prometheus-project namespace (see the `-openshift-sar` flag).
-
-> [!NOTE]
-> To verify that the prom StatefulSet has an equal number of DESIRED and CURRENT replicas, run the `oc get statefulset -n prometheus-project` command. To check all resources in the project, run the `oc get all -n prometheus-project` command.
-
-## Add permissions to allow service discovery
-
-Create a prometheus-sdrole.yml file by entering the following content:
-```
-apiVersion: template.openshift.io/v1
-kind: Template
-metadata:
- name: prometheus-sdrole
- annotations:
- "openshift.io/display-name": Prometheus service discovery role
- description: |
- Role and rolebinding added permissions required for service discovery in a given project.
- iconClass: fa fa-cogs
- tags: "monitoring,prometheus,alertmanager,time-series"
-parameters:
-- description: The project name, where a standalone Prometheus is deployed
- name: PROMETHEUS_PROJECT
- value: prometheus-project
-objects:
-- apiVersion: rbac.authorization.k8s.io/v1
- kind: Role
- metadata:
- name: prometheus-sd
- rules:
- - apiGroups:
- - ""
- resources:
- - services
- - endpoints
- - pods
- verbs:
- - list
- - get
- - watch
-- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
- metadata:
- name: prometheus-sd
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: prometheus-sd
- subjects:
- - kind: ServiceAccount
- name: prom
- namespace: ${PROMETHEUS_PROJECT}
-```
-To apply the template to all projects from which you want to allow service discovery, run the following commands:
-```
-oc process -f prometheus-sdrole.yml | oc apply -f - -n app-project1
-oc process -f prometheus-sdrole.yml | oc apply -f - -n app-project2
-oc process -f prometheus-sdrole.yml | oc apply -f - -n prometheus-project
-```
-
-> [!NOTE]
-> To verify that Role and RoleBinding were created correctly, run the `oc get role` and `oc get rolebinding` commands.
-
-## Optional: Deploy example application
-
-Everything is working, but there are no metrics sources. Go to the Prometheus URL (https://prom-prometheus-project.apps.*random-id*.*region*.azmosa.io/). You can find it by using the following command:
-
-```
-oc get route prom -n prometheus-project
-```
-> [!IMPORTANT]
-> Remember to add the https:// prefix to the beginning of the host name.
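A small sketch that builds the full URL from the route host, so the https:// prefix isn't forgotten:

```bash
# Construct the Prometheus console URL from the route's host name.
echo "https://$(oc get route prom -n prometheus-project -o jsonpath='{.spec.host}')"
```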
-
-The **Status > Service Discovery** page will show 0/0 active targets.
-
-To deploy an example application, which exposes basic Python metrics under the /metrics endpoint, run the following commands:
-```
-oc new-app python:3.6~https://github.com/Makdaam/prometheus-example --name=example1 -n app-project1
-
-oc new-app python:3.6~https://github.com/Makdaam/prometheus-example --name=example2 -n app-project2
-```
-The new applications should appear as valid targets on the Service Discovery page within 30 seconds after deployment.
-
-For more details, select **Status** > **Targets**.
-
-> [!NOTE]
-> For every successfully scraped target, Prometheus adds a data point in the up metric. Select **Prometheus** in the upper-left corner, enter **up** as the expression, and then select **Execute**.
-
-## Next steps
-
-You can add custom Prometheus instrumentation to your applications. The Prometheus Client library, which simplifies Prometheus metrics preparation, is ready for different programming languages.
-
-For more information, see the following GitHub libraries:
-
openshift Howto Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-manage-projects.md
- Title: Manage resources in Azure Red Hat OpenShift | Microsoft Docs
-description: Manage projects, templates, image-streams in an Azure Red Hat OpenShift cluster
-
-keywords: red hat openshift projects requests self-provisioner
- Previously updated : 07/19/2019
-#Customer intent: As a developer, I need to understand how to manage Openshift projects and development resources
--
-# Manage projects, templates, image-streams in an Azure Red Hat OpenShift cluster
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-In an OpenShift Container Platform, projects are used to group and isolate related objects. As an administrator, you can give developers access to specific projects, allow them to create their own projects, and grant them administrative rights to individual projects.
-
-## Self-provisioning projects
-
-You can enable developers to create their own projects. An API endpoint is responsible for provisioning a project according to a template named project-request. The web console and the `oc new-project` command use this endpoint when a developer creates a new project.
-
-When a project request is submitted, the API substitutes the following parameters in the template:
-
-| Parameter | Description |
-| -- | - |
-| PROJECT_NAME | The name of the project. Required. |
-| PROJECT_DISPLAYNAME | The display name of the project. May be empty. |
-| PROJECT_DESCRIPTION | The description of the project. May be empty. |
-| PROJECT_ADMIN_USER | The username of the administrating user. |
-| PROJECT_REQUESTING_USER | The username of the requesting user. |
-
-Access to the API is granted to developers with the self-provisioners cluster role binding. This feature is available to all authenticated developers by default.
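Before modifying the template in the next section, it can help to inspect the current default. The commands below are a sketch using standard `oc` tooling; the output file names are placeholders.

```bash
# Dump the template currently used for new projects, and generate a fresh
# bootstrap copy that can serve as a starting point for customization.
oc get template project-request -n openshift -o yaml > project-request-current.yaml
oc adm create-bootstrap-project-template -o yaml > project-request-bootstrap.yaml
```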
-
-## Modify the template for a new project
-
-1. Log in as a user with `customer-admin` privileges.
-
-2. Edit the default project-request template.
-
- ```
- oc edit template project-request -n openshift
- ```
-
-3. Remove the default project template from the Azure Red Hat OpenShift (ARO) update process by adding the following annotation:
- `openshift.io/reconcile-protect: "true"`
-
- ```
- ...
- metadata:
- annotations:
- openshift.io/reconcile-protect: "true"
- ...
- ```
-
- The project-request template will not be updated by the ARO update process. This enables customers to customize the template and preserve these customizations when the cluster is updated.
-
-## Disable the self-provisioning role
-
-You can prevent an authenticated user group from self-provisioning new projects.
-
-1. Log in as a user with `customer-admin` privileges.
-
-2. Edit the self-provisioners cluster role binding.
-
- ```
- oc edit clusterrolebinding.rbac.authorization.k8s.io self-provisioners
- ```
-
-3. Remove the role from the ARO update process by adding the following annotation: `openshift.io/reconcile-protect: "true"`.
-
- ```
- ...
- metadata:
- annotations:
- openshift.io/reconcile-protect: "true"
- ...
- ```
-
-4. Change the cluster role binding to prevent `system:authenticated:oauth` from creating projects:
-
- ```
- apiVersion: rbac.authorization.k8s.io/v1
- groupNames:
- - osa-customer-admins
- kind: ClusterRoleBinding
- metadata:
- annotations:
- openshift.io/reconcile-protect: "true"
- labels:
- azure.openshift.io/owned-by-sync-pod: "true"
- name: self-provisioners
- roleRef:
- name: self-provisioner
- subjects:
- - kind: SystemGroup
- name: osa-customer-admins
- ```
-
-## Manage default templates and imageStreams
-
-In Azure Red Hat OpenShift, you can disable updates for any default templates and image streams inside `openshift` namespace.
-To disable updates for ALL `Templates` and `ImageStreams` in `openshift` namespace:
-
-1. Log in as a user with `customer-admin` privileges.
-
-2. Edit `openshift` namespace:
-
- ```
- oc edit namespace openshift
- ```
-
-3. Remove `openshift` namespace from the ARO update process by adding the following annotation:
-`openshift.io/reconcile-protect: "true"`
-
- ```
- ...
- metadata:
- annotations:
- openshift.io/reconcile-protect: "true"
- ...
- ```
-
- Any individual object in the `openshift` namespace can be removed from the update process by adding annotation `openshift.io/reconcile-protect: "true"` to it.
-
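For example, protecting a single object could look like the following sketch; `my-custom-template` is a hypothetical template name used only for illustration.

```bash
# Annotate one template in the openshift namespace so ARO updates skip it.
oc annotate template my-custom-template -n openshift \
  openshift.io/reconcile-protect="true"
```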
-## Next steps
-
-Try the tutorial:
-> [!div class="nextstepaction"]
-> [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md)
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
The following FQDNs are proxied through the service, and will not need additiona
## List of optional FQDNs
-### INSTALLING AND DOWNLOADING PACKAGES AND TOOLS
+### ADDITIONAL CONTAINER IMAGES
- **`registry.redhat.io`**: Used to provide images for things such as Operator Hub.
+- **`*.quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images. If your firewall cannot use wildcards, you can find the [full list of subdomains in the Red Hat documentation.](https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html)
In OpenShift Container Platform, customers can opt out of reporting health and u
### OTHER POSSIBLE OPENSHIFT REQUIREMENTS
-- **`*.quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images. If your firewall cannot use wildcards, you can find the [full list of subdomains in the Red Hat documentation.](https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html)
- **`mirror.openshift.com`**: Required to access mirrored installation content and images. This site is also a source of release image signatures.
- **`*.apps.<cluster_name>.<base_domain>`** (OR EQUIVALENT ARO URL): When allowlisting domains, this is used in your corporate network to reach applications deployed in OpenShift, or to access the OpenShift console.
- **`api.openshift.com`**: Used by the cluster for release graph parsing. https://access.redhat.com/labs/ocpupgradegraph/ can be used as an alternative.
openshift Howto Run Privileged Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-run-privileged-containers.md
- Title: Run privileged containers in an Azure Red Hat OpenShift cluster | Microsoft Docs
-description: Run privileged containers to monitor security and compliance.
- Previously updated : 12/05/2019
-keywords: aro, openshift, aquasec, twistlock, red hat
-#Customer intent: As a customer, I want to monitor security compliance of my ARO clusters.
--
-# Run privileged containers in an Azure Red Hat OpenShift cluster
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-You can't run arbitrary privileged containers on Azure Red Hat OpenShift clusters.
-Two security monitoring and compliance solutions are allowed to run on ARO clusters.
-This document describes the differences from the generic OpenShift deployment documentation of the security product vendors.
--
-Read through these instructions before following the vendor's instructions.
-Section titles in product-specific steps below refer directly to section titles in the vendors' documentation.
-
-## Before you begin
-
-The documentation of most security products assumes you have cluster-admin privileges.
-Customer admins don't have all privileges in Azure Red Hat OpenShift. Permissions required to modify cluster-wide resources are limited.
-
-First, ensure the user is logged in to the cluster as a customer admin, by running
-`oc get scc`. All users that are members of the customer admin group have permissions to view the Security Context Constraints (SCCs) on the cluster.
-
-Next, ensure that the `oc` binary version is `3.11.154`.
-```
-oc version
-oc v3.11.154
-kubernetes v1.11.0+d4cacc0
-features: Basic-Auth GSSAPI Kerberos SPNEGO
-
-Server https://openshift.aqua-test.osadev.cloud:443
-openshift v3.11.154
-kubernetes v1.11.0+d4cacc0
-```
-
-## Product-specific steps for Aqua Security
-The base instructions that are going to be modified can be found in the [Aqua Security deployment documentation](https://docs.aquasec.com/docs/openshift-red-hat). The steps here will run in conjunction to the Aqua deployment documentation.
-
-The first step is to annotate the required SCCs that will be updated. These annotations prevent the cluster's Sync Pod from reverting any changes to these SCCs.
-
-```
-oc annotate scc hostaccess openshift.io/reconcile-protect=true
-oc annotate scc privileged openshift.io/reconcile-protect=true
-```
-
-### Step 1: Prepare prerequisites
-Remember to log in to the cluster as an ARO Customer Admin instead of the cluster-admin role.
-
-Create the project and the service account.
-```
-oc new-project aqua-security
-oc create serviceaccount aqua-account -n aqua-security
-```
-
-Instead of assigning the cluster-reader role, assign the customer-admin-cluster role to the aqua-account with the following command.
-```
-oc adm policy add-cluster-role-to-user customer-admin-cluster system:serviceaccount:aqua-security:aqua-account
-oc adm policy add-scc-to-user privileged system:serviceaccount:aqua-security:aqua-account
-oc adm policy add-scc-to-user hostaccess system:serviceaccount:aqua-security:aqua-account
-```
-
-Continue following the remaining instructions in Step 1. Those instructions describe setting up the secret for the Aqua registry.
-
-### Step 2: Deploy the Aqua Server, Database, and Gateway
-Follow the steps provided in the Aqua documentation for installing the aqua-console.yaml.
-
-Modify the provided `aqua-console.yaml`. Remove the top two objects labeled, `kind: ClusterRole` and `kind: ClusterRoleBinding`. These resources won't be created as the customer admin doesn't have permission at this time to modify `ClusterRole` and `ClusterRoleBinding` objects.
-
-The second modification will be to the `kind: Route` portion of the `aqua-console.yaml`. Replace the following yaml for the `kind: Route` object in the `aqua-console.yaml` file.
-```
-apiVersion: route.openshift.io/v1
-kind: Route
-metadata:
- labels:
- app: aqua-web
- name: aqua-web
- namespace: aqua-security
-spec:
- port:
- targetPort: aqua-web
- tls:
- insecureEdgeTerminationPolicy: Redirect
- termination: edge
- to:
- kind: Service
- name: aqua-web
- weight: 100
- wildcardPolicy: None
-```
-
-Follow the remaining instructions.
-
-### Step 3: Login to the Aqua Server
-This section isn't modified in any way. Follow the Aqua documentation.
-
-Use the following command to get the Aqua Console address.
-```
-oc get route aqua-web -n aqua-security
-```
-
-### Step 4: Deploy Aqua Enforcers
-Set the following fields when deploying enforcers:
-
-| Field | Value |
-| -- | - |
-| Orchestrator | OpenShift |
-| ServiceAccount | aqua-account |
-| Project | aqua-security |
-
-## Product-specific steps for Prisma Cloud / Twistlock
-
-The base instructions we're going to modify can be found in the [Prisma Cloud deployment documentation](https://docs.paloaltonetworks.com/prisma/prisma-cloud/19-11/prisma-cloud-compute-edition-admin/install/install_openshift.html)
-
-Start by installing the `twistcli` tool as described in the "Install Prisma Cloud" and "Download the Prisma Cloud software" sections.
-
-Create a new OpenShift project
-```
-oc new-project twistlock
-```
-
-Skip the optional section "Push the Prisma Cloud images to a private registry". It won't work on Azure Red Hat OpenShift. Use the online registry instead.
-
-You can follow the official documentation while applying the corrections described below.
-Start with the "Install Console" section.
-
-### Install Console
-
-During `oc create -f twistlock_console.yaml` in Step 2, you'll get an Error when creating the namespace.
-You can safely ignore it; the namespace was created previously by the `oc new-project` command.
-
-Use `azure-disk` for storage type.
-
-### Create an external route to Console
-
-You can either follow the documentation or use the instructions below if you prefer the `oc` command.
-Copy the following Route definition to a file called twistlock_route.yaml on your computer:
-```
-apiVersion: route.openshift.io/v1
-kind: Route
-metadata:
- labels:
- name: console
- name: twistlock-console
- namespace: twistlock
-spec:
- port:
- targetPort: mgmt-http
- tls:
- insecureEdgeTerminationPolicy: Redirect
- termination: edge
- to:
- kind: Service
- name: twistlock-console
- weight: 100
- wildcardPolicy: None
-```
-then run:
-```
-oc create -f twistlock_route.yaml
-```
-
-You can get the URL assigned to Twistlock console with this command:
-`oc get route twistlock-console -n twistlock`
-
-### Configure console
-
-Follow the Twistlock documentation.
-
-### Install Defender
-
-During `oc create -f defender.yaml` in Step 2, you'll get Errors when creating the Cluster Role and Cluster Role Binding.
-You can ignore them.
-
-Defenders will be deployed only on compute nodes. You don't have to limit them with a node selector.
openshift Howto Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-setup-environment.md
- Title: Set up your Azure Red Hat OpenShift development environment
-description: Here are the prerequisites for working with Microsoft Azure Red Hat OpenShift.
-keywords: red hat openshift setup set up
- Previously updated : 11/04/2019
-#Customer intent: As a developer, I need to understand the prerequisites for working with Azure Red Hat OpenShift
--
-# Set up your Azure Red Hat OpenShift dev environment
-
-> [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
-> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
-
-To build and run Microsoft Azure Red Hat OpenShift applications, you'll need to:
-
-* Install version 2.0.65 (or higher) of the Azure CLI (or use the Azure Cloud Shell).
-* Register for the `AROGA` feature and associated resource providers.
-* Create an Azure Active Directory (Azure AD) tenant.
-* Create an Azure AD application object.
-* Create an Azure AD user.
-
-The following instructions will walk you through all of these prerequisites.
-
-## Install the Azure CLI
-
-Azure Red Hat OpenShift requires version 2.0.65 or higher of the Azure CLI. If you've already installed the Azure CLI, you can check which version you have by running:
-
-```azurecli
-az --version
-```
-
-The first line of output will have the CLI version, for example `azure-cli (2.0.65)`.
-
-Here are instructions for [installing the Azure CLI](/cli/azure/install-azure-cli) if you require a new installation or an upgrade.
-
-Alternately, you can use the [Azure Cloud Shell](../cloud-shell/overview.md). When using the Azure Cloud Shell, be sure to select the **Bash** environment if you plan to follow along with the [Create and manage an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial series.
-
-## Register providers and features
-
-The `Microsoft.ContainerService AROGA` feature, `Microsoft.Solutions`, `Microsoft.Compute`, `Microsoft.Storage`, `Microsoft.KeyVault` and `Microsoft.Network` providers must be registered to your subscription manually before deploying your first Azure Red Hat OpenShift cluster.
-
-To register these providers and features manually, use the following instructions from a Bash shell if you've installed the CLI, or from the Azure Cloud Shell (Bash) session in your Azure portal:
-
-1. If you have multiple Azure subscriptions, specify the relevant subscription ID:
-
- ```azurecli
- az account set --subscription <SUBSCRIPTION ID>
- ```
-
-1. Register the Microsoft.ContainerService AROGA feature:
-
- ```azurecli
- az feature register --namespace Microsoft.ContainerService -n AROGA
- ```
-
-1. Register the Microsoft.Storage provider:
-
- ```azurecli
- az provider register -n Microsoft.Storage --wait
- ```
-
-1. Register the Microsoft.Compute provider:
-
- ```azurecli
- az provider register -n Microsoft.Compute --wait
- ```
-
-1. Register the Microsoft.Solutions provider:
-
- ```azurecli
- az provider register -n Microsoft.Solutions --wait
- ```
-
-1. Register the Microsoft.Network provider:
-
- ```azurecli
- az provider register -n Microsoft.Network --wait
- ```
-
-1. Register the Microsoft.KeyVault provider:
-
- ```azurecli
- az provider register -n Microsoft.KeyVault --wait
- ```
-
-1. Refresh the registration of the Microsoft.ContainerService resource provider:
-
- ```azurecli
- az provider register -n Microsoft.ContainerService --wait
- ```
-
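Registration can take a few minutes to complete. As a quick check, the following sketch (assuming the feature and provider names above) prints the current registration states:

```bash
# Confirm the AROGA feature and each resource provider show as Registered.
az feature show --namespace Microsoft.ContainerService --name AROGA \
  --query properties.state -o tsv

for rp in Microsoft.Storage Microsoft.Compute Microsoft.Solutions \
          Microsoft.Network Microsoft.KeyVault Microsoft.ContainerService; do
  az provider show -n $rp --query registrationState -o tsv
done
```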
-## Create an Azure Active Directory (Azure AD) tenant
-
-The Azure Red Hat OpenShift service requires an associated Azure Active Directory (Azure AD) tenant that represents your organization and its relationship to Microsoft. Your Azure AD tenant enables you to register, build, and manage apps, as well as use other Azure services.
-
-If you don't have an Azure AD to use as the tenant for your Azure Red Hat OpenShift cluster, or you wish to create a tenant for testing, follow the instructions in [Create an Azure AD tenant for your Azure Red Hat OpenShift cluster](howto-create-tenant.md) before continuing with this guide.
-
-## Create an Azure AD user, security group and application object
-
-Azure Red Hat OpenShift requires permissions to perform tasks on your cluster, such as configuring storage. These permissions are represented through a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). You'll also want to create a new Active Directory user for testing apps running on your Azure Red Hat OpenShift cluster.
-
-Follow the instructions in [Create an Azure AD app object and user](howto-aad-app-configuration.md) to create a service principal, generate a client secret and authentication callback URL for your app, and create a new Azure AD security group and user to access the cluster.
-
-## Next steps
-
-You're now ready to use Azure Red Hat OpenShift!
-
-Try the tutorial:
-> [!div class="nextstepaction"]
-> [Create an Azure Red Hat OpenShift cluster](tutorial-create-cluster.md)
-
-[azure-cli-install]: /cli/azure/install-azure-cli
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Azure Red Hat OpenShift offers a Service Level Agreement to guarantee that the s
Learn the prerequisites for Azure Red Hat OpenShift:
> [!div class="nextstepaction"]
-> [Set up your dev environment](tutorial-create-cluster.md)
+> [Create an OpenShift Cluster](tutorial-create-cluster.md)
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/migration.md
- Title: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4
-description: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4
- Previously updated : 08/13/2020
-keywords: migration, aro, openshift, red hat
-#Customer intent: As a customer, I want to migrate from an existing Azure Red Hat OpenShift 3.11 cluster to an Azure Red Hat OpenShift 4 cluster.
--
-# Migrate from Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4
-
-Azure Red Hat OpenShift on OpenShift 4 brings Kubernetes 1.16 on Red Hat Core OS, private clusters, bring your own virtual network support, and full cluster admin role. In addition, many new features are now available such as support for the operator framework, the Operator Hub, and OpenShift Service Mesh.
-
-To successfully transition from Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4, make sure to review the [differences in storage, networking, logging, security, and monitoring](https://docs.openshift.com/container-platform/4.4/migration/migrating_3_4/planning-migration-3-to-4.html).
-
-In this article, we'll demonstrate how to migrate from an Azure Red Hat OpenShift 3.11 cluster to an Azure Red Hat 4 cluster.
-
-> [!NOTE]
-> Red Hat OpenShift migration tools such as the Control Plane Migration Assistance Tool and the Cluster Application Migration Tool (CAM) cannot be used with Azure Red Hat OpenShift 3.11 clusters.
-
-## Before you begin
-
-This article assumes you have an existing Azure Red Hat OpenShift 3.11 cluster.
-
-## Create a target Azure Red Hat OpenShift 4 cluster
-
-First, [create the Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md) you would like to use as the target cluster. Here, we'll use the basic configuration. If you're interested in different settings, see the [Create an Azure Red Hat OpenShift 4 Cluster tutorial](tutorial-create-cluster.md).
-
-Create a virtual network with two empty subnets for the master and worker nodes.
-
-```azurecli-interactive
- az network vnet create \
- --resource-group $RESOURCEGROUP \
- --name aro-vnet \
- --address-prefixes 10.0.0.0/22
-
- az network vnet subnet create \
- --resource-group $RESOURCEGROUP \
- --vnet-name aro-vnet \
- --name master-subnet \
- --address-prefixes 10.0.0.0/23 \
- --service-endpoints Microsoft.ContainerRegistry
-
- az network vnet subnet create \
- --resource-group $RESOURCEGROUP \
- --vnet-name aro-vnet \
- --name worker-subnet \
- --address-prefixes 10.0.2.0/23 \
- --service-endpoints Microsoft.ContainerRegistry
-```
-
-Then, use the following command to create the cluster.
-
-```azurecli-interactive
-az aro create \
- --resource-group $RESOURCEGROUP \
- --name $CLUSTER \
- --vnet aro-vnet \
- --master-subnet master-subnet \
- --worker-subnet worker-subnet \
- # --domain foo.example.com # [OPTIONAL] custom domain
- # --pull-secret @pull-secret.txt # [OPTIONAL]
-```
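When creation finishes, a quick way to confirm the target cluster and find its web console URL is sketched below, assuming the `az aro show` command available in current CLI versions.

```bash
# Show the new cluster's console URL and provisioning state.
az aro show \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --query "{console: consoleProfile.url, state: provisioningState}" -o table
```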
-
-## Configure the target OpenShift 4 cluster
-
-### Authentication
-
-For users to interact with Azure Red Hat OpenShift, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the Azure Red Hat OpenShift API. The authorization layer then uses information about the requesting user to determine if the request is allowed.
-
-When an Azure Red Hat OpenShift 4 cluster is created, a temporary administrative user is created. [Connect to your cluster](tutorial-connect-cluster.md), add users and groups and [configure the appropriate permissions](https://docs.openshift.com/container-platform/4.6/authentication/understanding-authentication.html) for both.
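For example, the temporary administrative credentials can typically be retrieved with `az aro list-credentials`, shown here as a sketch:

```bash
# Retrieve the kubeadmin user name and password for the ARO 4 cluster.
az aro list-credentials \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER
```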
-
-### Networking
-
-Azure Red Hat OpenShift 4 uses a few different operators to set up the network in your cluster: [Cluster Network Operator](https://docs.openshift.com/container-platform/4.6/networking/cluster-network-operator.html#nw-cluster-network-operator_cluster-network-operator), [DNS Operator](https://docs.openshift.com/container-platform/4.6/networking/dns-operator.html), and the [Ingress Operator](https://docs.openshift.com/container-platform/4.6/networking/ingress-operator.html). For more information on setting up networking in an Azure Red Hat OpenShift 4 cluster, see the [Networking Diagram](concepts-networking.md) and [Understanding Networking](https://docs.openshift.com/container-platform/4.6/networking/understanding-networking.html).
-
-### Storage
-Azure Red Hat OpenShift 4 supports the following PersistentVolume plug-ins:
-
-- AWS Elastic Block Store (EBS)
-- Azure Disk
-- Azure File
-- GCE Persistent Disk
-- HostPath
-- iSCSI
-- Local volume
-- NFS
-- Red Hat OpenShift Container Storage
-
-For information on configuring these storage types, see [Configuring persistent storage](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/storage/configuring-persistent-storage).
-
-### Registry
-
-Azure Red Hat OpenShift 4 can build images from your source code, deploy them, and manage their lifecycle. To enable this, Azure Red Hat OpenShift 4 provides an [internal, integrated container image registry](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html) that can be deployed in your Azure Red Hat OpenShift environment to locally manage images.
-
-If you're using external registries such as [Azure Container Registry](../container-registry/index.yml), [Red Hat Quay registries](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html#registry-quay-overview_registry-options), or an [authentication enabled Red Hat registry](https://docs.openshift.com/container-platform/4.5/registry/registry-options.html#registry-authentication-enabled-registry-overview_registry-options), follow steps to supply credentials to the cluster to allow the cluster to access the repositories.
-
-### Monitoring
-
-Azure Red Hat OpenShift includes a pre-configured, pre-installed, and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and includes a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards. The cluster monitoring stack is only supported for monitoring Azure Red Hat OpenShift clusters. For more information, see [Cluster monitoring for Azure Red Hat OpenShift](https://docs.openshift.com/container-platform/4.5/monitoring/cluster_monitoring/about-cluster-monitoring.html).
-
-If you have been using [Azure Monitor for Containers for Azure Red Hat OpenShift 3.11](../azure-monitor/containers/container-insights-azure-redhat-setup.md), you can also enable Azure Monitor for Containers for [Azure Red Hat OpenShift 4 clusters](../azure-monitor/containers/container-insights-azure-redhat4-setup.md) and continue using the same Log Analytics workspace.
-
-## Move your DNS or load-balancer configuration to the new cluster
-
-If you're using Azure Traffic Manager, add endpoints to refer to your target cluster and prioritize these endpoints.
-
-## Deploy application to your target cluster
-
-Once you have your target cluster properly configured for your workload, [connect to your cluster](tutorial-connect-cluster.md) and create the necessary applications, components, or services for your projects. Azure Red Hat OpenShift enables you to create these from Git, container images, the Red Hat Developer Catalog, a Dockerfile, a YAML/JSON definition, or by selecting a database service from the Catalog.
-
-## Delete your source cluster
-Once you've confirmed that your Azure Red Hat OpenShift 4 cluster is properly set up, delete your Azure Red Hat OpenShift 3.11 cluster.
-
-```azurecli
-az aro delete --name $CLUSTER_NAME \
-  --resource-group $RESOURCE_GROUP \
-  [--no-wait] \
-  [--yes]
-```
-
-## Next steps
-Check out Red Hat OpenShift documentation [here](https://docs.openshift.com/container-platform/4.6/welcome/index.html).
openshift Supported Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/supported-resources.md
- Title: Supported resources for Azure Red Hat OpenShift 3.11
-description: Understand which Azure regions and virtual machine sizes are supported by Microsoft Azure Red Hat OpenShift.
- Previously updated : 05/15/2019
-# Azure Red Hat OpenShift resources
-
-This topic lists the Azure regions and virtual machine sizes supported by the Microsoft Azure Red Hat OpenShift 3.11 service.
-
-## Azure regions
-
-See [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift&regions=all) for a current list of regions where you can deploy Azure Red Hat OpenShift clusters.
-
-## Virtual machine sizes
-
-Here are the supported virtual machine sizes you can specify for the compute nodes in your Azure Red Hat OpenShift cluster.
-
-> [!Important]
-> Each VM has a different number of drives that can be attached. This may not be as immediately clear as memory or CPU size.
-> Not all VM sizes are available in all regions. Even if the API supports the size you specify, you might get an error if the size is not available in the region you specify.
-> See [Current list of supported VM sizes per region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) for more information.
-
-## Compute node sizes
-
-The following compute node sizes are supported by the Azure Red Hat OpenShift REST API:
-
-|Size|vCPU|RAM|
-|-|-|-|
-|Standard D4s v3|4|16 GB|
-|Standard D8s v3|8|32 GB|
-|Standard D16s v3|16|64 GB|
-|Standard D32s v3|32|128 GB|
-|-|-|-|
-|Standard E4s v3|4|32 GB|
-|Standard E8s v3|8|64 GB|
-|Standard E16s v3|16|128 GB|
-|Standard E32s v3|32|256 GB|
-|-|-|-|
-|Standard F8s v2|8|16 GB|
-|Standard F16s v2|16|32 GB|
-|Standard F32s v2|32|64 GB|
-
-## Master node sizes
-
-The following master / infrastructure node sizes are supported by the Azure Red Hat OpenShift REST API:
-
-|Size|vCPU|RAM|
-|-|-|-|
-|Standard D4s v3|4|16 GB|
-|Standard D8s v3|8|32 GB|
-|Standard D16s v3|16|64 GB|
-|Standard D32s v3|32|128 GB|
-
-## Next steps
-
-Try the [Create a Azure Red Hat OpenShift cluster](tutorial-create-cluster.md) tutorial.
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal](https://aka.ms/orbital/portal).
| Subscription | Select your subscription |
| Resource Group | Select your resource group |
| Name | Enter spacecraft name |
- | Region | Select **West US 2** |
+ | Region | Enter region, e.g. West US 2 |
| NORAD ID | Enter NORAD ID |
| TLE title line | Enter TLE title line |
| TLE line 1 | Enter TLE line 1 |
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
-# Azure Active Directory Authentication with PostgreSQL Flexible Server Preview
+# Azure Active Directory Authentication with PostgreSQL Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
-> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
The following table provides a list of high-level Azure AD features and capabili
| Disable Password Authentication | Not Available | Available |
| Service Principal can act as group member | No | Yes |
| Audit Azure AD Logins | No | Yes |
-| PG bouncer support | No | Planned for GA |
+| PG bouncer support | No | March 2023 |
## How Azure AD Works In Flexible Server
Once you've authenticated against the Active Directory, you then retrieve a toke
- Multiple Azure AD principals (a user, group, service principal or managed identity) can be configured as Azure AD Administrator for an Azure Database for PostgreSQL server at any time.
- Azure AD groups must be a mail enabled security group for authentication to work.
-- In preview, `Azure Active Directory Authentication only` is supported post server creation, this option is currently disabled during server creation experience
- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
Once you've authenticated against the Active Directory, you then retrieve a toke
- Azure Database for PostgreSQL Flexible Server matches access tokens to the database role using the user's unique Azure Active Directory user ID, as opposed to using the username. If an Azure AD user is deleted and a new user is created with the same name, Azure Database for PostgreSQL Flexible Server considers that a different user. Therefore, if a user is deleted from Azure AD and a new user is added with the same name, the new user won't be able to connect with the existing role.
+## Limitations
+
+- PG bouncer is currently not supported; support is planned to be released soon.
+
+- GA versions of Terraform/CLI/API will be released soon. You can use the preview API version 2022-12-01 until then.
+
+-
## Next steps

- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
-# Use Azure AD for authentication with Azure Database for PostgreSQL - Flexible Server (preview)
+# Use Azure AD for authentication with Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)] In this article, you'll configure Azure Active Directory (Azure AD) access for authentication with Azure Database for PostgreSQL - Flexible Server. You'll also learn how to use an Azure AD token with Azure Database for PostgreSQL - Flexible Server.
-> [!NOTE]
-> Azure Active Directory authentication for Azure Database for PostgreSQL - Flexible Server is currently in preview.
- You can configure Azure AD authentication for Azure Database for PostgreSQL - Flexible Server either during server provisioning or later. Only Azure AD administrator users can create or enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations because that role has elevated user permissions (for example, CREATEDB). You can have multiple Azure AD admin users with Azure Database for PostgreSQL - Flexible Server. Azure AD admin users can be a user, a group, or service principal.
You're now authenticated to your PostgreSQL server through Azure AD authenticati
## Next steps - Review the overall concepts for [Azure AD authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md).-- Learn how to [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
+- Learn how to [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
-# Manage Azure Active Directory roles in Azure Database for PostgreSQL - Flexible Server Preview
+# Manage Azure Active Directory roles in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
This article describes how you can create an Azure Active Directory (Azure AD) e
> This guide assumes you already enabled Azure Active Directory authentication on your PostgreSQL Flexible server. > See [How to Configure Azure AD Authentication](./how-to-configure-sign-in-azure-ad-authentication.md)
-> [!NOTE]
-> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
- If you'd like to learn how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md). ## Create or Delete Azure AD administrators using Azure portal or Azure Resource Manager (ARM) API
select * from pgaadauth_create_principal_with_oid('accounting_application', '000
## Enable Azure AD authentication for an existing PostgreSQL role using SQL
-Azure Database for PostgreSQL Flexible Servers uses Security Labels associated with database roles to store Azure AD mapping. During preview, we don't provide a function to associate existing Azure AD roles.
+Azure Database for PostgreSQL Flexible Servers uses Security Labels associated with database roles to store Azure AD mapping.
You can use the following SQL to assign security label:
private-5g-core Region Move Private Mobile Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-move-private-mobile-network-resources.md
+
+ Title: Move Azure Private 5G Core private mobile network resources between regions
+
+description: In this how-to guide, you'll learn how to move your private mobile network resources to a different region.
++++ Last updated : 01/04/2023+++
+# Move your private mobile network resources to a different region
+
+In this how-to guide, you'll learn how to move your private mobile network resources to a different region. This involves exporting your resources from the source region's resource group and recreating them in a new resource group deployed in the target region.
+
+You might move your resources to another region for a number of reasons: for example, to take advantage of a new Azure region, to create a backup of your deployment, to meet internal policy and governance requirements, or to respond to capacity planning requirements.
+
+If you also want to move your Arc-enabled Kubernetes cluster, contact your support representative.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+- Ensure Azure Private 5G Core supports the region to which you want to move your resources. <!-- Refer to [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) -->
+- Verify pricing and charges associated with the target region to which you want to move your resources.
+- Choose a name for your new resource group in the target region. It must be different from the source region's resource group name.
+
+## Back up deployment information
+
+The following list contains the data that will be lost during the region move. Back up any information you'd like to preserve; after the move, you can use this information to reconfigure your deployment.
+
+1. For security reasons, your SIM configuration won't be carried over during a region move. Refer to [Collect the required information for your SIMs](provision-sims-azure-portal.md#collect-the-required-information-for-your-sims) to back up all the information you'll need to recreate your SIMs.
+1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location.
+1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
+1. Any customizations made to the packet core dashboards won't be carried over during the region move. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backup copy of your dashboards.
+1. Most UEs will automatically re-register and recreate any sessions after the region move completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
+
+## Prepare to move your resources
+
+### Remove SIMs and custom location
+
+> [!IMPORTANT]
+> Completing this step will initiate an outage in the source region.
+>
+> If you want your source deployment to stay operational during the region move, skip this step and move to [Generate template](#generate-template). You'll need to make additional modifications to the template in [Prepare template](#prepare-template).
+
+Before moving your resources, you'll need to delete all SIMs in your deployment. You'll also need to uninstall all packet core instances you want to move by changing their **Custom ARC location** field to **None**.
+
+1. Follow [Delete SIMs](manage-existing-sims.md#delete-sims) to delete all the SIMs in your deployment.
+1. For each site that you want to move, follow [Modify the packet core instance in a site](modify-packet-core.md) to modify your packet core instance with the changes below. You can ignore the sections about attaching and modifying data networks.
+
+ 1. In *Modify the packet core configuration*, make a note of the custom location value in the **Custom ARC location** field.
+ 1. Set the **Custom ARC location** field to **None**.
+ 1. In *Submit and verify changes*, the packet core will be uninstalled.
+
+### Generate template
+
+Your mobile network resources can now be exported via an Azure Resource Manager (ARM) template.
+
+1. Navigate to the resource group containing your private mobile network resources.
+1. In the resource menu, select **Export template**.
+
+ :::image type="content" source="media/region-move/region-move-export-template.png" alt-text="Screenshot of the Azure portal showing the resource menu Export template option.":::
+
+1. Once Azure finishes generating the template, select **Download**.
+
+ :::image type="content" source="media/region-move/region-move-download-template.png" alt-text="Screenshot of the Azure portal showing the option to download a template.":::
+
+## Move resources to a new region
+
+### Prepare template
+
+You'll need to customize your template to ensure all your resources are correctly deployed to the new region.
+
+1. Open the *template.json* file you downloaded in [Generate template](#generate-template).
+1. Find every instance of the original region's code name and replace it with the code name of the target region you're moving your deployment to. This involves updating the **location** parameter for every resource. See [Region code names](region-code-names.md) for instructions on how to obtain the target region's code name. A scripted sketch of these replacements appears after this list.
+1. Find every instance of the original region's resource group name and replace it with the target region's resource group name you defined in [Prerequisites](#prerequisites).
+1. If you skipped [Remove SIMs and custom location](#remove-sims-and-custom-location) because you need your deployment to stay online in the original region, make the additional changes to the template:
+ 1. Remove all the SIM resources.
+ 1. Remove all custom location entries, including any dependencies from other resources.
+1. Remove any other resources you don't want to move to the target region.
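
If you prefer to script the region and resource group replacements above, the following minimal Python sketch rewrites *template.json* in place. The region code names and resource group names shown are placeholder assumptions; review the edited file before deploying, because a plain text replacement can also match resource names that happen to contain the same strings.

```python
import json

# Placeholder values - substitute your own source and target names.
SOURCE_REGION_CODE = "eastus"          # original region's code name (example)
TARGET_REGION_CODE = "westeurope"      # target region's code name (example)
SOURCE_RESOURCE_GROUP = "source-rg"    # original resource group name (example)
TARGET_RESOURCE_GROUP = "target-rg"    # new resource group name (example)

with open("template.json", encoding="utf-8") as template_file:
    template_text = template_file.read()

# Replace every instance of the region code name and the resource group name.
template_text = template_text.replace(SOURCE_REGION_CODE, TARGET_REGION_CODE)
template_text = template_text.replace(SOURCE_RESOURCE_GROUP, TARGET_RESOURCE_GROUP)

# Check that the edited template is still valid JSON before saving it.
json.loads(template_text)

with open("template.json", "w", encoding="utf-8") as template_file:
    template_file.write(template_text)

print("template.json updated for the target region and resource group")
```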
+
+### Deploy template
+
+1. [Create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal) in the target region. Use the resource group name you defined in [Prerequisites](#prerequisites).
+1. Deploy the *template.json* file you downloaded in [Generate template](#generate-template).
+
+ - If you want to use the Azure portal, follow the instructions to deploy resources from a custom template in [Deploy resources with ARM templates and Azure portal](/azure/azure-resource-manager/templates/deploy-portal).
+    - If you want to use the Azure CLI, navigate to the folder containing the *template.json* file and deploy it using the following command:
+
+      ```azurecli
+      az deployment group create --resource-group <new resource group name> --template-file template.json
+      ```
+
+1. In the Azure portal, navigate to the new resource group and verify that your resources have been successfully recreated.
+
+## Configure custom location
+
+You can now install your packet core instances in the new region.
+
+For each site in your deployment, follow [Modify the packet core instance in a site](modify-packet-core.md) to reconfigure your packet core custom location. In *Modify the packet core configuration*, set the **Custom ARC location** field to the custom location value you noted down in [Remove SIMs and custom location](#remove-sims-and-custom-location). You can ignore the sections about attaching and modifying data networks.
+
+## Restore backed up deployment information
+
+Configure your deployment in the new region using the information you gathered in [Back up deployment information](#back-up-deployment-information).
+
+1. Retrieve your backed up SIM information and recreate your SIMs by following one of:
+
+ - [Provision new SIMs for Azure Private 5G Core Preview - Azure portal](provision-sims-azure-portal.md)
+ - [Provision new SIMs for Azure Private 5G Core Preview - ARM template](provision-sims-arm-template.md)
+
+1. Follow [Access the distributed tracing web GUI](distributed-tracing.md#access-the-distributed-tracing-web-gui) to restore access to distributed tracing.
+1. Follow [Access the packet core dashboards](packet-core-dashboards.md#access-the-packet-core-dashboards) to restore access to your packet core dashboards.
+1. If you backed up any packet core dashboards, follow [Importing a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#importing-a-dashboard) in the Grafana documentation to restore them.
+1. If you have UEs that require manual operations to recover from a packet core outage, follow their recovery steps.
+
+## Verify
+
+Use [Azure Monitor](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to confirm your deployment is operating normally after the region move.
+
+## Next steps
+
+If you no longer require a deployment in the source region, [delete the original resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal).
+<!-- TODO: Learn more about reliability in Azure Private 5G Core. -->
private-multi-access-edge-compute-mec Partner Programs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/partner-programs.md
Our system integrators and non-operator MSP partners include:
- American Tower - Capgemini - Cognizant-- Expeto - Federated Wireless - Fujitsu - HCL
Azure Private MEC platform partners provide critical hardware and software compo
Network functions partners include software vendors that provide mobile packet core, firewalls, routers, SD-WAN, vRAN, and RAN optimization. The breadth of third-party network functions available enable customers to securely integrate the Azure private MEC solution into their existing edge and cloud environments. The Azure private MEC current network function partners include:
-|Mobile Packet Core |Firewall, Routers, & SD-WAN |RAN Partners (software) |
-|||||
-|Celona | 128 Technology | AirHop |
-|Expeto | Arista | ASOCS |
-|HSS by HPE | Fortinet | Celona |
-| Nokia Digital Automation Cloud | NetFoundry | Commscope|
+|Firewall, Routers, & SD-WAN |RAN Partners (software) |
+|||
+| 128 Technology | AirHop |
+| Arista | ASOCS |
+| Fortinet | Celona |
+| NetFoundry | Commscope|
| | Nuage Networks by Nokia | Nokia| | |Palo Alto Networks | | | |Versa Networks | |
SIM & Device partners provide wireless authentication technologies and embedded
|SIM|Devices |RAN (hardware)| |||| |Commscope | Cradlepoint by Ericsson |ASOCS |
-|G+D | Multitech |Celona |
-|Gemalto | Sierra Wireless |Commscope |
-|IDEMIA | |Nokia |
+|G+D | Multitech |Commscope |
+|Gemalto | Sierra Wireless |Nokia|
+|IDEMIA | | |
| JCI US (Contour Networks) | || ||||
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-resource-sets.md
Previously updated : 05/09/2022 Last updated : 01/23/2023 # Understanding resource sets
These properties can be found on the asset details page of the resource set.
### Turning on advanced resource sets
-Advanced resource sets is off by default in all new Microsoft Purview instances. Advanced resource sets can be enabled from **Account information** in the management hub.
+The advanced resource sets feature is off by default in all new Microsoft Purview instances. You can enable it from **Account information** in the management hub. Only users who are added to the Data Curator role at the root collection can manage advanced resource set settings.
:::image type="content" source="media/concept-resource-sets/advanced-resource-set-toggle.png" alt-text="Turn on Advanced resource set." border="true":::
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
Several Azure resources have a dependency on a subscription or a directory. Depe
| Azure Policy | Yes | No | All Azure Policy objects, including custom definitions, assignments, exemptions, and compliance data. | You must [export](../governance/policy/how-to/export-resources.md), import, and re-assign definitions. Then, create new policy assignments and any needed [policy exemptions](../governance/policy/concepts/exemption-structure.md). | | Azure Active Directory Domain Services | Yes | No | | You cannot transfer an Azure AD Domain Services managed domain to a different directory. For more information, see [Frequently asked questions (FAQs) about Azure Active Directory (AD) Domain Services](../active-directory-domain-services/faqs.yml) | | App registrations | Yes | Yes | | |
+| Microsoft Dev Box | Yes | No | | You cannot transfer a dev box and its associated resources to a different directory. Once a subscription moves to another tenant, you will not be able to perform any actions on your dev box. |
+| Azure Deployment Environments | Yes | No | | You cannot transfer an environment and its associated resources to a different directory. Once a subscription moves to another tenant, you will not be able to perform any actions on your environment. |
> [!WARNING] > If you are using encryption at rest for a resource, such as a storage account or SQL database, that has a dependency on a key vault that is being transferred, it can lead to an unrecoverable scenario. If you have this situation, you should take steps to use a different key vault or temporarily disable customer-managed keys to avoid this unrecoverable scenario.
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
- Previously updated : 12/14/2022+ Last updated : 01/20/2023
Once you create the encrypted object on the search service, you can use it as yo
<a name="encryption-enforcement-policy"></a>
-## 6 Set up policy
+## 6 - Set up policy
-Azure Cognitive Search has an optional [built-in policy](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F76a56461-9dc0-40f0-82f5-2453283afa2f) to enforce usage of CMK on individual objects defined in a search service. In this step, you'll apply this policy to your search service and set up your search service to enforce this policy.
+Azure policies help to enforce organizational standards and to assess compliance at-scale. Azure Cognitive Search has an optional [built-in policy for service-wide CMK enforcement](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F76a56461-9dc0-40f0-82f5-2453283afa2f).
+
+In this section, you'll set the policy that defines a CMK standard for your search service. Then, you'll set up your search service to enforce this policy.
> [!NOTE] > Policy set up requires the preview [Services - Create or Update API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
Azure Cognitive Search has an optional [built-in policy](https://portal.azure.co
:::image type="content" source="media/search-security-manage-encryption-keys/assign-policy.png" alt-text="Screenshot of assigning built-in CMK policy." border="true":::
-1. Set up the [policy scope](../governance/policy/concepts/scope.md). In the **Parameters** section, uncheck **Only show parameters...** and set **Effect** to **Deny**
+1. Set up the [policy scope](../governance/policy/concepts/scope.md). In the **Parameters** section, uncheck **Only show parameters...** and set **Effect** to [**Deny**](/azure/governance/policy/concepts/effects#deny).
+
+ During evaluation of the request, a request that matches a deny policy definition is marked as non-compliant. Assuming the standard for your service is CMK encryption, "deny" means that requests that *don't* specify CMK encryption are non-compliant.
:::image type="content" source="media/search-security-manage-encryption-keys/effect-deny.png" alt-text="Screenshot of changing built-in CMK policy effect to deny." border="true"::: 1. Finish creating the policy.
-1. Call the [Services - Create or Update API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to enable CMK policy enforcement.
+1. Call the [Services - Create or Update API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to enable CMK policy enforcement at the service level.
```http PATCH https://management.azure.com/subscriptions/[subscriptionId]/resourceGroups/[resourceGroupName]/providers/Microsoft.Search/searchServices/[serviceName]?api-version=2021-04-01-preview
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Continue to build your search-enabled website by following these steps:
## Create an Azure Cognitive Search resource
-Create a new search resource using PowerShell and the **Az.Search** module.
+Create a new search resource using PowerShell and the **Az.Search** module. In this section, you'll also create a query key used for read-access to the index, and get the built-in admin key used for adding objects.
1. In Visual Studio Code, open a new terminal window.
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Continue to build your search-enabled website by following these steps:
## Create an Azure Cognitive Search resource
-Create a new search resource using PowerShell and the **Az.Search** module.
+Create a new search resource using PowerShell and the **Az.Search** module. In this section, you'll also create a query key used for read-access to the index, and get the built-in admin key used for adding objects.
1. In Visual Studio Code, open a new terminal window.
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
Continue to build your search-enabled website by following these steps:
## Create an Azure Cognitive Search resource
-Create a new search resource using PowerShell and the **Az.Search** module.
+Create a new search resource using PowerShell and the **Az.Search** module. In this section, you'll also create a query key used for read-access to the index, and get the built-in admin key used for adding objects.
1. In Visual Studio Code, open a new terminal window.
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
The following tables map Common Event Format (CEF) field names to the names they use in Microsoft Sentinel's CommonSecurityLog, and may be helpful when you are working with a CEF data source in Microsoft Sentinel.
+> [!IMPORTANT]
+>
+> On **February 28th 2023**, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+ For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md). > [!NOTE]
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Microsoft Sentinel currently supports connections to GitHub and Azure DevOps rep
- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace *or* a combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection - Contributor access to your GitHub or Azure DevOps repository - Actions enabled for GitHub and Pipelines enabled for Azure DevOps
+- Third-party application access via OAuth enabled for Azure DevOps [application connection policies](/azure/devops/organizations/accounts/change-application-access-policies#manage-a-policy).
- Ensure custom content files you want to deploy to your workspaces are in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml). For more information, see [Validate your content](ci-cd-custom-content.md#validate-your-content)
For more information, see:
- [Customize repository deployments](ci-cd-custom-deploy.md) - [Discover and deploy Microsoft Sentinel solutions (Public preview)](sentinel-solutions-deploy.md)-- [Microsoft Sentinel data connectors](connect-data-sources.md)
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
This article describes how to use the **Common Event Format (CEF) via AMA** conn
The connector uses the Azure Monitor Agent (AMA), which uses Data Collection Rules (DCRs). With DCRs, you can filter the logs before they're ingested, for quicker upload, efficient analysis, and querying.
+> [!IMPORTANT]
+>
+> The CEF via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ The AMA is installed on a Linux machine that acts as a log forwarder, and the AMA collects the logs in the CEF format. - [Set up the connector](#set-up-the-common-event-format-cef-via-ama-connector) - [Learn more about the connector](#how-collection-works-with-the-common-event-format-cef-via-ama-connector) > [!IMPORTANT]
-> The CEF via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-> [!NOTE]
-> On February 28th 2023, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will require being reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+>
+> On **February 28th 2023**, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
## Overview
sentinel Connect Common Event Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-common-event-format.md
Many networking and security devices and appliances send their system logs over the Syslog protocol in a specialized format known as Common Event Format (CEF). This format includes more information than the standard Syslog format, and it presents the information in a parsed key-value arrangement. The Log Analytics Agent accepts CEF logs and formats them especially for use with Microsoft Sentinel, before forwarding them on to your Microsoft Sentinel workspace. > [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
+>
+> Upcoming changes:
+> - On **February 28th, 2023** we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232).
+> - This means that custom queries will require review and update. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+> - Data that has been streamed and ingested before the change will still be available in its former columns and formats. Old columns will therefore remain in the schema.
+> - On **31 August, 2024**, the [Log Analytics agent will be retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
This article describes the process of using CEF-formatted logs to connect your data sources. For information about data connectors that use this method, see [Microsoft Sentinel data connectors reference](data-connectors-reference.md).
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
For more information, see [Connect your external solution using Common Event For
If you've deployed your connector using a method different than the documented procedure and are having issues, we recommend that you purge the deployment and install again as documented.
+> [!IMPORTANT]
+>
+> On **February 28th 2023**, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+ ## How to use this article When information in this article is relevant only for Syslog or only for CEF connectors, we've organized the page into tabs. Make sure that you're using the instructions on the correct tab for your connector type.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
This article lists recent features added for Microsoft Sentinel, and new feature
The listed features were released in the last three months. For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New).
+See these [important announcements](#announcements) about recent changes to features and services.
+ [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## January 2023
SOC managers, automation engineers, and senior analysts can use Microsoft Sentin
- Learn how analysts can [use tasks to handle incident workflow](work-with-tasks.md). - Learn how to add tasks to groups of incidents automatically using [automation rules](create-tasks-automation-rule.md) or [playbooks](create-tasks-playbook.md). - ### Common Event Format (CEF) via AMA (Preview) The [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector allows you to quickly filter and upload logs over CEF from multiple on-premises appliances to Microsoft Sentinel via the Azure Monitor Agent (AMA).
A [new version of the Microsoft Sentinel Logstash plugin](connect-logstash-data-
- Can forward logs from external data sources into both custom tables and standard tables. - Provides performance improvements, compression, and better telemetry and error handling.
-## October 2022
+## Announcements
-- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector) - [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)-- [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview)-- [IoT device entity page (Preview)](#iot-device-entity-page-preview)
+- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector)
+- [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table)
+
+### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
+
+As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) integrates [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration:
+
+- **Show high-impact alerts only (Default)** includes only alerts about known malicious or highly suspicious activities that might require attention. These alerts are chosen by Microsoft security researchers and are mostly of Medium and High severities.
+- **Show all alerts** includes all AADIP alerts, including activity that might not be unwanted or malicious.
+- **Turn off all alerts** disables any AADIP alerts from appearing in your Microsoft 365 Defender incidents.
+
+Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 365 Defender integration](microsoft-365-defender-sentinel-integration.md) enabled now automatically receive AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
+
+- If you already have your AADIP connector enabled in Microsoft Sentinel, and you've enabled incident creation, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:
+
+ | Preference | Action in Microsoft 365 Defender | Action in Microsoft Sentinel |
+ | - | - | - |
+ | **1** | Keep the default AADIP integration of **Show high-impact alerts only**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **2** | Choose the **Show all alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+ | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose the **Turn off all alerts** option for AADIP integration. | Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
+
+- If you don't have your [AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them.
+
+- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection was made automatically behind the scenes. You won't need to do anything else.
### Account enrichment fields removed from Azure AD Identity Protection connector
In the meantime, or if you've built any custom queries or rules directly referen
| project-away AadTenantId, AadUserId, AccountTenantId, AccountObjectId ```
-For information on looking up data to replace enrichment fields removed from the UEBA UserPeerAnalytics table, See [Heads up: Name fields being removed from UEBA UserPeerAnalytics table](#heads-up-name-fields-being-removed-from-ueba-userpeeranalytics-table) for a sample query.
-
-### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
-
-As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) will be integrating [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration:
--- **Show high-impact alerts only (Default)** includes only alerts about known malicious or highly suspicious activities that might require attention. These alerts are chosen by Microsoft security researchers and are mostly of Medium and High severities.-- **Show all alerts** includes all AADIP alerts, including activity that might not be unwanted or malicious.-- **Turn off all alerts** disables any AADIP alerts from appearing in your Microsoft 365 Defender incidents.-
-Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 365 Defender integration](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
--- If you already have your AADIP connector enabled in Microsoft Sentinel, and you've enabled incident creation, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:-
- | Preference | Action in Microsoft 365 Defender | Action in Microsoft Sentinel |
- | - | - | - |
- | **1** | Keep the default AADIP integration of **Show high-impact alerts only**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **2** | Choose the **Show all alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose the **Turn off all alerts** option for AADIP integration. | Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
--- If you don't have your [AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them.--- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection will be made automatically behind the scenes. You won't need to do anything else.-
-### Out of the box anomaly detection on the SAP audit log (Preview)
-
-The Microsoft Sentinel for SAP solution now includes the [**SAP - Dynamic Anomaly Detection analytics** rule](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog), adding an out of the box capability to identify suspicious anomalies across the SAP audit log events.
-
-Learn how to [use the new rule for anomaly detection](sap/configure-audit-log-rules.md#anomaly-detection).
-
-### IoT device entity page (Preview)
-
-The new [IoT device entity page](entity-pages.md) is designed to help the SOC investigate incidents that involve IoT/OT devices in their environment, by providing the full OT/IoT context through Microsoft Defender for IoT to Sentinel. This enables SOC teams to detect and respond more quickly across all domains to the entire attack timeline.
-
-Learn more about [investigating IoT device entities in Microsoft Sentinel](iot-advanced-threat-monitoring.md).
-
-## September 2022
--- [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)-- [Add advanced "Or" conditions to automation rules (Preview)](#add-advanced-or-conditions-to-automation-rules-preview)-- [Heads up: Name fields being removed from UEBA UserPeerAnalytics table](#heads-up-name-fields-being-removed-from-ueba-userpeeranalytics-table)-- [Windows DNS Events via AMA connector (Preview)](#windows-dns-events-via-ama-connector-preview)-- [Create and delete incidents manually (Preview)](#create-and-delete-incidents-manually-preview)-- [Add entities to threat intelligence (Preview)](#add-entities-to-threat-intelligence-preview)-
-### Create automation rule conditions based on custom details (Preview)
+For information on looking up data to replace enrichment fields removed from the UEBA UserPeerAnalytics table, see [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table) for a sample query.
-You can set the value of a [custom detail surfaced in an incident](surface-custom-details-in-alerts.md) as a condition of an automation rule. Recall that custom details are data points in raw event log records that can be surfaced and displayed in alerts and the incidents generated from them. Through custom details you can get to the actual relevant content in your alerts without having to dig through query results.
+### Name fields removed from UEBA UserPeerAnalytics table
-Learn how to [add a condition based on a custom detail](create-manage-use-automation-rules.md#conditions-based-on-custom-details-preview).
-
-### Add advanced "Or" conditions to automation rules (Preview)
-
-You can now add OR conditions or condition groups to automation rules. These conditions allow you to combine several rules with identical actions into a single rule, greatly increasing your SOC's efficiency.
-
-For more information, see [Add advanced conditions to Microsoft Sentinel automation rules](add-advanced-conditions-to-automation-rules.md).
-
-### Heads up: Name fields being removed from UEBA UserPeerAnalytics table
-
-As of **September 30, 2022**, the UEBA engine will no longer perform automatic lookups of user IDs and resolve them into names. This change will result in the removal of four name fields from the *UserPeerAnalytics* table:
+As of **September 30, 2022**, the UEBA engine no longer automatically looks up user IDs and resolves them into names. This change resulted in the removal of four name fields from the *UserPeerAnalytics* table:
- UserName - UserPrincipalName - PeerUserName - PeerUserPrincipalName
-The corresponding ID fields remain part of the table, and any built-in queries and other operations will execute the appropriate name lookups in other ways (using the IdentityInfo table), so you shouldn't be affected by this change in nearly all circumstances.
+The corresponding ID fields remain part of the table, and any built-in queries and other operations execute the appropriate name lookups in other ways (using the IdentityInfo table), so in nearly all circumstances you shouldn't be affected by this change.
The only exception to this is if you've built custom queries or rules directly referencing any of these name fields. In this scenario, you can incorporate the following lookup queries into your own, so you can access the values that would have been in these name fields.
UserPeerAnalytics
| project AccountTenantId, AccountObjectId, PeerUserPrincipalNameIdentityInfo, PeerUserNameIdentityInfo ) on $left.AADTenantId == $right.AccountTenantId, $left.PeerUserId == $right.AccountObjectId ```
-If your original query referenced the user or peer names (not just their IDs), substitute this query in its entirety for the table name ("UserPeerAnalytics") in your original query.
-
-### Windows DNS Events via AMA connector (Preview)
-
-You can now use the new [Windows DNS Events via AMA connector](connect-dns-ama.md) to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLog` normalized schema table. You can then dive into your data to protect your DNS servers from threats and attacks.
-
-### Create and delete incidents manually (Preview)
-
-Microsoft Sentinel **incidents** have two main sources:
--- They are generated automatically by detection mechanisms that operate on the logs and alerts that Sentinel ingests from its connected data sources.--- They are ingested directly from other connected Microsoft security services (such as [Microsoft 365 Defender](microsoft-365-defender-sentinel-integration.md)) that created them.-
-However, in some cases, data from sources *not ingested into Microsoft Sentinel*, or events not recorded in any log, may justify launching an investigation. For this reason, Microsoft Sentinel now allows security analysts to manually create incidents from scratch for any type of event, regardless of its source or associated data, in order to manage and document the investigation.
-
-Since this capability raises the possibility that you'll create an incident in error, Microsoft Sentinel also allows you to delete incidents right from the portal as well.
--- [Learn more about creating incidents manually](create-incident-manually.md).-- [Learn more about deleting incidents](delete-incident.md).-
-### Add entities to threat intelligence (Preview)
-
-Microsoft Sentinel now allows you to flag entities as malicious, right from within the investigation graph. You'll then be able to view this indicator both in Logs and in the Threat Intelligence blade in Sentinel.
-
-Learn how to [add an entity to your threat intelligence](add-entity-to-threat-intelligence.md).
+If your original query referenced the user or peer names (not just their IDs), substitute this query in its entirety for the table name ("UserPeerAnalytics") in your original query.
## Next steps
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
There are some challenges with having a greedy approach, that is, keeping the pr
## Multiple queues or topics
-If a single queue or topic can't handle the expected, use multiple messaging entities. When using multiple entities, create a dedicated client for each entity, instead of using the same client for all entities.
+If a single queue or topic can't handle the expected number of messages, use multiple messaging entities. When using multiple entities, create a dedicated client for each entity, instead of using the same client for all entities.
More queues or topics mean that you have more entities to manage at deployment time. From a scalability perspective, there isn't much of a difference that you would notice: Service Bus already spreads the load across multiple logs internally, so whether you use two queues or topics or six won't make a material difference.
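
As a rough illustration of that guidance, the following Python sketch uses the `azure-servicebus` package to create a dedicated sender object per queue from a single `ServiceBusClient`, rather than funneling every entity's traffic through one shared sender. The connection string and queue names are placeholder assumptions.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder values - replace with your namespace connection string and entity names.
CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"
QUEUE_NAMES = ["orders-queue-1", "orders-queue-2"]  # example entities

with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    # One dedicated sender per queue, instead of one sender shared by all entities.
    senders = {name: client.get_queue_sender(queue_name=name) for name in QUEUE_NAMES}
    try:
        for name, sender in senders.items():
            sender.send_messages(ServiceBusMessage(f"Hello from {name}"))
            print(f"Sent a message to {name}")
    finally:
        for sender in senders.values():
            sender.close()
```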
service-bus-messaging Service Bus Python How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-topics-subscriptions.md
description: This tutorial shows you how to send messages to Azure Service Bus t
documentationcenter: python Previously updated : 02/16/2022 Last updated : 01/17/2023 ms.devlang: python-+ # Send messages to an Azure Service Bus topic and receive messages from subscriptions to the topic (Python)
> * [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md) > * [Python](service-bus-python-how-to-use-topics-subscriptions.md)
-This article shows you how to use Python to send messages a Service Bus topic and receive messages from a subscription to the topic.
-> [!NOTE]
-> This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for Python repository on GitHub](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/servicebus/azure-servicebus/samples).
+In this tutorial, you complete the following steps:
+
+1. Create a Service Bus namespace, using the Azure portal.
+2. Create a Service Bus topic, using the Azure portal.
+3. Create a Service Bus subscription to that topic, using the Azure portal.
+4. Write a Python application to use the [azure-servicebus](https://pypi.org/project/azure-servicebus/) package to:
+ * Send a set of messages to the topic.
+ * Receive those messages from the subscription.
+> [!NOTE]
+> This quickstart provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built Python samples for Azure Service Bus in the [Azure SDK for Python repository on GitHub](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/servicebus/azure-servicebus/samples).
## Prerequisites-- An Azure subscription. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). Note down the connection string, topic name, and a subscription name. You'll use only one subscription for this quickstart. -- Python 3.5 or higher, with the [Azure Python SDK][Azure Python package] package installed. For more information, see the [Python Installation Guide](/azure/developer/python/sdk/azure-sdk-install).+
+- An [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
+- Python 3.7 or higher, with the [Azure Python SDK](/azure/developer/python/sdk/azure-sdk-overview) package installed.
+
+>[!NOTE]
+> This tutorial works with samples that you can copy and run using Python. For instructions on how to create a Python application, see [Create and deploy a Python application to an Azure Website](../app-service/quickstart-python.md). For more information about installing packages used in this tutorial, see the [Python Installation Guide](/azure/developer/python/sdk/azure-sdk-install).
++++
+## Code setup
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+To follow this quickstart using passwordless authentication and your own Azure account:
+
+* Install the [Azure CLI](/cli/azure/install-azure-cli).
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate role to your resource later in the tutorial.
+* Run the tutorial code in the same terminal or command prompt.
+
+>[!IMPORTANT]
+> Make sure you sign in with `az login`. The `DefaultAzureCredential` class in the passwordless code uses the Azure CLI credentials to authenticate with Azure Active Directory (Azure AD).
+
+To use the passwordless code, you'll need to specify a:
+
+* fully qualified service bus namespace, for example: *\<service-bus-namespace>.servicebus.windows.net*
+* topic name
+* subscription name
+
+### [Connection string](#tab/connection-string)
+
+To follow this quickstart using a connection string to authenticate, you don't use your own Azure account. Instead, you'll use the connection string for the service bus namespace.
+
+To use the connection code, you'll need to specify a:
+
+* connection string
+* topic name
+* subscription name
+++
+### Use pip to install packages
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. To install the required Python packages for this Service Bus tutorial, open a command prompt that has Python in its path. Change the directory to the folder where you want to have your samples.
+
+1. Install packages:
+
+ ```shell
+ pip install azure-servicebus
+ pip install azure-identity
+ pip install aiohttp
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required Python packages for this Service Bus tutorial, open a command prompt that has Python in its path. Change the directory to the folder where you want to have your samples.
+
+1. Install package:
+
+ ```bash
+ pip install azure-servicebus
+ ```
++ ## Send messages to a topic
-1. Add the following import statement.
+The following sample code shows you how to send a batch of messages to a Service Bus topic. See code comments for details.
+
+Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/), create a file *send.py*, and add the following code into it.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. Add the following `import` statements.
+
+ ```python
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.servicebus import ServiceBusMessage
+ from azure.identity.aio import DefaultAzureCredential
+ ```
+
+2. Add the constants and define a credential.
+
+ ```python
+ FULLY_QUALIFIED_NAMESPACE = "FULLY_QUALIFIED_NAMESPACE"
+ TOPIC_NAME = "TOPIC_NAME"
+
+ credential = DefaultAzureCredential()
+ ```
+
+ > [!IMPORTANT]
+ > - Replace `FULLY_QUALIFIED_NAMESPACE` with the fully qualified namespace for your Service Bus namespace.
+ > - Replace `TOPIC_NAME` with the name of the topic.
+
+ In the preceding code, you used the Azure Identity client library's `DefaultAzureCredential` class. When the app runs locally during development, `DefaultAzureCredential` will automatically discover and authenticate to Azure using the account you logged into the Azure CLI with. When the app is deployed to Azure, `DefaultAzureCredential` can authenticate your app to Azure AD via a managed identity without any code changes.
+
+3. Add a method to send a single message.
+
+ ```python
+ async def send_single_message(sender):
+ # Create a Service Bus message
+ message = ServiceBusMessage("Single Message")
+ # send the message to the topic
+ await sender.send_messages(message)
+ print("Sent a single message")
+ ```
+
+    The sender is an object that acts as a client for the topic you created. You'll create it later and pass it as an argument to this function.
+
+4. Add a method to send a list of messages.
+
+ ```python
+ async def send_a_list_of_messages(sender):
+ # Create a list of messages
+ messages = [ServiceBusMessage("Message in list") for _ in range(5)]
+ # send the list of messages to the topic
+ await sender.send_messages(messages)
+ print("Sent a list of 5 messages")
+ ```
+
+5. Add a method to send a batch of messages.
+
+ ```python
+ async def send_batch_message(sender):
+ # Create a batch of messages
+ async with sender:
+ batch_message = await sender.create_message_batch()
+ for _ in range(10):
+ try:
+ # Add a message to the batch
+ batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
+ except ValueError:
+ # ServiceBusMessageBatch object reaches max_size.
+ # New ServiceBusMessageBatch object can be created here to send more data.
+ break
+ # Send the batch of messages to the topic
+ await sender.send_messages(batch_message)
+ print("Sent a batch of 10 messages")
+ ```
+
+6. Create a Service Bus client and then a topic sender object to send messages.
+
+ ```Python
+ async def run():
+ # create a Service Bus client using the credential.
+ async with ServiceBusClient(
+ fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
+ credential=credential,
+ logging_enable=True) as servicebus_client:
+ # Get a Topic Sender object to send messages to the topic
+ sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME)
+ async with sender:
+ # Send one message
+ await send_single_message(sender)
+ # Send a list of messages
+ await send_a_list_of_messages(sender)
+ # Send a batch of messages
+ await send_batch_message(sender)
+ # Close credential when no longer needed.
+ await credential.close()
+
+ asyncio.run(run())
+ print("Done sending messages")
+ print("--")
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. Add the following `import` statements.
```python
- from azure.servicebus import ServiceBusClient, ServiceBusMessage
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.servicebus import ServiceBusMessage
```+ 2. Add the following constants. ```python
- CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"
- TOPIC_NAME = "<TOPIC NAME>"
- SUBSCRIPTION_NAME = "<SUBSCRIPTION NAME>"
+ NAMESPACE_CONNECTION_STR = "NAMESPACE_CONNECTION_STRING"
+ TOPIC_NAME = "TOPIC_NAME"
``` > [!IMPORTANT]
- > - Replace `<NAMESPACE CONNECTION STRING>` with the connection string for your namespace.
- > - Replace `<TOPIC NAME>` with the name of the topic.
- > - Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+ > - Replace `NAMESPACE_CONNECTION_STRING` with the connection string for your namespace.
+ > - Replace `TOPIC_NAME` with the name of the topic.
+ 3. Add a method to send a single message. ```python
- def send_single_message(sender):
- # create a Service Bus message
+ async def send_single_message(sender):
+ # Create a Service Bus message
message = ServiceBusMessage("Single Message") # send the message to the topic
- sender.send_messages(message)
+ await sender.send_messages(message)
print("Sent a single message") ```
- The sender is a object that acts as a client for the topic you created. You'll create it later and send as an argument to this function.
+    The sender is an object that acts as a client for the topic you created. You'll create it later and pass it as an argument to this function.
+ 4. Add a method to send a list of messages. ```python
- def send_a_list_of_messages(sender):
- # create a list of messages
+ async def send_a_list_of_messages(sender):
+ # Create a list of messages
messages = [ServiceBusMessage("Message in list") for _ in range(5)] # send the list of messages to the topic
- sender.send_messages(messages)
+ await sender.send_messages(messages)
print("Sent a list of 5 messages") ```+ 5. Add a method to send a batch of messages. ```python
- def send_batch_message(sender):
- # create a batch of messages
- batch_message = sender.create_message_batch()
- for _ in range(10):
- try:
- # add a message to the batch
- batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
- except ValueError:
- # ServiceBusMessageBatch object reaches max_size.
- # New ServiceBusMessageBatch object can be created here to send more data.
- break
- # send the batch of messages to the topic
- sender.send_messages(batch_message)
+ async def send_batch_message(sender):
+ # Create a batch of messages
+ async with sender:
+ batch_message = await sender.create_message_batch()
+ for _ in range(10):
+ try:
+ # Add a message to the batch
+ batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
+ except ValueError:
+ # ServiceBusMessageBatch object reaches max_size.
+ # New ServiceBusMessageBatch object can be created here to send more data.
+ break
+ # Send the batch of messages to the topic
+ await sender.send_messages(batch_message)
print("Sent a batch of 10 messages") ```+ 6. Create a Service Bus client and then a topic sender object to send messages. ```python
- # create a Service Bus client using the connection string
- servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR, logging_enable=True)
- with servicebus_client:
- # get a Topic Sender object to send messages to the topic
- sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME)
- with sender:
- # send one message
- send_single_message(sender)
- # send a list of messages
- send_a_list_of_messages(sender)
- # send a batch of messages
- send_batch_message(sender)
+ async def run():
+ # create a Service Bus client using the connection string
+ async with ServiceBusClient.from_connection_string(
+ conn_str=NAMESPACE_CONNECTION_STR,
+ logging_enable=True) as servicebus_client:
+ # Get a Topic Sender object to send messages to the topic
+ sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME)
+ async with sender:
+ # Send one message
+ await send_single_message(sender)
+ # Send a list of messages
+ await send_a_list_of_messages(sender)
+ # Send a batch of messages
+ await send_batch_message(sender)
+ asyncio.run(run())
print("Done sending messages") print("--") ```
-
+++ ## Receive messages from a subscription
-Add the following code after the print statement. This code continually receives new messages until it doesn't receive any new messages for 5 (`max_wait_time`) seconds.
-
-```python
-with servicebus_client:
- # get the Subscription Receiver object for the subscription
- receiver = servicebus_client.get_subscription_receiver(topic_name=TOPIC_NAME, subscription_name=SUBSCRIPTION_NAME, max_wait_time=5)
- with receiver:
- for msg in receiver:
- print("Received: " + str(msg))
- # complete the message so that the message is removed from the subscription
- receiver.complete_message(msg)
-```
-## Full code
-
-```python
-from azure.servicebus import ServiceBusClient, ServiceBusMessage
-
-CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"
-TOPIC_NAME = "<TOPIC NAME>"
-SUBSCRIPTION_NAME = "<SUBSCRIPTION NAME>"
-
-def send_single_message(sender):
- message = ServiceBusMessage("Single Message")
- sender.send_messages(message)
- print("Sent a single message")
-
-def send_a_list_of_messages(sender):
- messages = [ServiceBusMessage("Message in list") for _ in range(5)]
- sender.send_messages(messages)
- print("Sent a list of 5 messages")
-
-def send_batch_message(sender):
- batch_message = sender.create_message_batch()
- for _ in range(10):
- try:
- batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
- except ValueError:
- # ServiceBusMessageBatch object reaches max_size.
- # New ServiceBusMessageBatch object can be created here to send more data.
- break
- sender.send_messages(batch_message)
- print("Sent a batch of 10 messages")
-
-servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR, logging_enable=True)
-
-with servicebus_client:
- sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME)
- with sender:
- send_single_message(sender)
- send_a_list_of_messages(sender)
- send_batch_message(sender)
-
-print("Done sending messages")
-print("--")
-
-with servicebus_client:
- receiver = servicebus_client.get_subscription_receiver(topic_name=TOPIC_NAME, subscription_name=SUBSCRIPTION_NAME, max_wait_time=5)
- with receiver:
- for msg in receiver:
- print("Received: " + str(msg))
- receiver.complete_message(msg)
-```
+The following sample code shows you how to receive messages from a subscription. The code receives a batch of available messages (up to 20 per call), waiting up to 5 seconds (`max_wait_time`) for messages to arrive, and then completes each message so that it's removed from the subscription.
+
+Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/), create a file *recv.py*, and add the following code into it.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. Similar to the send sample, add `import` statements, define constants that you should replace with your own values, and define a credential.
+
+ ```python
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.identity.aio import DefaultAzureCredential
+
+ FULLY_QUALIFIED_NAMESPACE = "FULLY_QUALIFIED_NAMESPACE"
+ SUBSCRIPTION_NAME = "SUBSCRIPTION_NAME"
+ TOPIC_NAME = "TOPIC_NAME"
+
+ credential = DefaultAzureCredential()
+ ```
+
+2. Create a Service Bus client and then a subscription receiver object to receive messages.
+
+ ```python
+ async def run():
+ # create a Service Bus client using the credential
+ async with ServiceBusClient(
+ fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
+ credential=credential,
+ logging_enable=True) as servicebus_client:
+
+ async with servicebus_client:
+ # get the Subscription Receiver object for the subscription
+ receiver = servicebus_client.get_subscription_receiver(topic_name=TOPIC_NAME,
+ subscription_name=SUBSCRIPTION_NAME, max_wait_time=5)
+ async with receiver:
+ received_msgs = await receiver.receive_messages(max_wait_time=5, max_message_count=20)
+ for msg in received_msgs:
+ print("Received: " + str(msg))
+ # complete the message so that the message is removed from the subscription
+ await receiver.complete_message(msg)
+ # Close credential when no longer needed.
+ await credential.close()
+ ```
+
+3. Call the `run` method.
+
+ ```python
+ asyncio.run(run())
+ ```
+
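The receiver above makes a single `receive_messages` call, which returns up to 20 messages. If you want to keep draining the subscription until no new messages arrive within `max_wait_time`, you can loop until the call returns an empty list. The following is a minimal sketch; `drain_subscription` is an illustrative name and it assumes the receiver created in step 2.

```python
async def drain_subscription(receiver):
    # Illustrative helper: keep receiving until no messages arrive within max_wait_time.
    while True:
        received_msgs = await receiver.receive_messages(max_wait_time=5, max_message_count=20)
        if not received_msgs:
            break
        for msg in received_msgs:
            print("Received: " + str(msg))
            # Complete the message so that it's removed from the subscription.
            await receiver.complete_message(msg)
```

You would call `await drain_subscription(receiver)` in place of the single `receive_messages` call inside the `async with receiver:` block.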
+### [Connection string](#tab/connection-string)
+
+1. Similar to the send sample, add `import` statements and define constants that you should replace with your own values.
+
+ ```python
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+
+ NAMESPACE_CONNECTION_STR = "NAMESPACE_CONNECTION_STRING"
+ SUBSCRIPTION_NAME = "SUBSCRIPTION_NAME"
+ TOPIC_NAME = "TOPIC_NAME"
+ ```
+
+2. Create a Service Bus client and then a subscription receiver object to receive messages.
+
+ ```python
+ async def run():
+ # create a Service Bus client using the connection string
+ async with ServiceBusClient.from_connection_string(
+ conn_str=NAMESPACE_CONNECTION_STR,
+ logging_enable=True) as servicebus_client:
+
+ async with servicebus_client:
+ # get the Subscription Receiver object for the subscription
+ receiver = servicebus_client.get_subscription_receiver(topic_name=TOPIC_NAME,
+ subscription_name=SUBSCRIPTION_NAME, max_wait_time=5)
+ async with receiver:
+ received_msgs = await receiver.receive_messages(max_wait_time=5, max_message_count=20)
+ for msg in received_msgs:
+ print("Received: " + str(msg))
+ # complete the message so that the message is removed from the subscription
+                    await receiver.complete_message(msg)
+ ```
+
+3. Call the `run` method.
+
+ ```python
+ asyncio.run(run())
+ ```
++ ## Run the app
-When you run the application, you should see the following output:
+
+Open a command prompt that has Python in its path, and then run the code to send messages to the topic and receive them from the subscription.
+
+```shell
+python send.py; python recv.py
+```
+
+You should see the following output:
```console Sent a single message
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Title: Supported Resource Types through Azure Resource Health | Microsoft Docs description: Supported Resource Types through Azure Resource health Previously updated : 12/07/2021 Last updated : 01/23/2023 # Resource types and health checks in Azure resource health
Below is a complete list of all the checks executed through resource health by r
|| |<ul><li>Are core services available on the HDInsight cluster?</li><li>Can the HDInsight cluster access the key for BYOK encryption at rest?</li></ul>|
+## Microsoft.HybridCompute/machines
+|Executed Checks|
+||
+|<ul><li>Is the agent on your server connected to Azure and sending heartbeats?</li></ul>|
+ ## Microsoft.IoTCentral/IoTApps |Executed Checks| ||
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/overview.md
Azure Static Web Apps is a service that automatically builds and deploys full st
:::image type="content" source="media/overview/azure-static-web-apps-overview.png" alt-text="Azure Static Web Apps overview diagram.":::
-The workflow of Azure Static Web Apps is tailored to a developer's daily workflow. Apps are built and deployed based off of code changes.
+The workflow of Azure Static Web Apps is tailored to a developer's daily workflow. Apps are built and deployed based on code changes.
When you create an Azure Static Web Apps resource, Azure interacts directly with GitHub or Azure DevOps, to monitor a branch of your choice. Every time you push commits or accept pull requests into the watched branch, a build automatically runs and your app and API deploys to Azure.
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Previously updated : 06/22/2022 Last updated : 01/23/2023
Point-in-time restore for block blobs has the following limitations and known is
- Performing a customer-managed failover on a storage account resets the earliest possible restore point for that storage account. For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you cannot restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past. - Snapshots are not created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - Point-in-time restore is not supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2.
+- Point-in-time restore is not supported when a private endpoint is enabled on the storage account.
> [!IMPORTANT] > If you restore block blobs to a point that is earlier than September 22, 2020, preview limitations for point-in-time restore will be in effect. Microsoft recommends that you choose a restore point that is equal to or later than September 22, 2020 to take advantage of the generally available point-in-time restore feature.
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
You can instruct Azure Storage to save diagnostics logs for read, write, and del
3. Ensure **Status** is set to **On**, and select the **services** for which you'd like to enable logging. > [!div class="mx-imgBorder"]
- > ![Configure logging in the Azure portal.](./media/manage-storage-analytics-logs/enable-diagnostics.png)
+ > ![Configure logging in the Azure portal.](./media/manage-storage-analytics-logs/enable-diagnostics-retention.png)
4. To retain logs, ensure that the **Delete data** check box is selected. Then, set the number of days that you would like log data to be retained by moving the slider control beneath the check box, or by directly modifying the value that appears in the text box next to the slider control. The default for new storage accounts is seven days. If you do not want to set a retention policy, leave the **Delete data** checkbox unchecked. If there is no retention policy, it is up to you to delete the log data.
Log data can accumulate in your account over time which can increase the cost of
3. Ensure that the **Delete data** check box is selected. Then, set the number of days that you would like log data to be retained by moving the slider control beneath the check box, or by directly modifying the value that appears in the text box next to the slider control. > [!div class="mx-imgBorder"]
- > ![Modify the retention period in the Azure portal](./media/manage-storage-analytics-logs/modify-retention-period.png)
+ > ![Modify the retention period in the Azure portal](./media/manage-storage-analytics-logs/enable-diagnostics-retention.png)
The default number of days for new storage accounts is seven days. If you do not want to set a retention policy, leave the **Delete data** checkbox unchecked. If there is no retention policy, it is up to you to delete the monitoring data.
storage Storage Account Migrate Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-migrate-classic.md
+
+ Title: Migrate a classic storage account
+
+description: Learn how to migrate your classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 1, 2024.
+++++ Last updated : 01/20/2023+++++
+# Migrate a classic storage account to Azure Resource Manager
+
+Microsoft will retire classic storage accounts on August 1, 2024. To preserve the data in any classic storage accounts, you must migrate them to the Azure Resource Manager deployment model by that date. After you migrate your account, all of the benefits of the Azure Resource Manager deployment model will be available for that account. For more information about the deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+
+This article describes how to migrate your classic storage accounts to the Azure Resource Manager deployment model.
+
+## Migrate a classic storage account
+
+# [Portal](#tab/azure-portal)
+
+To migrate a classic storage account to the Azure Resource Manager deployment model with the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to your classic storage account.
+1. In the **Settings** section, select **Migrate to ARM**.
+1. Select **Validate** to determine migration feasibility.
+
+ :::image type="content" source="./media/storage-account-migrate-classic/validate-storage-account.png" alt-text="Screenshot showing how to migrate your classic storage account to Azure Resource Manager.":::
+
+1. After a successful validation, select **Prepare** to begin the migration.
+1. Type **yes** to confirm, then select **Commit** to complete the migration.
+
+# [PowerShell](#tab/azure-powershell)
+
+To migrate a classic storage account to the Azure Resource Manager deployment model with PowerShell, first validate that the account is ready for migration by running the following command. Remember to replace the placeholder values in brackets with your own values:
+
+```azurepowershell
+$storageAccountName = "<storage-account>"
+Move-AzureStorageAccount -Validate -StorageAccountName $storageAccountName
+```
+
+Next, prepare the account for migration:
+
+```azurepowershell
+Move-AzureStorageAccount -Prepare -StorageAccountName $storageAccountName
+```
+
+Check the configuration for the prepared storage account with either Azure PowerShell or the Azure portal. If you're not ready for migration, use the following command to revert your account to its previous state:
+
+```azurepowershell
+Move-AzureStorageAccount -Abort -StorageAccountName $storageAccountName
+```
+
+Finally, when you are satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
+
+```azurepowershell
+Move-AzureStorageAccount -Commit -StorageAccountName $storageAccountName
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To migrate a classic storage account to the Azure Resource Manager deployment model with the Azure CLI, first prepare the account for migration by running the following command. Remember to replace the placeholder values in brackets with your own values:
+
+```azurecli
+azure storage account prepare-migration <storage-account>
+```
+
+Check the configuration for the prepared storage account with either Azure CLI or the Azure portal. If you're not ready for migration, use the following command to revert your account to its previous state:
+
+```azurecli
+azure storage account abort-migration <storage-account>
+```
+
+Finally, when you are satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
+
+```azurecli
+azure storage account commit-migration <storage-account>
+```
+++
+## See also
+
+- [Create a storage account](storage-account-create.md)
+- [Move an Azure Storage account to another region](storage-account-move.md)
+- [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md)
+- [Get storage account configuration information](storage-account-get-info.md)
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
Alert notifications include details of the incident, and recommendations on how
## Vulnerability assessment
-[SQL vulnerability assessment](/azure/azure-sql/database/sql-vulnerability-assessment) is part of the Microsoft Defender for SQL offering. It continually monitors the data warehouse, ensuring that databases are always maintained at a high level of security and that organizational policies are met. It provides a comprehensive security report along with actionable remediation steps for each issue found, making it easy to proactively manage database security stature even if you're not a security expert.
+[SQL vulnerability assessment](/sql/relational-databases/security/sql-vulnerability-assessment) is part of the Microsoft Defender for SQL offering. It continually monitors the data warehouse, ensuring that databases are always maintained at a high level of security and that organizational policies are met. It provides a comprehensive security report along with actionable remediation steps for each issue found, making it easy to proactively manage database security posture even if you're not a security expert.
> [!NOTE] > SQL vulnerability assessment applies to Azure Synapse and dedicated SQL pool (formerly SQL DW). It doesn't apply to serverless SQL pool or Apache Spark pool.
synapse-analytics Tutorial Score Model Predict Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md
Make sure all prerequisites are in place before following these steps for using
1. **Import libraries:** Import the following libraries to use PREDICT in spark session. ```python
- #Import libraries
- from pyspark.sql.functions import col, pandas_udf,udf,lit
- from azureml.core import Workspace
- from azureml.core.authentication import ServicePrincipalAuthentication
- import azure.synapse.ml.predict as pcontext
- import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
+ #Import libraries
+ from pyspark.sql.functions import col, pandas_udf,udf,lit
+ from azureml.core import Workspace
+ from azureml.core.authentication import ServicePrincipalAuthentication
+ import azure.synapse.ml.predict as pcontext
+ import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
    ``` 2. **Set parameters using variables:** The Synapse ADLS data path and model URI need to be set using input variables. You also need to define the runtime, which is "mlflow", and the data type that the model output returns. Note that all data types supported in PySpark are also supported through PREDICT.
Make sure all prerequisites are in place before following these steps for using
> Before running this script, update it with the URI for ADLS Gen2 data file along with model output return data type and ADLS/AML URI for the model file. ```python
- #Set input data path
- DATA_FILE = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<file path>"
-
- #Set model URI
- #Set AML URI, if trained model is registered in AML
- AML_MODEL_URI = "<aml model uri>" #In URI ":x" signifies model version in AML. You can choose which model version you want to run. If ":x" is not provided then by default latest version will be picked.
-
- #Set ADLS URI, if trained model is uploaded in ADLS
- ADLS_MODEL_URI = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<model mlflow folder path>"
-
- #Define model return type
- RETURN_TYPES = "<data_type>" # for ex: int, float etc. PySpark data types are supported
-
- #Define model runtime. This supports only mlflow
- RUNTIME = "mlflow"
+ #Set input data path
+ DATA_FILE = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<file path>"
+
+ #Set model URI
+ #Set AML URI, if trained model is registered in AML
+ AML_MODEL_URI = "<aml model uri>" #In URI ":x" signifies model version in AML. You can choose which model version you want to run. If ":x" is not provided then by default latest version will be picked.
+
+ #Set ADLS URI, if trained model is uploaded in ADLS
+ ADLS_MODEL_URI = "abfss://<filesystemname>@<account name>.dfs.core.windows.net/<model mlflow folder path>"
+
+ #Define model return type
+ RETURN_TYPES = "<data_type>" # for ex: int, float etc. PySpark data types are supported
+
+ #Define model runtime. This supports only mlflow
+ RUNTIME = "mlflow"
``` 3. **Ways to authenticate AML workspace:** If the model is stored in the default ADLS account of Synapse workspace, then you do not need any further authentication setup. If the model is registered in Azure Machine Learning, then you can choose either of the following two supported ways of authentication.
Make sure all prerequisites are in place before following these steps for using
- **Through service principal:** You can use service principal client ID and secret directly to authenticate to AML workspace. Service principal must have "Contributor" access to the AML workspace. ```python
- #AML workspace authentication using service principal
- AZURE_TENANT_ID = "<tenant_id>"
- AZURE_CLIENT_ID = "<client_id>"
- AZURE_CLIENT_SECRET = "<client_secret>"
-
- AML_SUBSCRIPTION_ID = "<subscription_id>"
- AML_RESOURCE_GROUP = "<resource_group_name>"
- AML_WORKSPACE_NAME = "<aml_workspace_name>"
-
- svc_pr = ServicePrincipalAuthentication(
- tenant_id=AZURE_TENANT_ID,
- service_principal_id=AZURE_CLIENT_ID,
- service_principal_password=AZURE_CLIENT_SECRET
- )
-
- ws = Workspace(
- workspace_name = AML_WORKSPACE_NAME,
- subscription_id = AML_SUBSCRIPTION_ID,
- resource_group = AML_RESOURCE_GROUP,
- auth=svc_pr
- )
+ #AML workspace authentication using service principal
+ AZURE_TENANT_ID = "<tenant_id>"
+ AZURE_CLIENT_ID = "<client_id>"
+ AZURE_CLIENT_SECRET = "<client_secret>"
+
+ AML_SUBSCRIPTION_ID = "<subscription_id>"
+ AML_RESOURCE_GROUP = "<resource_group_name>"
+ AML_WORKSPACE_NAME = "<aml_workspace_name>"
+
+ svc_pr = ServicePrincipalAuthentication(
+ tenant_id=AZURE_TENANT_ID,
+ service_principal_id=AZURE_CLIENT_ID,
+ service_principal_password=AZURE_CLIENT_SECRET
+ )
+
+ ws = Workspace(
+ workspace_name = AML_WORKSPACE_NAME,
+ subscription_id = AML_SUBSCRIPTION_ID,
+ resource_group = AML_RESOURCE_GROUP,
+ auth=svc_pr
+ )
``` - **Through linked service:** You can use linked service to authenticate to AML workspace. Linked service can use "service principal" or Synapse workspace's "Managed Service Identity (MSI)" for authentication. "Service principal" or "Managed Service Identity (MSI)" must have "Contributor" access to the AML workspace. ```python
- #AML workspace authentication using linked service
- from notebookutils.mssparkutils import azureML
- ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both
+ #AML workspace authentication using linked service
+ from notebookutils.mssparkutils import azureML
+ ws = azureML.getWorkspace("<linked_service_name>") # "<linked_service_name>" is the linked service name, not AML workspace name. Also, linked service supports MSI and service principal both
``` 4. **Enable PREDICT in spark session:** Set the spark configuration `spark.synapse.ml.predict.enabled` to `true` to enable the library. ```python
- #Enable SynapseML predict
- spark.conf.set("spark.synapse.ml.predict.enabled","true")
+ #Enable SynapseML predict
+ spark.conf.set("spark.synapse.ml.predict.enabled","true")
``` 5. **Bind model in spark session:** Bind model with required inputs so that the model can be referred in the spark session. Also define alias so that you can use same alias in the PREDICT call.
Make sure all prerequisites are in place before following these steps for using
> Update model alias and model uri in this script before running it. ```python
- #Bind model within Spark session
- model = pcontext.bind_model(
- return_types=RETURN_TYPES,
- runtime=RUNTIME,
- model_alias="<random_alias_name>", #This alias will be used in PREDICT call to refer this model
- model_uri=ADLS_MODEL_URI, #In case of AML, it will be AML_MODEL_URI
- aml_workspace=ws #This is only for AML. In case of ADLS, this parameter can be removed
- ).register()
+ #Bind model within Spark session
+ model = pcontext.bind_model(
+ return_types=RETURN_TYPES,
+ runtime=RUNTIME,
+ model_alias="<random_alias_name>", #This alias will be used in PREDICT call to refer this model
+ model_uri=ADLS_MODEL_URI, #In case of AML, it will be AML_MODEL_URI
+ aml_workspace=ws #This is only for AML. In case of ADLS, this parameter can be removed
+ ).register()
``` 6. **Read data from ADLS:** Read data from ADLS. Create spark dataframe and a view on top of data frame.
Make sure all prerequisites are in place before following these steps for using
> Update view name in this script before running it. ```python
- #Read data from ADLS
- df = spark.read \
- .format("csv") \
- .option("header", "true") \
- .csv(DATA_FILE,
- inferSchema=True)
- df.createOrReplaceTempView('<view_name>')
+ #Read data from ADLS
+ df = spark.read \
+ .format("csv") \
+ .option("header", "true") \
+ .csv(DATA_FILE,
+ inferSchema=True)
+ df.createOrReplaceTempView('<view_name>')
    ``` 7. **Generate score using PREDICT:** You can call PREDICT in three ways: using the Spark SQL API, using a user-defined function (UDF), and using the Transformer API. The following are examples.
Make sure all prerequisites are in place before following these steps for using
> Update the model alias name, view name, and comma separated model input column name in this script before running it. Comma separated model input columns are the same as those used while training the model. ```python
- #Call PREDICT using Spark SQL API
-
- predictions = spark.sql(
- """
- SELECT PREDICT('<random_alias_name>',
- <comma_separated_model_input_column_name>) AS predict
- FROM <view_name>
- """
- ).show()
+ #Call PREDICT using Spark SQL API
+
+ predictions = spark.sql(
+ """
+ SELECT PREDICT('<random_alias_name>',
+ <comma_separated_model_input_column_name>) AS predict
+ FROM <view_name>
+ """
+ ).show()
``` ```python
- #Call PREDICT using user defined function (UDF)
-
- df = df[<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
-
- df.withColumn("PREDICT",model.udf(lit("<random_alias_name>"),*df.columns)).show()
+ #Call PREDICT using user defined function (UDF)
+
+ df = df[<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
+
+ df.withColumn("PREDICT",model.udf(lit("<random_alias_name>"),*df.columns)).show()
``` ```python
- #Call PREDICT using Transformer API
-
- columns = [<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
-
- tranformer = model.create_transformer().setInputCols(columns).setOutputCol("PREDICT")
-
- tranformer.transform(df).show()
+ #Call PREDICT using Transformer API
+
+ columns = [<comma_separated_model_input_column_name>] # for ex. df["empid","empname"]
+
+    transformer = model.create_transformer().setInputCols(columns).setOutputCol("PREDICT")
+
+    transformer.transform(df).show()
``` ## Sklearn example using PREDICT
Make sure all prerequisites are in place before following these steps for using
1. Import libraries and read the training dataset from ADLS. ```python
- # Import libraries and read training dataset from ADLS
-
- import fsspec
- import pandas
- from fsspec.core import split_protocol
-
- adls_account_name = 'xyz' #Provide exact ADLS account name
- adls_account_key = 'xyz' #Provide exact ADLS account key
-
- fsspec_handle = fsspec.open('abfs[s]://<container>/<path-to-file>', account_name=adls_account_name, account_key=adls_account_key)
-
- with fsspec_handle.open() as f:
- train_df = pandas.read_csv(f)
+ # Import libraries and read training dataset from ADLS
+
+ import fsspec
+ import pandas
+ from fsspec.core import split_protocol
+
+ adls_account_name = 'xyz' #Provide exact ADLS account name
+ adls_account_key = 'xyz' #Provide exact ADLS account key
+
+ fsspec_handle = fsspec.open('abfs[s]://<container>/<path-to-file>', account_name=adls_account_name, account_key=adls_account_key)
+
+ with fsspec_handle.open() as f:
+ train_df = pandas.read_csv(f)
``` 1. Train model and generate mlflow artifacts. ```python
- # Train model and generate mlflow artifacts
-
- import os
- import shutil
- import mlflow
- import json
- from mlflow.utils import model_utils
- import numpy as np
- import pandas as pd
- from sklearn.linear_model import LinearRegression
-
-
- class LinearRegressionModel():
- _ARGS_FILENAME = 'args.json'
- FEATURES_KEY = 'features'
- TARGETS_KEY = 'targets'
- TARGETS_PRED_KEY = 'targets_pred'
-
- def __init__(self, fit_intercept, nb_input_features=9, nb_output_features=1):
- self.fit_intercept = fit_intercept
- self.nb_input_features = nb_input_features
- self.nb_output_features = nb_output_features
-
- def get_args(self):
- args = {
- 'nb_input_features': self.nb_input_features,
- 'nb_output_features': self.nb_output_features,
- 'fit_intercept': self.fit_intercept
- }
- return args
-
- def create_model(self):
- self.model = LinearRegression(fit_intercept=self.fit_intercept)
-
- def train(self, dataset):
-
- features = np.stack([sample for sample in iter(
- dataset[LinearRegressionModel.FEATURES_KEY])], axis=0)
-
- targets = np.stack([sample for sample in iter(
- dataset[LinearRegressionModel.TARGETS_KEY])], axis=0)
-
-
- self.model.fit(features, targets)
-
- def predict(self, dataset):
- features = np.stack([sample for sample in iter(
- dataset[LinearRegressionModel.FEATURES_KEY])], axis=0)
- targets_pred = self.model.predict(features)
- return targets_pred
-
- def save(self, path):
- if os.path.exists(path):
- shutil.rmtree(path)
-
- # save the sklearn model with mlflow
- mlflow.sklearn.save_model(self.model, path)
-
- # save args
- self._save_args(path)
-
- def _save_args(self, path):
- args_filename = os.path.join(path, LinearRegressionModel._ARGS_FILENAME)
- with open(args_filename, 'w') as f:
- args = self.get_args()
- json.dump(args, f)
-
-
- def train(train_df, output_model_path):
- print(f"Start to train LinearRegressionModel.")
-
- # Initialize input dataset
- dataset = train_df.to_numpy()
- datasets = {}
- datasets['targets'] = dataset[:, -1]
- datasets['features'] = dataset[:, :9]
-
- # Initialize model class obj
- model_class = LinearRegressionModel(fit_intercept=10)
- with mlflow.start_run(nested=True) as run:
- model_class.create_model()
- model_class.train(datasets)
- model_class.save(output_model_path)
- print(model_class.predict(datasets))
-
-
- train(train_df, './artifacts/output')
+ # Train model and generate mlflow artifacts
+
+ import os
+ import shutil
+ import mlflow
+ import json
+ from mlflow.utils import model_utils
+ import numpy as np
+ import pandas as pd
+ from sklearn.linear_model import LinearRegression
++
+ class LinearRegressionModel():
+ _ARGS_FILENAME = 'args.json'
+ FEATURES_KEY = 'features'
+ TARGETS_KEY = 'targets'
+ TARGETS_PRED_KEY = 'targets_pred'
+
+ def __init__(self, fit_intercept, nb_input_features=9, nb_output_features=1):
+ self.fit_intercept = fit_intercept
+ self.nb_input_features = nb_input_features
+ self.nb_output_features = nb_output_features
+
+ def get_args(self):
+ args = {
+ 'nb_input_features': self.nb_input_features,
+ 'nb_output_features': self.nb_output_features,
+ 'fit_intercept': self.fit_intercept
+ }
+ return args
+
+ def create_model(self):
+ self.model = LinearRegression(fit_intercept=self.fit_intercept)
+
+ def train(self, dataset):
+
+ features = np.stack([sample for sample in iter(
+ dataset[LinearRegressionModel.FEATURES_KEY])], axis=0)
+
+ targets = np.stack([sample for sample in iter(
+ dataset[LinearRegressionModel.TARGETS_KEY])], axis=0)
++
+ self.model.fit(features, targets)
+
+ def predict(self, dataset):
+ features = np.stack([sample for sample in iter(
+ dataset[LinearRegressionModel.FEATURES_KEY])], axis=0)
+ targets_pred = self.model.predict(features)
+ return targets_pred
+
+ def save(self, path):
+ if os.path.exists(path):
+ shutil.rmtree(path)
+
+ # save the sklearn model with mlflow
+ mlflow.sklearn.save_model(self.model, path)
+
+ # save args
+ self._save_args(path)
+
+ def _save_args(self, path):
+ args_filename = os.path.join(path, LinearRegressionModel._ARGS_FILENAME)
+ with open(args_filename, 'w') as f:
+ args = self.get_args()
+ json.dump(args, f)
++
+ def train(train_df, output_model_path):
+ print(f"Start to train LinearRegressionModel.")
+
+ # Initialize input dataset
+ dataset = train_df.to_numpy()
+ datasets = {}
+ datasets['targets'] = dataset[:, -1]
+ datasets['features'] = dataset[:, :9]
+
+ # Initialize model class obj
+ model_class = LinearRegressionModel(fit_intercept=10)
+ with mlflow.start_run(nested=True) as run:
+ model_class.create_model()
+ model_class.train(datasets)
+ model_class.save(output_model_path)
+ print(model_class.predict(datasets))
++
+ train(train_df, './artifacts/output')
``` 1. Store model MLFLOW artifacts in ADLS or register in AML. ```python
- # Store model MLFLOW artifacts in ADLS
-
- STORAGE_PATH = 'abfs[s]://<container>/<path-to-store-folder>'
-
- protocol, _ = split_protocol(STORAGE_PATH)
- print (protocol)
-
- storage_options = {
- 'account_name': adls_account_name,
- 'account_key': adls_account_key
- }
- fs = fsspec.filesystem(protocol, **storage_options)
- fs.put(
- './artifacts/output',
- STORAGE_PATH,
- recursive=True, overwrite=True)
+ # Store model MLFLOW artifacts in ADLS
+
+ STORAGE_PATH = 'abfs[s]://<container>/<path-to-store-folder>'
+
+ protocol, _ = split_protocol(STORAGE_PATH)
+ print (protocol)
+
+ storage_options = {
+ 'account_name': adls_account_name,
+ 'account_key': adls_account_key
+ }
+ fs = fsspec.filesystem(protocol, **storage_options)
+ fs.put(
+ './artifacts/output',
+ STORAGE_PATH,
+ recursive=True, overwrite=True)
``` ```python
- # Register model MLFLOW artifacts in AML
-
- from azureml.core import Workspace, Model
- from azureml.core.authentication import ServicePrincipalAuthentication
-
- AZURE_TENANT_ID = "xyz"
- AZURE_CLIENT_ID = "xyz"
- AZURE_CLIENT_SECRET = "xyz"
-
- AML_SUBSCRIPTION_ID = "xyz"
- AML_RESOURCE_GROUP = "xyz"
- AML_WORKSPACE_NAME = "xyz"
-
- svc_pr = ServicePrincipalAuthentication(
- tenant_id=AZURE_TENANT_ID,
- service_principal_id=AZURE_CLIENT_ID,
- service_principal_password=AZURE_CLIENT_SECRET
- )
-
- ws = Workspace(
- workspace_name = AML_WORKSPACE_NAME,
- subscription_id = AML_SUBSCRIPTION_ID,
- resource_group = AML_RESOURCE_GROUP,
- auth=svc_pr
- )
-
- model = Model.register(
- model_path="./artifacts/output",
- model_name="xyz",
- workspace=ws,
- )
+ # Register model MLFLOW artifacts in AML
+
+ from azureml.core import Workspace, Model
+ from azureml.core.authentication import ServicePrincipalAuthentication
+
+ AZURE_TENANT_ID = "xyz"
+ AZURE_CLIENT_ID = "xyz"
+ AZURE_CLIENT_SECRET = "xyz"
+
+ AML_SUBSCRIPTION_ID = "xyz"
+ AML_RESOURCE_GROUP = "xyz"
+ AML_WORKSPACE_NAME = "xyz"
+
+ svc_pr = ServicePrincipalAuthentication(
+ tenant_id=AZURE_TENANT_ID,
+ service_principal_id=AZURE_CLIENT_ID,
+ service_principal_password=AZURE_CLIENT_SECRET
+ )
+
+ ws = Workspace(
+ workspace_name = AML_WORKSPACE_NAME,
+ subscription_id = AML_SUBSCRIPTION_ID,
+ resource_group = AML_RESOURCE_GROUP,
+ auth=svc_pr
+ )
+
+ model = Model.register(
+ model_path="./artifacts/output",
+ model_name="xyz",
+ workspace=ws,
+ )
``` 1. Set required parameters using variables. ```python
- # If using ADLS uploaded model
-
- import pandas as pd
- from pyspark.sql import SparkSession
- from pyspark.sql.functions import col, pandas_udf,udf,lit
- import azure.synapse.ml.predict as pcontext
- import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
-
- DATA_FILE = "abfss://xyz@xyz.dfs.core.windows.net/xyz.csv"
- ADLS_MODEL_URI_SKLEARN = "abfss://xyz@xyz.dfs.core.windows.net/mlflow/sklearn/ e2e_linear_regression/"
- RETURN_TYPES = "INT"
- RUNTIME = "mlflow"
+ # If using ADLS uploaded model
+
+ import pandas as pd
+ from pyspark.sql import SparkSession
+ from pyspark.sql.functions import col, pandas_udf,udf,lit
+ import azure.synapse.ml.predict as pcontext
+ import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
+
+ DATA_FILE = "abfss://xyz@xyz.dfs.core.windows.net/xyz.csv"
+    ADLS_MODEL_URI_SKLEARN = "abfss://xyz@xyz.dfs.core.windows.net/mlflow/sklearn/e2e_linear_regression/"
+ RETURN_TYPES = "INT"
+ RUNTIME = "mlflow"
``` ```python
- # If using AML registered model
-
- from pyspark.sql.functions import col, pandas_udf,udf,lit
- from azureml.core import Workspace
- from azureml.core.authentication import ServicePrincipalAuthentication
- import azure.synapse.ml.predict as pcontext
- import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
-
- DATA_FILE = "abfss://xyz@xyz.dfs.core.windows.net/xyz.csv"
- AML_MODEL_URI_SKLEARN = "aml://xyz"
- RETURN_TYPES = "INT"
- RUNTIME = "mlflow"
+ # If using AML registered model
+
+ from pyspark.sql.functions import col, pandas_udf,udf,lit
+ from azureml.core import Workspace
+ from azureml.core.authentication import ServicePrincipalAuthentication
+ import azure.synapse.ml.predict as pcontext
+ import azure.synapse.ml.predict.utils._logger as synapse_predict_logger
+
+ DATA_FILE = "abfss://xyz@xyz.dfs.core.windows.net/xyz.csv"
+ AML_MODEL_URI_SKLEARN = "aml://xyz"
+ RETURN_TYPES = "INT"
+ RUNTIME = "mlflow"
``` 1. Enable SynapseML PREDICT functionality in spark session. ```python
- spark.conf.set("spark.synapse.ml.predict.enabled","true")
+ spark.conf.set("spark.synapse.ml.predict.enabled","true")
``` 1. Bind model in spark session. ```python
- # If using ADLS uploaded model
-
- model = pcontext.bind_model(
- return_types=RETURN_TYPES,
- runtime=RUNTIME,
- model_alias="sklearn_linear_regression",
- model_uri=ADLS_MODEL_URI_SKLEARN,
- ).register()
+ # If using ADLS uploaded model
+
+ model = pcontext.bind_model(
+ return_types=RETURN_TYPES,
+ runtime=RUNTIME,
+ model_alias="sklearn_linear_regression",
+ model_uri=ADLS_MODEL_URI_SKLEARN,
+ ).register()
``` ```python
- # If using AML registered model
-
- model = pcontext.bind_model(
- return_types=RETURN_TYPES,
- runtime=RUNTIME,
- model_alias="sklearn_linear_regression",
- model_uri=AML_MODEL_URI_SKLEARN,
- aml_workspace=ws
- ).register()
+ # If using AML registered model
+
+ model = pcontext.bind_model(
+ return_types=RETURN_TYPES,
+ runtime=RUNTIME,
+ model_alias="sklearn_linear_regression",
+ model_uri=AML_MODEL_URI_SKLEARN,
+ aml_workspace=ws
+ ).register()
``` 1. Load test data from ADLS. ```python
- # Load data from ADLS
-
- df = spark.read \
- .format("csv") \
- .option("header", "true") \
- .csv(DATA_FILE,
- inferSchema=True)
- df = df.select(df.columns[:9])
- df.createOrReplaceTempView('data')
- df.show(10)
+ # Load data from ADLS
+
+ df = spark.read \
+ .format("csv") \
+ .option("header", "true") \
+ .csv(DATA_FILE,
+ inferSchema=True)
+ df = df.select(df.columns[:9])
+ df.createOrReplaceTempView('data')
+ df.show(10)
``` 1. Call PREDICT to generate the score. ```python
- # Call PREDICT
-
- predictions = spark.sql(
- """
- SELECT PREDICT('sklearn_linear_regression', *) AS predict FROM data
- """
- ).show()
+ # Call PREDICT
+
+ predictions = spark.sql(
+ """
+ SELECT PREDICT('sklearn_linear_regression', *) AS predict FROM data
+ """
+ ).show()
```
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
Last updated 02/15/2022
-# Quickstart: Transform data using Apache Spark job definition.
+# Quickstart: Transform data using Apache Spark job definition
In this quickstart, you'll use Azure Synapse Analytics to create a pipeline using Apache Spark job definition.
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse
At a high-level, the connector provides the following capabilities: * Read from Azure Synapse Dedicated SQL Pool:
- * Read large data sets from Synapse Dedicated SQL Pool Tables (Internal and External) and Views.
+ * Read large data sets from Synapse Dedicated SQL Pool Tables (Internal and External) and views.
* Comprehensive predicate push down support, where filters on DataFrame get mapped to corresponding SQL predicate push down. * Support for column pruning.
+ * Support for query push down.
* Write to Azure Synapse Dedicated SQL Pool: * Ingest large volume data to Internal and External table types. * Supports following DataFrame save mode preferences:
This section presents reference code templates to describe how to use and invoke
##### [Scala](#tab/scala) ```Scala
-synapsesql(tableName:String) => org.apache.spark.sql.DataFrame
+synapsesql(tableName:String="") => org.apache.spark.sql.DataFrame
``` ##### [Python](#tab/python) ```python
-synapsesql(table_name: str) -> org.apache.spark.sql.DataFrame
+synapsesql(table_name: str="") -> org.apache.spark.sql.DataFrame
```
-#### Read using Azure AD based authentication
+#### Read from a table using Azure AD based authentication
##### [Scala](#tab/scala1)
dfToReadFromTable.show()
```
-#### Read using basic authentication
+#### Read from a query using Azure AD based authentication
+> [!Note]
+> Restrictions while reading from a query:
+> * Table name and query cannot be specified at the same time.
+> * Only select queries are allowed. DDL and DML statements are not allowed.
+> * The select and filter options on the DataFrame are not pushed down to the SQL dedicated pool when a query is specified.
+> * Read from a query is only available in Spark 3.1 and 3.2. It is not available in Spark 2.4.
##### [Scala](#tab/scala2)
import org.apache.spark.sql.DataFrame
import com.microsoft.spark.sqlanalytics.utils.Constants import org.apache.spark.sql.SqlAnalyticsConnector._ +
+// Read from a query
+// Query can be provided either as an argument to synapsesql or as a Constant - Constants.QUERY
+val dfToReadFromQueryAsOption:DataFrame = spark.read.
+ // Name of the SQL Dedicated Pool or database where to run the query
+ // Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ option(Constants.DATABASE, "<database_name>").
+ //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+ //Defaults to storage path defined in the runtime configurations
+ option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ //query from which data will be read
+ .option(Constants.QUERY, "select <column_name>, count(*) as cnt from <schema_name>.<table_name> group by <column_name>")
+    .synapsesql()
+
+val dfToReadFromQueryAsArgument:DataFrame = spark.read.
+ // Name of the SQL Dedicated Pool or database where to run the query
+ // Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+    option(Constants.DATABASE, "<database_name>").
+ //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+ //Defaults to storage path defined in the runtime configurations
+ option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ //query from which data will be read
+ .synapsesql("select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>")
++
+//Show contents of the dataframe
+dfToReadFromQueryAsOption.show()
+dfToReadFromQueryAsArgument.show()
+```
+
+##### [Python](#tab/python2)
+
+```python
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+from pyspark.sql.functions import col
+
+# Name of the SQL Dedicated Pool or database where to run the query
+# Database can be specified as a Spark Config or as a Constant - Constants.DATABASE
+spark.conf.set("spark.sqlanalyticsconnector.dw.database", "<database_name>")
+
+# Read from a query
+# Query can be provided either as an argument to synapsesql or as a Constant - Constants.QUERY
+dfToReadFromQueryAsOption = (spark.read
+ # Name of the SQL Dedicated Pool or database where to run the query
+ # Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ .option(Constants.DATABASE, "<database_name>")
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Defaults to storage path defined in the runtime configurations
+ .option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ # query from which data will be read
+ .option(Constants.QUERY, "select <column_name>, count(*) as cnt from <schema_name>.<table_name> group by <column_name>")
+ .synapsesql()
+)
+
+dfToReadFromQueryAsArgument = (spark.read
+ # Name of the SQL Dedicated Pool or database where to run the query
+ # Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ .option(Constants.DATABASE, "<database_name>")
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Defaults to storage path defined in the runtime configurations
+ .option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>")
+ # query from which data will be read
+ .synapsesql("select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>")
+)
+
+# Show contents of the dataframe
+dfToReadFromQueryAsOption.show()
+dfToReadFromQueryAsArgument.show()
+```
++
+#### Read from a table using basic authentication
+
+##### [Scala](#tab/scala3)
+
+```Scala
+//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
+//Azure Active Directory based authentication approach is preferred here.
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+ //Read from existing internal table val dfToReadFromTable:DataFrame = spark.read. //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
val dfToReadFromTable:DataFrame = spark.read.
//Set user's password to the database option(Constants.PASSWORD, "<user_password>"). //Set name of the data source definition that is defined with database scoped credentials.
- //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ //Data extracted from the table will be staged to the storage path defined on the data source's location setting.
option(Constants.DATA_SOURCE, "<data_source_name>"). //Three-part table name from where data will be read. synapsesql("<database_name>.<schema_name>.<table_name>"). //Column-pruning i.e., query select column values.
- select("<some_column_1>", "<some_column_5>", "<some_column_n>").
+ select("<some_column_1>", "<some_column_5>", "<some_column_n>").
//Push-down filter criteria that gets translated to SQL Push-down Predicates. filter(col("Title").startsWith("E")). //Fetch a sample of 10 records limit(10)
+
//Show contents of the dataframe dfToReadFromTable.show() ```
-##### [Python](#tab/python2)
+##### [Python](#tab/python3)
```python # Add required imports
dfToReadFromTable = (spark.read
.option(Constants.PASSWORD, "<user_password>") # Set name of the data source definition that is defined with database scoped credentials. # https://learn.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
- # Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ # Data extracted from the table will be staged to the storage path defined on the data source's location setting.
.option(Constants.DATA_SOURCE, "<data_source_name>") # Three-part table name from where data will be read. .synapsesql("<database_name>.<schema_name>.<table_name>")
dfToReadFromTable = (spark.read
# Push-down filter criteria that gets translated to SQL Push-down Predicates. .filter(col("Title").contains("E")) # Fetch a sample of 10 records
- .limit(10))
+ .limit(10)
+ )
# Show contents of the dataframe dfToReadFromTable.show()
dfToReadFromTable.show()
```
+#### Read from a query using basic authentication
+
+##### [Scala](#tab/scala4)
+
+```Scala
+//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
+//Azure Active Directory based authentication approach is preferred here.
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+// Name of the SQL Dedicated Pool or database where to run the query
+// Database can be specified as a Spark Config or as a Constant - Constants.DATABASE
+spark.conf.set("spark.sqlanalyticsconnector.dw.database", "<database_name>")
+
+// Read from a query
+// Query can be provided either as an argument to synapsesql or as a Constant - Constants.QUERY
+val dfToReadFromQueryAsOption:DataFrame = spark.read.
+ //Name of the SQL Dedicated Pool or database where to run the query
+ //Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ option(Constants.DATABASE, "<database_name>").
+ //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+ //Set database user name
+ option(Constants.USER, "<user_name>").
+ //Set user's password to the database
+ option(Constants.PASSWORD, "<user_password>").
+ //Set name of the data source definition that is defined with database scoped credentials.
+ //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ option(Constants.DATA_SOURCE, "<data_source_name>").
+ //Query where data will be read.
+ option(Constants.QUERY, "select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>" ).
+ synapsesql()
+
+val dfToReadFromQueryAsArgument:DataFrame = spark.read.
+ //Name of the SQL Dedicated Pool or database where to run the query
+ //Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ option(Constants.DATABASE, "<database_name>").
+ //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+ //Set database user name
+ option(Constants.USER, "<user_name>").
+ //Set user's password to the database
+ option(Constants.PASSWORD, "<user_password>").
+ //Set name of the data source definition that is defined with database scoped credentials.
+ //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ option(Constants.DATA_SOURCE, "<data_source_name>").
+ //Query where data will be read.
+ synapsesql("select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>")
+
+
+//Show contents of the dataframe
+dfToReadFromQueryAsOption.show()
+dfToReadFromQueryAsArgument.show()
+```
+
+##### [Python](#tab/python4)
+
+```python
+# Add required imports
+import com.microsoft.spark.sqlanalytics
+from com.microsoft.spark.sqlanalytics.Constants import Constants
+from pyspark.sql.functions import col
+
+# Name of the SQL Dedicated Pool or database where to run the query
+# Database can be specified as a Spark Config or as a Constant - Constants.DATABASE
+spark.conf.set("spark.sqlanalyticsconnector.dw.database", "<database_name>")
+
+# Read from a query
+# Query can be provided either as an argument to synapsesql or as a Constant - Constants.QUERY
+dfToReadFromQueryAsOption = (spark.read
+ # Name of the SQL Dedicated Pool or database where to run the query
+ # Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ .option(Constants.DATABASE, "<database_name>")
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set database user name
+ .option(Constants.USER, "<user_name>")
+ # Set user's password to the database
+ .option(Constants.PASSWORD, "<user_password>")
+ # Set name of the data source definition that is defined with database scoped credentials.
+ # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ .option(Constants.DATA_SOURCE, "<data_source_name>")
+ # Query from where data will be read.
+ .option(Constants.QUERY, "select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>")
+ .synapsesql()
+ )
+
+dfToReadFromQueryAsArgument = (spark.read
+ # Name of the SQL Dedicated Pool or database where to run the query
+ # Database can be specified as a Spark Config - spark.sqlanalyticsconnector.dw.database or as a Constant - Constants.DATABASE
+ .option(Constants.DATABASE, "<database_name>")
+ # If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
+ # to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
+ # Set database user name
+ .option(Constants.USER, "<user_name>")
+ # Set user's password to the database
+ .option(Constants.PASSWORD, "<user_password>")
+ # Set name of the data source definition that is defined with database scoped credentials.
+ # https://docs.microsoft.com/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-ver15&tabs=dedicated#h-create-external-data-source-to-access-data-in-azure-storage-using-the-abfs-interface
+ # Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ .option(Constants.DATA_SOURCE, "<data_source_name>")
+ .synapsesql("select <column_name>, count(*) as counts from <schema_name>.<table_name> group by <column_name>")
+ )
+
+# Show contents of the dataframe
+dfToReadFromQueryAsOption.show()
+dfToReadFromQueryAsArgument.show()
+
+```
+++

### Write to Azure Synapse Dedicated SQL Pool

#### Write Request - `synapsesql` method signature
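The body of this section isn't included in the excerpt above. Purely as a hedged sketch - assuming the connector's write path mirrors the read path shown earlier and accepts a three-part target table name, which this excerpt doesn't confirm - a Python write call could look roughly like the following; every placeholder value is hypothetical.

```python
# Illustrative sketch only, not the article's own example.
# Assumes the DataFrameWriter is patched with a synapsesql method analogous to the reader above.
import com.microsoft.spark.sqlanalytics
from com.microsoft.spark.sqlanalytics.Constants import Constants

(dfToReadFromQueryAsOption.write
    # Synapse Dedicated SQL endpoint and SQL basic-auth credentials, as in the read examples
    .option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net")
    .option(Constants.USER, "<user_name>")
    .option(Constants.PASSWORD, "<user_password>")
    # Data source defined with database scoped credentials, used for staging the data
    .option(Constants.DATA_SOURCE, "<data_source_name>")
    # Replace or append to the target table
    .mode("overwrite")
    # Hypothetical three-part name of the target table in the dedicated SQL pool
    .synapsesql("<database_name>.<schema_name>.<table_name>"))
```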
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
Asia | South East Asia
Australia | Australia East
Canada | Canada Central
Europe | North Europe </br> West Europe
+France | France Central
Japan | Japan East
-United Kingdom | UK South
-United States | East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br>
+Korea | Korea Central
+United Kingdom | UK South </br> UK West
+United States | East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
host pool.
8. When you're done, select **Add**.
-## Publish MSIX apps to an app group
+## Publish MSIX apps to an application group
-Next, you'll need to publish the apps into the package. You'll need to do this for both desktop and remote app application groups.
-
-If you already have an MSIX image, skip ahead to [Publish MSIX apps to an app group](#publish-msix-apps-to-an-app-group). If you want to test legacy applications, follow the instructions in [Create an MSIX package from a desktop installer on a VM](/windows/msix/packaging-tool/create-app-package-msi-vm/) to convert the legacy application to an MSIX package.
+Next, you'll need to publish the apps to an application group. You'll need to do this for both desktop and remote app application groups.
To publish the apps:
To publish the apps:
2. Select the application group you want to publish the apps to.

   >[!NOTE]
- >MSIX applications can be delivered with MSIX app attach to both remote app and desktop app groups
+   >MSIX applications can be delivered with MSIX app attach to both remote app and desktop app groups. When an MSIX package is assigned to a remote app group and a desktop app group from the same host pool, the desktop app group will be displayed in the feed.
3. Once you're in the app group, select the **Applications** tab. The **Applications** grid will display all existing apps within the app group.
To publish the apps:
   - **Icon path**
   - **Icon index**
- - **Show in web feed**
6. When you're done, select **Save**.
->[!NOTE]
->When a user is assigned to remote app group and desktop app group from the same host pool the desktop app group will be displayed in the feed.
-
## Assign a user to an app group

After assigning MSIX apps to an app group, you'll need to grant users access to them. You can assign access by adding users or user groups to an app group with published MSIX applications. Follow the instructions in [Manage app groups with the Azure portal](manage-app-groups.md) to assign your users to an app group.
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
To assign session host VMs permissions for the storage account and file share:
6. Join the storage account to AD DS by following the instructions in [Part one: enable AD DS authentication for your Azure file shares](../storage/files/storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module).
-7. Assign the synced AD DS group to Azure AD, and assign the storage account the Storage File Data SMB Share Reader role.
+7. Assign the synced AD DS group the Storage File Data SMB Share Reader role on the storage account.
8. Mount the file share to any session host by following the instructions in [Part two: assign share-level permissions to an identity](../storage/files/storage-files-identity-ad-ds-assign-permissions.md).
-9. Grant NTFS permissions on the file share to the AD DS group.
-
-10. Set up NTFS permissions for the user accounts. You'll need an organizational unit (OU) sourced from the AD DS that the accounts in the VM belong to.
-
Ensure your session host VMs have **Modify** NTFS permissions. You must have an OU container that's sourced from Active Directory Domain Services (AD DS), and your users must be members of that OU to use these permissions.
+9. Grant **Modify** NTFS permissions on the file share to the AD DS group.
## Next steps

Once you're finished, here are some other resources you might find helpful:
+- [Add and publish MSIX app attach packages with the Azure portal](app-attach-azure-portal.md)
- Ask our community questions about this feature at the [Azure Virtual Desktop TechCommunity](https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop).
- You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app).
- [MSIX app attach glossary](app-attach-glossary.md)
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
To use Active Directory accounts for the share permissions of your file share, y
1. In the box for **Azure Active Directory Domain Services**, select **Set up**.
-1. Tick the box to **Enable Azure Active Directory Domain Services (Azure AD DS) for this file share**, then select **Save**. An Organizational Unit (OU) called **AzureFilesConfig** will be created at the root of your domain and a computer account named the same as the storage account will be created in that OU.
+1. Tick the box to **Enable Azure Active Directory Domain Services (Azure AD DS) for this file share**, then select **Save**. An Organizational Unit (OU) called **AzureFilesConfig** will be created at the root of your domain, and a user account with the same name as the storage account will be created in that OU. This account is used as the Azure Files service account.
virtual-desktop Host Pool Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/host-pool-load-balancing.md
Azure Virtual Desktop supports two load-balancing algorithms. Each algorithm det
The following load-balancing algorithms are available in Azure Virtual Desktop:

-- Breadth-first load balancing allows you to evenly distribute user sessions across the session hosts in a host pool.
-- Depth-first load balancing allows you to saturate a session host with user sessions in a host pool. Once the first session host reaches its session limit threshold, the load balancer directs any new user connections to the next session host in the host pool until it reaches its limit, and so on.
+- Breadth-first load balancing allows you to evenly distribute user sessions across the session hosts in a host pool. You don't have to specify a maximum session limit for the number of sessions.
+- Depth-first load balancing allows you to saturate a session host with user sessions in a host pool. You have to specify a maximum session limit for the number of sessions. Once the first session host reaches its session limit threshold, the load balancer directs any new user connections to the next session host in the host pool until it reaches its limit, and so on.
Each host pool can only configure one type of load-balancing specific to it. However, both load-balancing algorithms share the following behaviors no matter which host pool they're in:
The depth-first load-balancing algorithm allows you to saturate one session host
The depth-first algorithm first queries session hosts that allow new connections and haven't gone over their maximum session limit. The algorithm then selects the session host with highest number of sessions. If there's a tie, the algorithm selects the first session host in the query.
->[!IMPORTANT]
->The depth-first load balancing algorithm distributes sessions to session hosts based on the maximum session host limit. This parameter is required when you use the depth-first load balancing algorithm. For the best possible user experience, make sure to change the maximum session host limit parameter to a number that best suits your environment.
+> [!IMPORTANT]
+> The maximum session limit parameter is required when you use the depth-first load balancing algorithm. For the best possible user experience, make sure to change the maximum session host limit parameter to a number that best suits your environment.
+>
+> Once all session hosts have reached the maximum session limit, you will need to increase the limit or deploy more session hosts.
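
To make the two algorithms concrete, here is a small illustrative Python sketch of the selection logic described above. It's a simplified model for intuition only, not the Azure Virtual Desktop implementation; the class and function names are invented for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SessionHost:
    name: str
    sessions: int
    allow_new_sessions: bool = True

def pick_breadth_first(hosts: List[SessionHost]) -> Optional[SessionHost]:
    # Breadth-first: spread users out by picking the host with the fewest sessions.
    candidates = [h for h in hosts if h.allow_new_sessions]
    return min(candidates, key=lambda h: h.sessions) if candidates else None

def pick_depth_first(hosts: List[SessionHost], max_session_limit: int) -> Optional[SessionHost]:
    # Depth-first: saturate hosts by picking the host with the most sessions
    # that hasn't yet reached the (required) maximum session limit.
    candidates = [h for h in hosts if h.allow_new_sessions and h.sessions < max_session_limit]
    return max(candidates, key=lambda h: h.sessions) if candidates else None

hosts = [SessionHost("host-1", 5), SessionHost("host-2", 2), SessionHost("host-3", 8)]
print(pick_breadth_first(hosts).name)      # host-2: fewest sessions
print(pick_depth_first(hosts, 10).name)    # host-3: most sessions, still under the limit
print(pick_depth_first(hosts, 8).name)     # host-1: host-3 has already reached the limit of 8
```

When every session host has reached the limit, the depth-first sketch returns `None`, which corresponds to the note above: increase the maximum session limit or deploy more session hosts.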
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
The following table summarizes identity scenarios that Azure Virtual Desktop cur
| Identity scenario | Session hosts | User accounts |
|--|--|--|
-| Azure AD + AD DS | Joined to AD DS | In AD DS and Azure AD, synchronized |
+| Azure AD + AD DS | Joined to AD DS | In Azure AD and AD DS, synchronized |
+| Azure AD + AD DS | Joined to Azure AD | In Azure AD and AD DS, synchronized |
| Azure AD + Azure AD DS | Joined to Azure AD DS | In Azure AD and Azure AD DS, synchronized |
| Azure AD + Azure AD DS + AD DS | Joined to Azure AD DS | In Azure AD and AD DS, synchronized |
+| Azure AD + Azure AD DS | Joined to Azure AD | In Azure AD and Azure AD DS, synchronized |
| Azure AD only | Joined to Azure AD | In Azure AD |

> [!NOTE]
-> If you're planning on using Azure AD only with [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial), you will need to [store profiles on Azure Files](create-profile-container-azure-ad.md), which is currently in public preview. In this scenario, user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md). You must create these accounts in AD DS and synchronize them to Azure AD. The service doesn't currently support environments where users are managed with Azure AD and synchronized to Azure AD DS.
+> If you're planning on using Azure AD only with [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial), you will need to [store profiles on Azure Files](create-profile-container-azure-ad.md). In this scenario, user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and [Azure AD Connect](../active-directory/hybrid/whatis-azure-ad-connect.md). You must create these accounts in AD DS and synchronize them to Azure AD. The service doesn't currently support environments where users are managed with Azure AD and synchronized to Azure AD DS.
> [!IMPORTANT]
> The user account must exist in the Azure AD tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts.
virtual-machines Dedicated Host Compute Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-compute-optimized-skus.md
Previously updated : 12/01/2021 Last updated : 01/23/2023 # Compute Optimized Azure Dedicated Host SKUs
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-general-purpose-skus.md
Previously updated : 12/01/2021 Last updated : 01/23/2023 # General Purpose Azure Dedicated Host SKUs
virtual-machines Dedicated Host Gpu Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-gpu-optimized-skus.md
Previously updated : 10/01/2021 Last updated : 01/23/2023 # GPU Optimized Azure Dedicated Host SKUs
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
Previously updated : 12/01/2021 Last updated : 01/23/2023 # Memory Optimized Azure Dedicated Host SKUs
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-migration-guide.md
Previously updated : 3/15/2021 Last updated : 01/23/2023 # Azure Dedicated Host SKU Retirement Migration Guide
virtual-machines Dedicated Host Storage Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-storage-optimized-skus.md
Previously updated : 12/01/2021 Last updated : 01/23/2023 # Storage Optimized Azure Dedicated Host SKUs
virtual-machines Maintenance Configurations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-cli.md
az maintenance configuration create \
### Guest VMs
-This example creates a maintenance configuration named *myConfig* scoped to guest machines (VMs and Arc enabled servers) with a scheduled window of 2 hours every 20 days.
+This example creates a maintenance configuration named *myConfig* scoped to guest machines (VMs and Arc enabled servers) with a scheduled window of 2 hours every 20 days. To learn more about maintenance configurations on guest VMs, see [Guest](maintenance-configurations.md#guest).
```azurecli-interactive az maintenance configuration create \
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
Configure and prepare your OS by doing the following steps:
> [!TIP]
> Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-3. **[A]** SUSE delivers special resource agents for SAP HANA and by default agents for SAP HANA ScaleUp are installed. Uninstall the packages for ScaleUp, if installed and install the packages for scenario SAP HANAScaleOut. The step needs to be performed on all cluster VMs, including the majority maker.
+3. **[A]** SUSE delivers special resource agents for SAP HANA, and by default the agents for SAP HANA scale-up are installed. Uninstall the scale-up packages, if they're installed, and install the packages for the SAP HANA scale-out scenario. This step needs to be performed on all cluster VMs, including the majority maker.
```bash
- # Uninstall ScaleUp packages and patterns
- zypper remove patterns-sap-hana
- zypper remove SAPHanaSR
- zypper remove SAPHanaSR-doc
- zypper remove yast2-sap-ha
- # Install the ScaleOut packages and patterns
- zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc
- zypper in -t pattern ha_sles
+ # Uninstall scale-up packages and patterns
+ sudo zypper remove patterns-sap-hana
+ sudo zypper remove SAPHanaSR SAPHanaSR-doc yast2-sap-ha
+ # Install the scale-out packages and patterns
+ sudo zypper in SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc
+ sudo zypper in -t pattern ha_sles
   ```

4. **[AH]** Prepare the VMs - apply the recommended settings per SAP note [2205917] for SUSE Linux Enterprise Server for SAP Applications.
Create a dummy file system cluster resource, which will monitor and report failu
`on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
-## Create SAP HANA cluster resources
+## Implement HANA hooks SAPHanaSR and susChkSrv
-1. **[1,2]** Install the HANA "system replication hook". The hook needs to be installed on one HANA DB node on each system replication site.
+This important step optimizes the integration with the cluster and improves detection of when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook. For HANA 2.0 SP5 and above, implementing both the SAPHanaSR and susChkSrv hooks is recommended.
- 1. Prepare the hook as `root`
- ```bash
- mkdir -p /hana/shared/myHooks
- cp /usr/share/SAPHanaSR-ScaleOut/SAPHanaSR.py /hana/shared/myHooks
- chown -R hn1adm:sapsys /hana/shared/myHooks
- ```
+susChkSrv extends the functionality of the main SAPHanaSR HA provider. It acts when the HANA process hdbindexserver crashes. If a single process crashes, HANA typically tries to restart it. Restarting the indexserver process can take a long time, during which the HANA database is not responsive.
- 2. Stop HANA on both system replication sites. Execute as <sid\>adm:
- ```bash
- sapcontrol -nr 03 -function StopSystem
- ```
+With susChkSrv implemented, an immediate and configurable action is executed instead of waiting for the hdbindexserver process to restart on the same node. In HANA scale-out, susChkSrv acts independently for every HANA VM. The configured action will kill HANA or fence the affected VM, which triggers a failover by SAPHanaSR within the configured timeout period.
+
+> [!NOTE]
+> The susChkSrv Python hook requires SAP HANA 2.0 SP5, and SAPHanaSR-ScaleOut version 0.184.1 or higher must be installed.
+
+1. **[1,2]** Stop HANA on both system replication sites. Execute as <sid\>adm:
+
+```bash
+sapcontrol -nr 03 -function StopSystem
+```
+
+2. **[1,2]** Adjust `global.ini` on each cluster site. If the requirements for the susChkSrv hook are not met, remove the entire `[ha_dr_provider_suschksrv]` block from the section below.
+You can adjust the behavior of susChkSrv with the `action_on_lost` parameter. Valid values are [ ignore | stop | kill | fence ].
- 3. Adjust `global.ini`
   ```bash
   # add to global.ini
   [ha_dr_provider_SAPHanaSR]
   provider = SAPHanaSR
- path = /hana/shared/myHooks
+ path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 1
+ [ha_dr_provider_suschksrv]
+ provider = susChkSrv
+ path = /usr/share/SAPHanaSR-ScaleOut
+ execution_order = 3
+ action_on_lost = kill
+
   [trace]
   ha_dr_saphanasr = info
   ```
-2. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
+Pointing the configuration to the standard location /usr/share/SAPHanaSR-ScaleOut has the benefit that the Python hook code is automatically updated through OS or package updates, and HANA picks up the updated hook at the next restart. With an optional path of your own, such as /hana/shared/myHooks, you can decouple OS updates from the hook version in use.
+
+3. **[AH]** The cluster requires sudoers configuration on the cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root` and adapt the values of hn1/HN1 to the correct SID.
+
   ```bash
   cat << EOF > /etc/sudoers.d/20-saphana
   # SAPHanaSR-ScaleOut needs for srHook
   Cmnd_Alias SOK = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SOK -t crm_config -s SAPHanaSR
   Cmnd_Alias SFAIL = /usr/sbin/crm_attribute -n hana_hn1_glob_srHook -v SFAIL -t crm_config -s SAPHanaSR
   hn1adm ALL=(ALL) NOPASSWD: SOK, SFAIL
+ hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=HN1 --case=fenceMe
   EOF
   ```
-3. **[1,2]** Start SAP HANA on both replication sites. Execute as <sid\>adm.
+4. **[1,2]** Start SAP HANA on both replication sites. Execute as <sid\>adm.
   ```bash
   sapcontrol -nr 03 -function StartSystem
   ```
-4. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
+5. **[1]** Verify the hook installation. Execute as <sid\>adm on the active HANA system replication site.
   ```bash
   cdtrace
- awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
- { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
-
- # 2021-03-31 01:02:42.695244 ha_dr_SAPHanaSR SFAIL
- # 2021-03-31 01:02:58.966856 ha_dr_SAPHanaSR SFAIL
- # 2021-03-31 01:03:04.453100 ha_dr_SAPHanaSR SFAIL
- # 2021-03-31 01:03:04.619768 ha_dr_SAPHanaSR SFAIL
- # 2021-03-31 01:03:04.743444 ha_dr_SAPHanaSR SFAIL
- # 2021-03-31 01:04:15.062181 ha_dr_SAPHanaSR SOK
+ awk '/ha_dr_SAPHanaSR.*crm_attribute/ \
+ { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
+ # Example output
+ # 2021-03-31 01:02:42.695244 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:02:58.966856 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.453100 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.619768 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:03:04.743444 ha_dr_SAPHanaSR SFAIL
+ # 2021-03-31 01:04:15.062181 ha_dr_SAPHanaSR SOK
+ ```
+   Verify the susChkSrv hook installation. Execute as <sid\>adm on all HANA VMs.
+ ```bash
+ cdtrace
+ egrep '(LOST:|STOP:|START:|DOWN:|init|load|fail)' nameserver_suschksrv.trc
+ # Example output
+ # 2023-01-19 08:23:10.581529 [1674116590-10005] susChkSrv.init() version 0.7.7, parameter info: action_on_lost=fence stop_timeout=20 kill_signal=9
+ # 2023-01-19 08:23:31.553566 [1674116611-14022] START: indexserver event looks like graceful tenant start
+ # 2023-01-19 08:23:52.834813 [1674116632-15235] START: indexserver event looks like graceful tenant start (indexserver started)
```
-5. **[1]** Create the HANA cluster resources. Execute the following commands as `root`.
+## Create SAP HANA cluster resources
+
+1. **[1]** Create the HANA cluster resources. Execute the following commands as `root`.
   1. Make sure the cluster is already in maintenance mode.
   2. Next, create the HANA Topology resource.
Create a dummy file system cluster resource, which will monitor and report failu
   sudo crm configure location loc_SAPHanaTop_not_on_majority_maker cln_SAPHanaTopology_HN1_HDB03 -inf: hana-s-mm
   ```
-6. **[1]** Configure additional cluster properties
+2. **[1]** Configure additional cluster properties
   ```bash
   sudo crm configure rsc_defaults resource-stickiness=1000
   sudo crm configure rsc_defaults migration-threshold=50
   ```
-7. **[1]** verify the communication between the HOOK and the cluster
+3. **[1]** Verify the communication between the hook and the cluster.
   ```bash
   crm_attribute -G -n hana_hn1_glob_srHook
   # Expected result
Create a dummy file system cluster resource, which will monitor and report failu
   # scope=crm_config name=hana_hn1_glob_srHook value=SOK
   ```
-8. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started.
+4. **[1]** Place the cluster out of maintenance mode. Make sure that the cluster status is ok and that all of the resources are started.
   ```bash
   # Clean up any failed resources - the following command is an example
   crm resource cleanup rsc_SAPHana_HN1_HDB03