Updates from: 07/12/2023 01:13:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Get the custom policy starter packs from GitHub, then update the XML files in th
</InputClaims> <OutputClaims> <!-- Claims parsed from your REST API -->
- <OutputClaim ClaimTypeReferenceId="last_name" PartnerClaimType="givenName" />
- <OutputClaim ClaimTypeReferenceId="first_name" PartnerClaimType="surname" />
+ <OutputClaim ClaimTypeReferenceId="last_name" />
+ <OutputClaim ClaimTypeReferenceId="first_name" />
<OutputClaim ClaimTypeReferenceId="previous_name" /> <OutputClaim ClaimTypeReferenceId="year" /> <OutputClaim ClaimTypeReferenceId="month" />
active-directory Inbound Provisioning Api Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-concepts.md
+
+ Title: API-driven inbound provisioning concepts
+description: An overview of API-driven inbound provisioning.
+ Last updated: 06/22/2023
+# API-driven inbound provisioning concepts (Public preview)
+
+This article provides a conceptual overview of Azure AD API-driven inbound user provisioning.
+
+> [!IMPORTANT]
+> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Introduction
+
+Enterprises today have a variety of authoritative systems of record. To establish an end-to-end identity lifecycle, strengthen security posture, and stay compliant with regulations, identity data in Azure Active Directory must be kept in sync with the workforce data managed in these systems of record. The *system of record* could be an HR app, a payroll app, a spreadsheet, or SQL tables in a database hosted either on-premises or in the cloud.
+
+With API-driven inbound provisioning, the Azure AD provisioning service now supports integration with *any* system of record. Customers and partners can use *any* automation tool of their choice to retrieve workforce data from the system of record and ingest it into Azure AD. The IT admin has full control on how the data is processed and transformed with attribute mappings. Once the workforce data is available in Azure AD, the IT admin can configure appropriate joiner-mover-leaver business processes using [Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md).
+
+## Supported scenarios
+
+Several inbound user provisioning scenarios are enabled using API-driven inbound provisioning. This diagram demonstrates the most common scenarios.
++
+### Scenario 1: Enable IT teams to import HR data extracts using any automation tool
+Flat files, CSV files, and SQL staging tables are commonly used in enterprise integration scenarios. Employee, contractor, and vendor information is periodically exported into one of these formats, and an automation tool is used to sync this data with enterprise identity directories. With API-driven inbound provisioning, IT teams can use any automation tool of their choice (for example, PowerShell scripts or Azure Logic Apps) to modernize and simplify this integration.
+
+### Scenario 2: Enable ISVs to build direct integration with Azure AD
+With API-driven inbound provisioning, HR ISVs can ship native synchronization experiences so that changes in the HR system automatically flow into Azure AD and connected on-premises Active Directory domains. For example, an HR app or student information systems app can send data to Azure AD as soon as a transaction is complete or as an end-of-day bulk update.
+
+### Scenario 3: Enable system integrators to build more connectors to systems of record
+Partners can build custom HR connectors to meet different integration requirements around data flow from systems of record to Azure AD.
+
+In all the above scenarios, the integration is greatly simplified because the Azure AD provisioning service takes over the responsibility of comparing identity profiles, restricting the data sync to the scoping logic configured by the IT admin, and executing the rule-based attribute flow and transformation managed in the Microsoft Entra admin portal.
+
+## End-to-end flow
+
+### Steps of the workflow
+
+1. IT Admin configures an API-driven inbound user provisioning app from the Microsoft Entra Enterprise App gallery.
+2. IT Admin provides endpoint access details to the API developer/partner/system integrator.
+3. The API developer/partner/system integrator builds an API client to send authoritative identity data to Azure AD.
+4. The API client reads identity data from the authoritative source.
+5. The API client sends a POST request to provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint associated with the provisioning app.
+ >[!NOTE]
+    > The API client doesn't need to perform any comparisons between the source attributes and the target attribute values to determine which operation (create/update/enable/disable) to invoke. The provisioning service handles this automatically. The API client simply uploads the identity data read from the source system, packaged as a bulk request using SCIM schema constructs.
+1. If successful, an ```HTTP 202 Accepted``` status is returned.
+1. The Azure AD Provisioning Service processes the data received, applies the attribute mapping rules and completes user provisioning.
+1. Depending on the provisioning app configured, the user is provisioned either into on-premises Active Directory (for hybrid users) or Azure AD (for cloud-only users).
+1. The API Client then queries the provisioning logs API endpoint for the status of each record sent.
+1. If the processing of any record fails, the API client can check the error details and include records corresponding to the failed operations in the next bulk request (step 5).
+1. At any time, the IT Admin can check the status of the provisioning job and view events in the provisioning logs.
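
The bulk request uploaded in step 5 uses SCIM schema constructs. The following Python sketch shows the shape of such a payload; all user values are illustrative placeholders, and the structure follows the SCIM `BulkRequest` message format used by the /bulkUpload examples:

```python
# Minimal sketch of a SCIM bulk request payload for the /bulkUpload endpoint.
# All user values below are illustrative placeholders.
bulk_request = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
    "failOnErrors": None,
    "Operations": [
        {
            "method": "POST",
            "bulkId": "701984",
            "path": "/Users",
            "data": {
                "schemas": [
                    "urn:ietf:params:scim:schemas:core:2.0:User",
                    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
                ],
                # externalId is the default matching identifier; the
                # provisioning service maps it to employeeId in Azure AD.
                "externalId": "701984",
                "userName": "bjensen@example.com",
                "name": {"givenName": "Barbara", "familyName": "Jensen"},
                "active": True,
                "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
                    "department": "Sales",
                },
            },
        }
    ],
}
```

Each entry in `Operations` represents one user record; the provisioning service, not the client, decides whether it results in a create or an update.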
+
+### Key features of API-driven inbound user provisioning
+
+- Delivered as a provisioning app that exposes an *asynchronous* Microsoft Graph provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API endpoint accessed using a valid OAuth token.
+- Tenant admins must grant API clients interacting with this provisioning app the Graph permission `SynchronizationData-User.Upload`.
+- The Graph API endpoint accepts valid bulk request payloads using SCIM schema constructs.
+- With SCIM schema extensions, you can send any attribute in the bulk request payload.
+- The rate limit for the inbound provisioning API is 40 bulk upload requests per second. Each bulk request can contain a maximum of 50 user records, thereby supporting an upload rate of 2000 records per second.
+- Each API endpoint is associated with a specific provisioning app in Azure AD. You can integrate multiple data sources by creating a provisioning app for each data source.
+- Incoming bulk request payloads are processed in near real-time.
+- Admins can check provisioning progress by viewing the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md).
+- API clients can track progress by querying [provisioning logs API](/graph/api/resources/provisioningobjectsummary).
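
Given the 50-record cap per bulk request, an API client has to chunk larger source data sets before uploading. A minimal Python sketch (the function name is illustrative):

```python
def chunk_records(records, batch_size=50):
    """Split source records into batches that fit the 50-record
    per-request limit of the inbound provisioning API."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# 120 source records split into 3 bulk requests (50 + 50 + 20).
# At up to 40 requests per second, this supports ~2000 records/second.
batches = list(chunk_records(list(range(120))))
```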
+
+## Next steps
+- [Configure API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+- [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](user-provisioning.md)
active-directory Inbound Provisioning Api Configure App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-configure-app.md
+
+ Title: Configure API-driven inbound provisioning app
+description: Learn how to configure API-driven inbound provisioning app.
+ Last updated: 07/07/2023
+# Configure API-driven inbound provisioning app (Public preview)
+
+## Introduction
+This tutorial describes how to configure [API-driven inbound user provisioning](inbound-provisioning-api-concepts.md).
+
+> [!IMPORTANT]
+> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+This feature is available only when you configure the following Enterprise Gallery apps:
+* API-driven inbound user provisioning to Azure AD
+* API-driven inbound user provisioning to on-premises AD
+
+## Prerequisites
+To complete the steps in this tutorial, you need access to Microsoft Entra admin portal with the following roles:
+
+* Global administrator OR
+* Application administrator (if you're configuring inbound user provisioning to Azure AD) OR
+* Application administrator + Hybrid identity administrator (if you're configuring inbound user provisioning to on-premises Active Directory)
+
+If you're configuring inbound user provisioning to on-premises Active Directory, you need access to a Windows Server where you can install the provisioning agent for connecting to your Active Directory domain controller.
+
+## Create your API-driven provisioning app
+
+1. Log in to the [Microsoft Entra portal](https://entra.microsoft.com).
+2. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
+3. Click on **New application** to create a new provisioning application.
+ [![Screenshot of Entra Admin Center.](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png)](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png#lightbox)
+4. Enter **API-driven** in the search field, then select the application for your setup:
+    * **API-driven Inbound User Provisioning to On-Premises AD**: Select this app if you're provisioning hybrid identities (identities that need both an on-premises AD and an Azure AD account) from your system of record. Once these accounts are provisioned in on-premises AD, they're automatically synchronized to your Azure AD tenant using Azure AD Connect or Cloud Sync.
+    * **API-driven Inbound User Provisioning to Azure AD**: Select this app if you're provisioning cloud-only identities (identities that don't require on-premises AD accounts and only need an Azure AD account) from your system of record.
+
+ [![Screenshot of API-driven provisioning apps.](media/inbound-provisioning-api-configure-app/api-driven-inbound-provisioning-apps.png)](media/inbound-provisioning-api-configure-app/api-driven-inbound-provisioning-apps.png#lightbox)
+
+5. In the **Name** field, rename the application to meet your naming requirements, then click **Create**.
+
+ [![Screenshot of create app.](media/inbound-provisioning-api-configure-app/provisioning-create-inbound-provisioning-app.png)](media/inbound-provisioning-api-configure-app/provisioning-create-inbound-provisioning-app.png#lightbox)
+
+ > [!NOTE]
+ > If you plan to ingest data from multiple sources, each with their own sync rules, you can create multiple apps and give each app a descriptive name; for example, Provision-Employees-From-CSV-to-AD or Provision-Contractors-From-SQL-to-AD.
+6. Once the application creation is successful, go to the Provisioning blade and click on **Get started**.
+ [![Screenshot of Get started button.](media/inbound-provisioning-api-configure-app/provisioning-overview-get-started.png)](media/inbound-provisioning-api-configure-app/provisioning-overview-get-started.png#lightbox)
+7. Switch the Provisioning Mode from Manual to **Automatic**.
+
+Depending on the app you selected, use one of the following sections to complete your setup:
+* [Configure API-driven inbound provisioning to on-premises AD](#configure-api-driven-inbound-provisioning-to-on-premises-ad)
+* [Configure API-driven inbound provisioning to Azure AD](#configure-api-driven-inbound-provisioning-to-azure-ad)
+
+## Configure API-driven inbound provisioning to on-premises AD
+
+1. After setting the Provisioning Mode to **Automatic**, click on **Save** to create the initial configuration of the provisioning job.
+1. Click on the information banner about the Azure AD Provisioning Agent.
+ [![Screenshot of provisioning agent banner.](media/inbound-provisioning-api-configure-app/provisioning-agent-banner.png)](media/inbound-provisioning-api-configure-app/provisioning-agent-banner.png#lightbox)
+1. Click **Accept terms & download** to download the Azure AD Provisioning Agent.
+1. Refer to the steps documented in [install and configure the provisioning agent](https://go.microsoft.com/fwlink/?linkid=2241216). This step registers your on-premises Active Directory domains with your Azure AD tenant.
+1. Once the agent registration is successful, select your domain from the **Active Directory domain** dropdown and specify the distinguished name of the OU where new user accounts are created by default.
+ [![Screenshot of Active Directory domain selected.](media/inbound-provisioning-api-configure-app/provisioning-select-active-directory-domain.png)](media/inbound-provisioning-api-configure-app/provisioning-select-active-directory-domain.png#lightbox)
+ > [!NOTE]
+ > If your AD domain is not visible in the **Active Directory Domain** dropdown list, reload the provisioning app in the browser. Click on **View on-premises agents for your domain** to ensure that your agent status is healthy.
+1. Click on **Test connection** to ensure that Azure AD can connect to the provisioning agent.
+1. Click on **Save** to save your changes.
+1. Once the save operation is successful, you'll see two more expansion panels: one for **Mappings** and one for **Settings**. Before proceeding to the next step, provide a valid notification email ID and save the configuration again.
+ [![Screenshot of the notification email box.](media/inbound-provisioning-api-configure-app/provisioning-notification-email.png)](media/inbound-provisioning-api-configure-app/provisioning-notification-email.png#lightbox)
+ > [!NOTE]
+ > Providing the **Notification Email** in **Settings** is mandatory. If the Notification Email is left empty, then the provisioning goes into quarantine when you start the execution.
+1. Click on the hyperlink in the **Mappings** expansion panel to view the default attribute mappings.
+ > [!NOTE]
+ > The default configuration in the **Attribute Mappings** page maps SCIM Core User and Enterprise User attributes to on-premises AD attributes. We recommend using the default mappings to get started and customizing these mappings later as you get more familiar with the overall data flow.
+1. Complete the configuration by following steps in the section [Start accepting provisioning requests](#start-accepting-provisioning-requests).
++
+## Configure API-driven inbound provisioning to Azure AD
+
+
+1. After setting the Provisioning Mode to **Automatic**, click on **Save** to create the initial configuration of the provisioning job.
+1. Once the save operation is successful, you'll see two more expansion panels: one for **Mappings** and one for **Settings**. Before proceeding to the next step, make sure you provide a valid notification email ID and save the configuration once more.
+
+ [![Screenshot of the notification email box.](media/inbound-provisioning-api-configure-app/provisioning-notification-email.png)](media/inbound-provisioning-api-configure-app/provisioning-notification-email.png#lightbox)
+
+ > [!NOTE]
+ > Providing the **Notification Email** in **Settings** is mandatory. If the Notification Email is left empty, then the provisioning goes into quarantine when you start the execution.
+1. Click on the hyperlink in the **Mappings** expansion panel to view the default attribute mappings.
+ > [!NOTE]
+    > The default configuration in the **Attribute Mappings** page maps SCIM Core User and Enterprise User attributes to Azure AD attributes. We recommend using the default mappings to get started and customizing these mappings later as you get more familiar with the overall data flow.
+1. Complete the configuration by following steps in the section [Start accepting provisioning requests](#start-accepting-provisioning-requests).
+
+## Start accepting provisioning requests
+
+1. Open the provisioning application's **Provisioning** -> **Overview** page.
+1. On this page, you can take the following actions:
+    - **Start provisioning** control button: Click this button to place the provisioning job in **listen mode** so that it processes inbound bulk upload request payloads.
+    - **Stop provisioning** control button: Use this option to pause/stop the provisioning job.
+    - **Restart provisioning** control button: Use this option to purge any existing request payloads pending processing and start a new provisioning cycle.
+    - **Edit provisioning** control button: Use this option to edit the job settings and attribute mappings, and to customize the provisioning schema.
+    - **Provision on demand** control button: This feature isn't yet enabled in the preview.
+    - **Provisioning API Endpoint** URL text: Copy the HTTPS URL value shown and save it for later use with the API client.
+1. Expand the **Statistics to date** > **View technical information** panel and copy the **Provisioning API Endpoint** URL. Share this URL with your API developer after [granting access permission](inbound-provisioning-api-grant-access.md) to invoke the API.
+
+## Next steps
+- [Grant access to the inbound provisioning API](inbound-provisioning-api-grant-access.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+- [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](user-provisioning.md)
+
active-directory Inbound Provisioning Api Curl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md
+
+ Title: Quickstart API-driven inbound provisioning with cURL
+description: Learn how to get started with API-driven inbound provisioning using cURL.
+ Last updated: 07/07/2023
+# Quickstart API-driven inbound provisioning with cURL (Public preview)
+
+## Introduction
+[cURL](https://curl.se/) is a popular, free, open-source, command-line tool used by API developers, and it is [available by default on Windows 10/11](https://curl.se/windows/microsoft.html). This tutorial describes how you can quickly test [API-driven inbound provisioning](inbound-provisioning-api-concepts.md) with cURL.
+
+## Prerequisites
+
+* You have configured an [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md).
+* You have [configured a service principal and it has access](inbound-provisioning-api-grant-access.md) to the inbound provisioning API.
+
+## Upload user data to the inbound provisioning API using cURL
+
+1. Retrieve the **client_id** and **client_secret** of the service principal that has access to the inbound provisioning API.
+1. Use the OAuth **client_credentials** grant flow to get an access token. Replace the variables `[yourClientId]`, `[yourClientSecret]` and `[yourTenantId]` with values applicable to your setup and run the following cURL command. Copy the access token value from the response.
+ ```
+ curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "client_id=[yourClientId]&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&client_secret=[yourClientSecret]&grant_type=client_credentials" "https://login.microsoftonline.com/[yourTenantId]/oauth2/v2.0/token"
+ ```
+1. Copy the bulk request payload from the example [Bulk upload using SCIM core user and enterprise user schema](/graph/api/synchronization-synchronizationjob-post-bulkupload#example-1-bulk-upload-using-scim-core-user-and-enterprise-user-schema) and save the contents in a file called scim-bulk-upload-users.json.
+1. Replace the variable `[InboundProvisioningAPIEndpoint]` with the provisioning API endpoint associated with your provisioning app. Use the `[AccessToken]` value from the previous step and run the following curl command to upload the bulk request to the provisioning API endpoint.
+ ```
+ curl -v "[InboundProvisioningAPIEndpoint]" -d @scim-bulk-upload-users.json -H "Authorization: Bearer [AccessToken]" -H "Content-Type: application/scim+json"
+ ```
+1. Upon successful upload, you'll receive an HTTP 202 Accepted response code.
+1. The provisioning service starts processing the bulk request payload immediately and you can see the provisioning details by accessing the provisioning logs of the inbound provisioning app.
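
The two cURL calls above can also be sketched in Python using only the standard library. The builder functions below are illustrative, not an official client; the token and endpoint values come from your own setup:

```python
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

def build_token_request(tenant_id, client_id, client_secret):
    """Build the client_credentials token request (not sent yet)."""
    body = urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    }).encode()
    return urllib.request.Request(
        TOKEN_URL.format(tenant=tenant_id),
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

def build_upload_request(endpoint, access_token, bulk_payload):
    """Build the bulk upload POST; note the application/scim+json content type."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(bulk_payload).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/scim+json",
        },
    )

# To actually send the requests (expects HTTP 202 Accepted on upload):
#   token = json.load(urllib.request.urlopen(build_token_request(...)))["access_token"]
#   urllib.request.urlopen(build_upload_request(endpoint, token, payload))
```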
+
+## Verify processing of the bulk request payload
+
+1. Log in to the [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* credentials.
+1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
+1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
+1. Open the Provisioning blade. The landing page displays the status of the last run.
+1. Click on **View provisioning logs** to open the provisioning logs blade. Alternatively, you can click on the menu option **Monitor -> Provisioning logs**.
+
+ [![Screenshot of provisioning logs in menu.](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png)](media/inbound-provisioning-api-curl-tutorial/access-provisioning-logs.png#lightbox)
+
+1. Click on any record in the provisioning logs to view additional processing details.
+1. The provisioning log details screen displays all the steps executed for a specific user.
+ [![Screenshot of provisioning logs details.](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png)](media/inbound-provisioning-api-curl-tutorial/provisioning-log-details.png#lightbox)
+ * Under the **Import from API** step, see details of user data extracted from the bulk request.
+ * The **Match user** step shows details of any user match based on the matching identifier. If a user match happens, then the provisioning service performs an update operation. If there is no user match, then the provisioning service performs a create operation.
+    * The **Determine if User is in scope** step shows details of the scoping filter evaluation. By default, all users are processed. If you have set a scoping filter (for example, process only users belonging to the Sales department), the evaluation details of the scoping filter are displayed in this step.
+ * The **Provision User** step calls out the final processing step and changes applied to the user account.
+ * Use the **Modified properties** tab to view attribute updates.
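
API clients can retrieve the same details programmatically from the provisioning logs API. A Python sketch of building such a query follows; filtering on `jobId` is shown as an assumption, so check the Graph provisioning logs documentation for the filter properties your tenant supports:

```python
import urllib.parse

GRAPH_LOGS = "https://graph.microsoft.com/v1.0/auditLogs/provisioning"

def provisioning_logs_url(job_id):
    """Build a provisioning-logs query URL scoped to one provisioning job.
    The jobId filter shown here is illustrative; consult the Graph docs
    for the filterable properties of provisioningObjectSummary."""
    query = urllib.parse.urlencode({"$filter": f"jobId eq '{job_id}'"})
    return f"{GRAPH_LOGS}?{query}"
```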
+
+## Next steps
+- [Troubleshoot issues with the inbound provisioning API](inbound-provisioning-api-issues.md)
+- [API-driven inbound provisioning concepts](inbound-provisioning-api-concepts.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
active-directory Inbound Provisioning Api Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-faqs.md
+
+ Title: Frequently asked questions (FAQs) about API-driven inbound provisioning
+description: Learn more about the capabilities and integration scenarios supported by API-driven inbound provisioning.
+ Last updated: 06/26/2023
+# Frequently asked questions about API-driven inbound provisioning (Public preview)
+
+This article answers frequently asked questions (FAQs) about API-driven inbound provisioning.
+
+## How is the new inbound provisioning /bulkUpload API different from MS Graph Users API?
+
+There are significant differences between the provisioning /bulkUpload API and the MS Graph Users API endpoint.
+
+- **Payload format**: The MS Graph Users API endpoint expects data in OData format. The request payload format for the new inbound provisioning /bulkUpload API uses SCIM schema constructs. When invoking this API, set the 'Content-Type' header to `application/scim+json`.
+- **Operation end-result**:
+ - When identity data is sent to the MS Graph Users API endpoint, it's immediately processed, and a Create/Update/Delete operation takes place on the Azure AD user profile.
+ - Request data sent to the provisioning /bulkUpload API is processed *asynchronously* by the Azure AD provisioning service. The provisioning job applies scoping rules, attribute mapping and transformation configured by the IT admin. This initiates a ```Create/Update/Delete``` operation on the Azure AD user profile or the on-premises AD user profile.
+- **IT admin retains control**: With API-driven inbound provisioning, the IT admin has more control on how the incoming identity data is processed and mapped to Azure AD attributes. They can define scoping rules to exclude certain types of identity data (for example, contractor data) and use transformation functions to derive new values before setting the attribute values on the user profile.
++
+## Is the inbound provisioning /bulkUpload API a standard SCIM endpoint?
+
+The MS Graph inbound provisioning /bulkUpload API uses SCIM schema in the request payload, but it's *not* a standardized SCIM API endpoint. Here's why.
+
+Typically, a SCIM service endpoint processes HTTP requests (POST, PUT, GET) carrying SCIM payloads and translates them to the respective operations (Create, Update, Lookup) on the identity store. The SCIM service endpoint places the onus of specifying the operation semantics, whether to Create/Update/Delete an identity, on the SCIM API client. This mechanism works well for scenarios where the API client knows what operation it would like to perform for users in the identity store.
+
+The MS Graph inbound provisioning /bulkUpload is designed to handle a different enterprise identity integration scenario shaped by three unique requirements:
+
+1. Ability to asynchronously process records in bulk (for example, processing 50K+ records)
+2. Ability to include any identity attribute in the payload (for example, costCenter, pay grade, badgeId)
+3. Support for API clients that are unaware of operation semantics. These clients are non-SCIM API clients that only have access to raw *source data* (for example, records in a CSV file, SQL table, or HR records). These clients don't have the processing capability to read each record and determine the operation semantics of ```Create/Update/Delete``` on the identity store.
+
+The primary goal of MS Graph inbound provisioning /bulkUpload API is to enable customers to send *any* identity data (for example, costCenter, pay grade, badgeId) from *any* identity data source (for example, CSV/SQL/HR) for eventual processing by Azure AD provisioning service. The Azure AD provisioning service consumes the bulk payload data received at this endpoint, applies attribute mapping rules configured by the IT admin and determines whether the data payload leads to (Create, Update, Enable, Disable) operation in the target identity store (Azure AD / on-premises AD).
+
+## Does the provisioning /bulkUpload API support on-premises Active Directory domains as a target?
+
+Yes, the provisioning API supports on-premises AD domains as a target.
+
+## How do we get the /bulkUpload API endpoint for our provisioning app?
+
+The /bulkUpload API is available only for apps of the type "API-driven inbound provisioning to Azure AD" and "API-driven inbound provisioning to on-premises Active Directory". You can retrieve the unique API endpoint for each provisioning app from the Provisioning blade home page: expand **Statistics to date** > **View technical information** and copy the **Provisioning API Endpoint** URL. It has the following format:
+
+```http
+https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/bulkUpload
+```
+
+## How do we perform a full sync using the provisioning /bulkUpload API?
+
+To perform a full sync, use your API client to send the data of all users from your trusted source to the API endpoint as a bulk request. Once you send all the data to the API endpoint, the next sync cycle processes all user records, and you can track the progress using the provisioning logs API endpoint.
+
+## How do we perform delta sync using the provisioning /bulkUpload API?
+
+To perform a delta sync, use your API client to send only information about users whose data has changed in the trusted source. Once you send the data to the API endpoint, the next sync cycle processes the user records, and you can track the progress using the provisioning logs API endpoint.
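
The API client is responsible for detecting what changed in the source. One common approach is fingerprinting each source record and comparing against the fingerprints saved during the previous sync; the following Python sketch is illustrative (function names and the `externalId` key are assumptions matching the default matching identifier):

```python
import hashlib
import json

def record_fingerprint(record):
    """Stable hash of one source record's attribute values."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def changed_records(current, previous_fingerprints):
    """Return only records whose content changed since the last sync,
    keyed by the matching identifier (externalId here, as an example)."""
    delta = []
    for record in current:
        fp = record_fingerprint(record)
        if previous_fingerprints.get(record["externalId"]) != fp:
            delta.append(record)
    return delta
```

After each successful upload, persist the new fingerprints so the next run only sends the delta.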
+
+## How does restart provisioning work?
+
+Use the **Restart provisioning** option only if required. Here's how it works:
+
+**Scenario 1:** When you click the **Restart provisioning** button and the job is currently running, the job continues processing the existing data that is already staged. The **Restart provisioning** operation doesn't interrupt an existing cycle. In the subsequent cycle, the provisioning service clears any escrows and picks up the new bulk request for processing.
+
+**Scenario 2:** When you click the **Restart provisioning** button and the job is *not* running, then before running the subsequent cycle, the provisioning engine purges the data uploaded prior to the restart, clears any escrows, and only processes new incoming data.
+
+## How do we create users using the provisioning /bulkUpload API endpoint?
+
+Here's how the provisioning job associated with the /bulkUpload API endpoint processes incoming user payloads:
+
+1. The job retrieves the attribute mapping for the provisioning job and makes note of the matching attribute pair (by default, the ```externalId``` API attribute is matched with ```employeeId``` in Azure AD). You can change this default matching pair.
+1. The job extracts each operation present in the bulk request payload.
+1. The job checks the value of the matching identifier in the request (by default, the attribute ```externalId```) and uses it to check if there's a user in Azure AD with a matching ```employeeId``` value.
+1. If the job doesn't find a matching user, then the job applies the sync rules and creates a new user in the target directory.
+
+To make sure that the right users get created in Azure AD, define a matching attribute pair in your mapping that uniquely identifies users in both your source and Azure AD.
+
+## How do we generate unique values for UPN?
+
+The provisioning service doesn't provide the ability to check for duplicate ```userPrincipalName``` (UPN) values and handle conflicts. If the default sync rule for the UPN attribute generates an existing UPN value, then the user create operation fails.
+
+Here are some options that you can consider for generating unique UPNs:
+
+1. Add the logic for unique UPN generation in your API client.
+2. Update the sync rule for the UPN attribute to use the [RandomString](functions-for-customizing-application-data.md) function and set the apply mapping parameter to ```On object creation only```. Example:
+
+```Join("", Replace([userName], , "(?<Suffix>@(.)*)", "Suffix", "", , ), RandomString(3, 3, 0, 0, 0, ), "@", DefaultDomain())```
+
+3. If you are provisioning to on-premises Active Directory, you can use the [SelectUniqueValue](functions-for-customizing-application-data.md) function and set the apply mapping parameter to ```On object creation only```.
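
For option 1, the API client can check candidate UPNs against the values already in use and append a short random suffix on collision. A Python sketch of this idea (the naming convention and suffix length are illustrative, not a prescribed format):

```python
import secrets
import string

def unique_upn(first, last, domain, existing_upns):
    """Generate a UPN; if the natural form collides with an existing UPN,
    append a short random digit suffix (illustrative logic for option 1)."""
    base = f"{first}.{last}".lower()
    upn = f"{base}@{domain}"
    while upn in existing_upns:
        suffix = "".join(secrets.choice(string.digits) for _ in range(3))
        upn = f"{base}{suffix}@{domain}"
    return upn
```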
+
+## How do we send more HR attributes to the provisioning /bulkUpload API endpoint?
+
+By default, the API endpoint supports processing any attribute that is part of the SCIM Core User and Enterprise User schema. If you'd like to send more attributes, you can extend the provisioning app schema, map the new attributes to Azure AD attributes and update the bulk request to send those attributes.
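
For example, a bulk request operation's `data` object could carry extra HR attributes under a custom extension namespace, declared alongside the standard SCIM schemas. The URN and attribute names below are hypothetical placeholders you would define when extending the app schema, not a Microsoft-defined schema:

```python
# Hypothetical custom extension namespace; the URN and attribute names
# are placeholders defined when you extend the provisioning app schema.
EXT = "urn:ietf:params:scim:schemas:extension:contoso:1.0:User"

user_data = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
        EXT,  # declare the extension alongside the standard schemas
    ],
    "externalId": "701984",
    "userName": "bjensen@example.com",
    EXT: {
        "costCenter": "CC-1042",
        "payGrade": "L5",
        "badgeId": "B-778231",
    },
}
```

After extending the schema, map each new attribute to an Azure AD attribute in the app's attribute mappings so the provisioning service knows how to flow it.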
+
+## How do we exclude certain users from the provisioning flow?
+
+You may have a scenario where you want to send all users to the API endpoint, but only include certain users in the provisioning flow and exclude the rest.
+
+You can achieve this using the **Scoping filter**. In the provisioning app configuration, you can define a source object scope that processes only certain users, either with an "inclusion rule" (for example, only process users where department EQUALS **Sales**) or an "exclusion rule" (for example, exclude users in Sales by only processing users where department NOT EQUALS **Sales**).
+
+See [Scoping users or groups to be provisioned with scoping filters](define-conditional-rules-for-provisioning-user-accounts.md).
+
+## How do we update users using the provisioning /bulkUpload API endpoint?
+
+Here's how the provisioning job associated with the /bulkUpload API endpoint processes incoming user payloads:
+
+1. The provisioning job retrieves the attribute mapping for the provisioning job and makes note of the matching attribute pair (by default ```externalId``` API attribute is used to match with ```employeeId``` in Azure AD). You can change this default attribute matching pair.
+1. The job extracts the operations from the bulk request payload.
+1. The job checks the value of the matching identifier in the SCIM request (by default, the API attribute ```externalId```) and uses it to check whether there's a user in Azure AD with a matching ```employeeId``` value.
+1. If the job finds a matching user, then it applies the sync rules and compares the source and target profiles.
+1. If the job determines that some values have changed, then it updates the corresponding user record in the directory.
+
+To make sure that the right users get updated in Azure AD, define a matching attribute pair in your mapping that uniquely identifies users in your source and in Azure AD.
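
Conceptually, step 5 reduces to a profile comparison. This Python sketch (attribute names illustrative) shows the idea: only attributes whose mapped values differ trigger an update.

```python
def changed_attributes(source: dict, target: dict) -> dict:
    """Return only the attributes whose mapped values differ between the
    incoming (source) profile and the existing Azure AD (target) profile.
    An empty dict means the job has nothing to update."""
    return {k: v for k, v in source.items() if target.get(k) != v}

delta = changed_attributes(
    {"displayName": "Jane Doe", "department": "Finance"},
    {"displayName": "Jane Doe", "department": "Sales"},
)
```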
+
+## Can we create more than one app that supports API-driven inbound provisioning?
+
+Yes, you can. Here are some scenarios that may require more than one provisioning app:
+
+**Scenario 1:** Let's say you have three trusted data sources: one for employees, one for contractors and one for vendors. You can create three separate provisioning apps – one for each identity type with its own distinct attribute mapping. Each provisioning app has a unique API endpoint, and you can send the respective payloads to each endpoint.
+
+You can retrieve the unique API endpoint for each job from the Provisioning blade home page. Navigate to **Statistics to date** > **View technical information**, then copy the **Provisioning API Endpoint** URL.
+
+**Scenario 2:** Let's say you have multiple sources of truth, each with its own attribute set. For example, HR provides job info attributes (for example, jobTitle, employeeType), and the badging system provides badge information attributes (for example, ```badgeId```, represented using an extension attribute). In this scenario, you can configure two provisioning apps:
+
+- **Provisioning App #1** that receives attributes from your HR source and creates the user.
+
+- **Provisioning App #2** that receives attributes from the badging system and only updates the user's attributes. The attribute mapping in this app is restricted to the badge information attributes, and under **Target Object Actions** only update is enabled.
+
+- Both apps use the same matching identifier pair (```externalId``` <-> ```employeeId```).
+
+## How do we process terminations using the /bulkUpload API endpoint?
+
+To process terminations, identify an attribute in your source that will be used to set the ```accountEnabled``` flag in Azure AD. If you are provisioning to on-premises Active Directory, then map that source attribute to the `accountDisabled` attribute.
+
+By default, the value associated with the SCIM User Core schema attribute ```active``` determines the status of the user's account in the target directory.
+
+If the attribute is set to **true**, the default mapping rule enables the account. If the attribute is set to **false**, then the default mapping rule disables the account.
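
The default status handling can be sketched as follows. This is an illustration of the behavior described above, not the actual sync rule: `active` drives `accountEnabled` in Azure AD, and is logically inverted for the on-premises `accountDisabled` attribute.

```python
def account_status_mapping(active: bool, target_is_on_premises_ad: bool) -> dict:
    """Sketch of the default rule: SCIM 'active' maps to accountEnabled
    in Azure AD; for on-premises AD the value is inverted, because
    accountDisabled is the logical opposite of active."""
    if target_is_on_premises_ad:
        return {"accountDisabled": not active}
    return {"accountEnabled": active}
```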
+
+## Can we soft-delete a user in Azure AD using /bulkUpload provisioning API?
+
+No. Currently the provisioning service only supports enabling or disabling an account in Azure AD/on-premises AD.
+
+## How can we prevent accidental disabling/deletion of users?
+
+You can enable accidental deletion prevention. See [Enable accidental deletions prevention in the Azure AD provisioning service](accidental-deletions.md)
+
+## Do we need to send all users from the HR system in every request?
+
+No, you don't need to send all users from the HR system / "source of truth" in every request. Just send the users that you'd like to create or update.
+
+## Does the API support all HTTP actions (GET/POST/PUT/PATCH)?
+
+No, the /bulkUpload provisioning API endpoint only supports the POST HTTP action.
+
+## If I want to update a user, do I need to send a PUT/PATCH request?
+
+No, the API endpoint doesn't support PUT/PATCH requests. To update a user, send the data associated with the user in the POST bulk request payload.
+
+The provisioning job that processes data received by the API endpoint automatically detects whether the user in the POST request payload needs to be created, updated, enabled, or disabled based on the configured sync rules. As an API client, you don't need to take any more steps if you want a user profile to be updated.
+
+## How is writeback supported?
+
+The current API only supports inbound data. Here are some options to consider for implementing writeback of attributes like email, username, or phone that are generated by Azure AD and need to flow back to the HR system:
+
+- **Option 1 – SCIM connectivity to HR endpoint/proxy service that in turn updates the HR source**
+
+ - If the system of record provides a SCIM endpoint for user updates (for example, Oracle HCM provides an [API endpoint for SCIM updates](https://docs.oracle.com/en/cloud/saas/applications-common/23b/farc#integrate-your-scim-endpoint-with-the-azure-ad-provisioning-service)), you can integrate that SCIM endpoint with the Azure AD provisioning service.
+ - If the system of record doesn't provide a SCIM endpoint, explore the possibility of setting up a proxy SCIM service that receives the update and propagates the change to the HR system.
+
+- **Option 2 – Use Azure AD ECMA connector for the writeback scenario**
+
+ - Depending on the customer requirement, explore if one of the ECMA connectors could be used (PowerShell / SQL / Web Services).
+
+- **Option 3 – Use Lifecycle Workflows custom extension task in a Joiner workflow**
+ - In Lifecycle Workflows, define a Joiner workflow and define a [custom extension task that invokes a Logic Apps process](https://go.microsoft.com/fwlink/?linkid=2239990), which updates the HR system or generates a CSV file consumed by the HR system.
+
+## Next steps
+
+- [Configure API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md)
+- To learn more about API-driven inbound provisioning, see [inbound user provisioning API concepts](inbound-provisioning-api-concepts.md).
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
+
+ Title: Grant access to inbound provisioning API
+description: Learn how to grant access to the inbound provisioning API.
+++++++ Last updated : 07/07/2023++++
+# Grant access to the inbound provisioning API (Public preview)
+
+## Introduction
+
+After you've configured the [API-driven inbound provisioning app](inbound-provisioning-api-configure-app.md), you need to grant access permissions so that API clients can send requests to the provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API and query the [provisioning logs API](/graph/api/resources/provisioningobjectsummary). This tutorial walks you through the steps to configure these permissions.
+
+Depending on how your API client authenticates with Azure AD, you can select between two configuration options:
+
+* [Configure a service principal](#configure-a-service-principal): Follow these instructions if your API client plans to use a service principal of an [Azure AD registered app](../develop/howto-create-service-principal-portal.md) and authenticate using OAuth client credentials grant flow.
+* [Configure a managed identity](#configure-a-managed-identity): Follow these instructions if your API client plans to use an Azure AD [managed identity](../managed-identities-azure-resources/overview.md).
+
+## Configure a service principal
+This configuration registers an app in Azure AD that represents the external API client and grants it permission to invoke the inbound provisioning API. The service principal's client ID and client secret can then be used in the OAuth client credentials grant flow.
+
+1. Sign in to the Microsoft Entra portal (https://entra.microsoft.com) with global administrator or application administrator credentials.
+1. Browse to **Azure Active Directory** -> **Applications** -> **App registrations**.
+1. Click on the option **New registration**.
+1. Provide an app name, select the default options, and click on **Register**.
+ [![Screenshot of app registration.](media/inbound-provisioning-api-grant-access/register-app.png)](media/inbound-provisioning-api-grant-access/register-app.png#lightbox)
+1. Copy the **Application (client) ID** and **Directory (tenant) ID** values from the Overview blade and save them for later use in your API client.
+ [![Screenshot of app client ID.](media/inbound-provisioning-api-grant-access/app-client-id.png)](media/inbound-provisioning-api-grant-access/app-client-id.png#lightbox)
+1. In the context menu of the app, select **Certificates & secrets** option.
+1. Create a new client secret. Provide a description for the secret and expiry date.
+1. Copy the generated value of the client secret and save it for later use in your API client.
+1. From the context menu **API permissions**, select the option **Add a permission**.
+1. Under **Request API permissions**, select **Microsoft Graph**.
+1. Select **Application permissions**.
+1. Search for and select the permissions **AuditLog.Read.All** and **SynchronizationData-User.Upload**.
+1. Click on **Grant admin consent** on the next screen to complete the permission assignment. Click Yes on the confirmation dialog. Your app should have the following permission sets.
+ [![Screenshot of app permissions.](media/inbound-provisioning-api-grant-access/api-client-permissions.png)](media/inbound-provisioning-api-grant-access/api-client-permissions.png#lightbox)
+1. You're now ready to use the service principal with your API client.
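
To sketch how an API client might use this service principal, here's the shape of the OAuth 2.0 client credentials token request against the Microsoft identity platform v2.0 endpoint. The placeholder values are illustrative; in practice you'd POST the form body with an HTTP library, or use MSAL instead of building it by hand.

```python
from urllib.parse import urlencode

def client_credentials_token_request(tenant_id: str, client_id: str,
                                     client_secret: str) -> tuple:
    """Build the token endpoint URL and form body for the OAuth2
    client credentials grant. POSTing this body to the URL returns an
    access token for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    })
    return url, body

# Placeholder IDs for illustration only.
url, body = client_credentials_token_request("<tenant-id>", "<client-id>", "<secret>")
```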
+
+## Configure a managed identity
+
+This section describes how you can assign the necessary permissions to a managed identity.
+
+1. Configure a [managed identity](../managed-identities-azure-resources/overview.md) for use with your Azure resource.
+1. Copy the name of your managed identity from the Azure portal. For example, the following screenshot shows the name of a system-assigned managed identity associated with an Azure Logic Apps workflow called "CSV2SCIMBulkUpload".
+
+ [![Screenshot of managed identity name.](media/inbound-provisioning-api-grant-access/managed-identity-name.png)](media/inbound-provisioning-api-grant-access/managed-identity-name.png#lightbox)
+
+1. Run the following PowerShell script to assign permissions to your managed identity.
+ ```powershell
+ Install-Module Microsoft.Graph -Scope CurrentUser
+
+ Connect-MgGraph -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All","RoleManagement.ReadWrite.Directory"
+ Select-MgProfile Beta
+ $graphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
+
+ $PermissionName = "SynchronizationData-User.Upload"
+ $AppRole = $graphApp.AppRoles | `
+ Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}
+ $managedID = Get-MgServicePrincipal -Filter "DisplayName eq 'CSV2SCIMBulkUpload'"
+ New-MgServicePrincipalAppRoleAssignment -PrincipalId $managedID.Id -ServicePrincipalId $managedID.Id -ResourceId $graphApp.Id -AppRoleId $AppRole.Id
+
+ $PermissionName = "AuditLog.Read.All"
+ $AppRole = $graphApp.AppRoles | `
+ Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}
+ $managedID = Get-MgServicePrincipal -Filter "DisplayName eq 'CSV2SCIMBulkUpload'"
+ New-MgServicePrincipalAppRoleAssignment -PrincipalId $managedID.Id -ServicePrincipalId $managedID.Id -ResourceId $graphApp.Id -AppRoleId $AppRole.Id
+ ```
+1. To confirm that the permission was applied, find the managed identity service principal under **Enterprise Applications** in Azure AD. Remove the **Application type** filter to see all service principals.
+ [![Screenshot of managed identity principal.](media/inbound-provisioning-api-grant-access/managed-identity-principal.png)](media/inbound-provisioning-api-grant-access/managed-identity-principal.png#lightbox)
+1. Click on the **Permissions** blade under **Security**. Ensure the permission is set.
+ [![Screenshot of managed identity permissions.](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png)](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png#lightbox)
+1. You're now ready to use the managed identity with your API client.
++
+## Next steps
+- [Invoke inbound provisioning API using cURL](inbound-provisioning-api-curl-tutorial.md)
+- [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md)
+
active-directory Inbound Provisioning Api Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-issues.md
+
+ Title: Troubleshoot inbound provisioning API
+description: Learn how to troubleshoot issues with the inbound provisioning API.
++++++ Last updated : 06/27/2023++++
+# Troubleshoot inbound provisioning API issues (Public preview)
+
+## Introduction
+
+This document covers commonly encountered errors and issues with inbound provisioning API and how to troubleshoot them.
+
+## Troubleshooting scenarios
+
+### Invalid data format
+
+**Issue description**
+* You're getting the error message "Invalid Data Format" with an HTTP 400 (Bad Request) response code.
+
+**Probable causes**
+1. You're sending a valid bulk request as per the provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API specs, but you haven't set the HTTP request header `Content-Type` to `application/scim+json`.
+2. You're sending a bulk request that doesn't comply with the provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API specs.
+
+**Resolution:**
+1. Ensure the HTTP Request has the `Content-Type` header set to the value ```application/scim+json```.
+1. Ensure that the bulk request payload complies with the provisioning [/bulkUpload](/graph/api/synchronization-synchronizationjob-post-bulkupload) API specs.
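
For illustration, the request headers an API client should send might be assembled like this Python sketch (the helper name is hypothetical; the `Content-Type` value is the one called out above):

```python
def bulk_upload_headers(access_token: str) -> dict:
    """Headers for a /bulkUpload call. Omitting the SCIM content type
    (or sending plain application/json) triggers the 'Invalid Data
    Format' HTTP 400 response."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/scim+json",
    }

headers = bulk_upload_headers("<access-token>")
```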
+
+### There's nothing in the provisioning logs
+
+**Issue description**
+* You sent a request to the provisioning /bulkUpload API endpoint and got an HTTP 202 response code, but there's no data in the provisioning logs corresponding to your request.
+
+**Probable causes**
+1. Your API-driven provisioning app is paused.
+1. The provisioning service hasn't yet written the bulk request processing details to the provisioning logs.
+
+**Resolution:**
+1. Verify that your provisioning app is running. If it isn't running, select the menu option **Start provisioning** to process the data.
+1. Expect a 5 to 10-minute delay between processing the request and writing to the provisioning logs. If your API client sends data to the provisioning /bulkUpload API endpoint, introduce a time delay between the request invocation and the provisioning logs query.
+
+### Forbidden 403 response code
+
+**Issue description**
+* You sent a request to the provisioning /bulkUpload API endpoint and got an HTTP 403 (Forbidden) response code.
+
+**Probable causes**
+* The Graph permission `SynchronizationData-User.Upload` is not assigned to your API client.
+
+**Resolution:**
+* Assign your API client the Graph permission `SynchronizationData-User.Upload` and retry the operation.
+
+### Unauthorized 401 response code
+
+**Issue description**
+* You sent a request to the provisioning /bulkUpload API endpoint and got an HTTP 401 (Unauthorized) response code. The response displays the error code "InvalidAuthenticationToken" with the message "Access token has expired or is not yet valid."
+
+**Probable causes**
+* Your access token has expired.
+
+**Resolution:**
+* Generate a new access token for your API client.
+
+### The job enters quarantine state
+
+**Issue description**
+* You just started the provisioning app and it's in a quarantine state.
+
+**Probable causes**
+* You haven't set the notification email prior to starting the job.
+
+**Resolution:**
+Go to the **Edit Provisioning** menu item. Under **Settings**, there's a checkbox next to **Send an email notification when a failure occurs** and a field to enter your **Notification Email**. Make sure to check the box, provide an email, and save the change. Then click **Restart provisioning** to take the job out of quarantine.
+
+### User creation - Invalid UPN
+
+**Issue description**
+There's a user provisioning failure. The provisioning logs display the error code: ```AzureActiveDirectoryInvalidUserPrincipalName```.
+
+**Resolution:**
+1. Go to the **Edit Attribute Mappings** page.
+2. Select the ```UserPrincipalName``` mapping and update it to use the ```RandomString``` function.
+3. Copy and paste this expression into the expression box:
+```Join("", Replace([userName], , "(?<Suffix>@(.)*)", "Suffix", "", , ), RandomString(3, 3, 0, 0, 0, ), "@", DefaultDomain())```
+
+This expression fixes the issue by appending a random number to the UPN value so that Azure AD accepts it.
+
+### User creation failed - Invalid domain
+
+**Issue description**
+There's a user provisioning failure. The provisioning logs display an error message that states ```domain does not exist```.
+
+**Resolution:**
+1. Go to the **Edit Attribute Mappings** page.
+2. Select the ```UserPrincipalName``` mapping and copy and paste this expression into the expression input box:
+```Join("", Replace([userName], , "(?<Suffix>@(.)*)", "Suffix", "", , ), RandomString(3, 3, 0, 0, 0, ), "@", DefaultDomain())```
+
+This expression fixes the issue by appending the tenant's default domain to the UPN value so that Azure AD accepts it.
+
+## Next steps
+
+* [Learn more about API-driven inbound provisioning](inbound-provisioning-api-concepts.md)
+
active-directory Insufficient Access Rights Error Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/insufficient-access-rights-error-troubleshooting.md
+
+ Title: Troubleshoot insufficient access rights error
+description: Learn how to troubleshoot InsufficientAccessRights error when provisioning to on-premises Active Directory.
++++++ Last updated : 06/27/2023++++
+# Troubleshoot insufficient access rights error
+
+## Issue
+
+Inbound user provisioning to Active Directory is working as expected for most users. But for some users, the provisioning logs display the following error:
+
+```
+ERROR: InsufficientAccessRights-SecErr: The user has insufficient access rights.. Access is denied. \nError Details: Problem 4003 - INSUFF_ACCESS_RIGHTS.
+OR
+
+ERROR:
+"Message":"The user has insufficient access rights.",
+"ResponseResultCode":"InsufficientAccessRights",
+"ResponseErrorMessage":"00002098: SecErr: DSID-03150F94, problem 4003 (INSUFF_ACCESS_RIGHTS), data 0",
+The user has insufficient access rights.
+
+```
+The provisioning logs display the error code: `HybridSynchronizationActiveDirectoryInsufficientAccessRights`.
+
+## Cause
+By default, the provisioning agent GMSA account ```provAgentgMSA$``` has read/write permission to all user objects in the domain. There are two possible causes that might lead to this error.
+
+- Cause-1: The user object is part of an OU that doesn't inherit domain-level permissions.
+- Cause-2: The user object belongs to a [protected Active Directory group](https://go.microsoft.com/fwlink/?linkid=2240442). By design, such user objects are governed by permissions associated with a special container called [```AdminSDHolder```](https://go.microsoft.com/fwlink/?linkid=2240377), which is why the ```provAgentgMSA$``` account is unable to update accounts belonging to protected Active Directory groups. You might try to override this and explicitly grant the ```provAgentgMSA$``` account write access to those user accounts, but that won't work. To secure privileged user accounts from misuse of delegated permissions, a background process called [SDProp](https://go.microsoft.com/fwlink/?linkid=2240378) runs every 60 minutes and ensures that users belonging to a protected group are always managed by the permissions defined on the ```AdminSDHolder``` container. Even adding the ```provAgentgMSA$``` account to the Domain Admins group won't work.
++
+## Resolution
+
+First confirm what is causing the problem.
+To check if Cause-1 is the source of the problem:
+1. Open the **Active Directory Users and Computers Management Console**.
+2. Select the OU associated with the user.
+3. Right click and navigate to **Properties -> Security -> Advanced**.
+ If the **Enable Inheritance** button is shown, then it's confirmed that Cause-1 is the source of the problem.
+4. Click on **Enable Inheritance** so that domain level permissions are applicable to this OU.
+ >[!NOTE]
+ >Please remember to verify the whole hierarchy from domain level down to the OU holding the affected accounts. All Parent OUs/Containers must have inheritance enabled so the permissions applied at the domain level may cascade down to the final object.
+
+If Cause-1 isn't the source of the problem, then Cause-2 likely is. There are several possible resolution options.
+
+**Option 1: Remove affected user from protected AD group**
+To find the list of users that are governed by this ```AdminSDHolder``` permission, run the following command:
+
+```Get-AdObject -filter {AdminCount -gt 0}```
+
+Reference articles:
+* Here's an [example PowerShell script](https://notesbytom.wordpress.com/2017/12/01/clear-admincount-and-enable-inheritance-on-user/) that can be used to clear the AdminCount flag and re-enable inheritance on impacted users.
+* Use the steps documented in this [article - Find Orphaned Accounts](https://social.technet.microsoft.com/wiki/contents/articles/33307.active-directory-find-orphaned-objects.aspx) to find orphaned accounts (accounts that aren't part of a protected group but still have the AdminCount flag set to 1).
+
+*Option 1 might not always work*
+
+A process called Security Descriptor Propagation (SDPROP) runs every hour on the domain controller holding the PDC emulator FSMO role. It's this process that sets the ```AdminCount``` attribute to 1. The main function of SDPROP is to protect highly privileged Active Directory accounts, ensuring that they can't be deleted or have rights modified, accidentally or intentionally, by users or processes with less privilege.
+
+Reference articles that explain the reason in detail:
+
+- [Five common questions about AdminHolder and SDProp](https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/five-common-questions-about-adminsdholder-and-sdprop/ba-p/396293)
+- [Understanding AdminSD Holder object](https://petri.com/active-directory-security-understanding-adminsdholder-object/)
++
+**Option 2: Modify the default permissions of the AdminSDHolder container**
+
+If option 1 isn't feasible or doesn't work as expected, check with your AD and security administrators whether you're allowed to modify the default permissions of the ```AdminSDHolder``` container. This [article](https://go.microsoft.com/fwlink/?linkid=2240198) explains the importance of the ```AdminSDHolder``` container. Once you get internal approval to update the ```AdminSDHolder``` container permissions, there are two ways to update the permissions.
+
+* Using ```ADSIEdit``` as described in this [article](https://petri.com/active-directory-security-understanding-adminsdholder-object).
+* Using a ```DSACLS``` command-line script. Here's an example script that you can use as a starting point and tweak to fit your requirements.
+
+```powershell
+
+$dcFQDN = "<FQDN Of The Nearest RWDC Of Domain>"
+$domainDN = "<Domain Distinguished Name>"
+$domainNBT = "<Domain NetBIOS Name>"
+$dsaclsCMD = "DSACLS '\\$dcFQDN\CN=AdminSDHolder,CN=System,$domainDN' /G '$domainNBT\provAgentgMSA$:RPWP;<Attribute To Write To>'"
+Invoke-Expression $dsaclsCMD | Out-Null
+```
+
+If you need more help troubleshooting on-premises AD permissions, engage the Windows Server support team.
+This article on [AdminSDHolder issues with Azure AD Connect](https://c7solutions.com/2017/03/administrators-aadconnect-and-adminsdholder-issues) has more examples on DSACLS usage.
+
+**Option 3: Assign full control to provAgentgMSA account**
+
+Assign **Full Control** permissions to the ```provAgentGMSA``` account. We recommend this step if there are failures when moving a user object from one OU container to another and these users don't belong to a protected user group.
+
+In this scenario, complete the following steps and retest the move operation.
+1. Log in to the AD domain controller as an admin.
+2. Open a PowerShell command line as administrator.
+3. At the PowerShell prompt, run the following [DSACLS](https://go.microsoft.com/fwlink/?linkid=2240600) command that grants **Generic All/Full Control** to the provisioning agent GMSA account.
+```dsacls "dc=contoso,dc=com" /I:T /G "CN=provAgentgMSA,CN=Managed Service Accounts,DC=contoso,DC=com:GA"```
+
+Replace the ```dc=contoso,dc=com``` with your root node or appropriate OU container. If you're using a custom GMSA, update the DN value for ```provAgentgMSA```.
+
+**Option 4: Skip GMSA account and use manually created service account**
+This option should only be used as a temporary workaround to unblock until the GMSA permission issue is investigated and resolved. Our recommendation is to use the GMSA account.
+You can set the registry option to [skip GMSA configuration](https://go.microsoft.com/fwlink/?linkid=2239993) and reconfigure the Azure AD Connect provisioning agent to use a manually created service account with the right permissions.
+
+## Next steps
+
+* [Learn more about the Inbound Provisioning API](inbound-provisioning-api-concepts.md)
+
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Users may receive a notification through the mobile app for them to approve or d
To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md). > [!NOTE]
-> Users don't have the option to register their mobile app when they enable SSPR. Instead, users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup) or as part of the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
+> - Users don't have the option to register their mobile app when they enable SSPR. Instead, users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup) or as part of the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
+> - The Authenticator app may not be supported on beta versions of iOS and Android.
## Passwordless sign-in
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
An administrator can manually provide this contact information, or users can go
1. To apply the registration settings, select **Save**.
+> [!NOTE]
+> The interruption to register contact information during sign-in only occurs if the conditions configured in the settings are met, and only applies to user and admin accounts that are enabled for password reset through Azure Active Directory self-service password reset.
+ ## Set up notifications and customizations To keep users informed about account activity, you can set up Azure AD to send email notifications when an SSPR event happens. These notifications can cover both regular user accounts and admin accounts. For admin accounts, this notification provides another layer of awareness when a privileged administrator account password is reset using SSPR. Azure AD will notify all global admins when someone uses SSPR on an admin account.
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
+ Last updated 06/27/2023
-# Conditional Access: Cloud apps, actions, and authentication context
+# Conditional Access: Target resources
-Cloud apps, actions, and authentication context are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context.
+Target resources (formerly Cloud apps, actions, and authentication context) are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context.
- Administrators can choose from the list of applications that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md). - Administrators may choose to define policy not based on a cloud application but on a [user action](#user-actions) like **Register security information** or **Register or join devices**, allowing Conditional Access to enforce controls around those actions.
+- Administrators can target [traffic forwarding profiles](#traffic-forwarding-profiles) from Global Secure Access for enhanced functionality.
- Administrators can use [authentication context](#authentication-context) to provide an extra layer of security in applications. ![Define a Conditional Access policy and specify cloud apps](./media/concept-conditional-access-cloud-apps/conditional-access-cloud-apps-or-actions.png)
User actions are tasks that can be performed by a user. Currently, Conditional A
- `Client apps`, `Filters for devices` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies. - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multifactor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
+## Traffic forwarding profiles
+
+Traffic forwarding profiles in Global Secure Access enable administrators to define and control how traffic is routed through Microsoft Entra Internet Access and Microsoft Entra Private Access. Traffic forwarding profiles can be assigned to devices and remote networks. For an example of how to configure these traffic profiles in Conditional Access policy, see the article [How to require a compliant network check](../../global-secure-access/how-to-compliant-network.md).
+
+For more information about these profiles, see the article [Global Secure Access traffic forwarding profiles](../../global-secure-access/concept-traffic-forwarding.md).
+ ## Authentication context Authentication context can be used to further secure data and actions in applications. These applications can be your own custom applications, custom line of business (LOB) applications, applications like SharePoint, or applications protected by Microsoft Defender for Cloud Apps.
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 06/14/2023 Last updated : 07/07/2023
We don't support selecting macOS or Linux device platforms when selecting **Requ
## Locations
-When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, or even unknown areas that don't map to specific countries or regions. Only IP ranges can be marked as a trusted location.
+When configuring location as a condition, organizations can choose to include or exclude locations. These named locations may include the public IPv4 or IPv6 network information, country or region, unknown areas that don't map to specific countries or regions, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md).
When including **any location**, this option includes any IP address on the internet, not just configured named locations. When selecting **any location**, administrators can choose to exclude **all trusted** or **selected locations**.
-For example, some organizations may choose to not require multifactor authentication when their users are connected to the network in a trusted location such as their physical headquarters. Administrators could create a policy that includes any location but excludes the selected locations for their headquarters networks.
-
-More information about locations can be found in the article, [What is the location condition in Azure Active Directory Conditional Access](location-condition.md).
+Administrators can create policies that target specific locations along with other conditions. More information about locations can be found in the article, [What is the location condition in Azure Active Directory Conditional Access](location-condition.md).
## Client apps
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
Previously updated : 08/05/2022 Last updated : 07/07/2023
The information used to calculate the device platform comes from unverified sour
#### Locations
-Location data is provided by IP geolocation data. Administrators can choose to define locations and choose to mark some as trusted like those for their organization's network locations.
+Locations connect IP addresses, geographies, and [Global Secure Access' compliant network](../../global-secure-access/how-to-compliant-network.md) to Conditional Access policy decisions. Administrators can choose to define locations and mark some as trusted like those for their organization's primary network locations.
#### Client apps
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Multiple Conditional Access policies may prompt users for their GPS location bef
Some IP addresses don't map to a specific country or region. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations.
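The **Include unknown countries/regions** option maps to the `includeUnknownCountriesAndRegions` property when a country named location is created through the Microsoft Graph `namedLocations` API. The following Python sketch builds the request body for `POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations`; the display name and country list are illustrative values, not part of the article.

```python
def country_named_location(display_name, countries, include_unknown=True):
    """Build a countryNamedLocation payload. include_unknown mirrors the
    'Include unknown countries/regions' checkbox described above."""
    return {
        "@odata.type": "#microsoft.graph.countryNamedLocation",
        "displayName": display_name,
        "countriesAndRegions": countries,
        "includeUnknownCountriesAndRegions": include_unknown,
    }

# Example payload: a named location for two regions that also captures
# IP addresses that don't map to any specific country or region.
payload = country_named_location("Example regions", ["US", "GB"])
```

Sending this payload requires an access token with the `Policy.Read.All` and `Policy.ReadWrite.ConditionalAccess` permissions; the HTTP call itself is omitted here.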
-### Define locations
+## Define locations
1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator. 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**.
When you configure the location condition, you can distinguish between:
- Any location - All trusted locations
+- All Network Access locations
- Selected locations ### Any location
Using the trusted IPs section of multifactor authentication's service settings i
If you have these trusted IPs configured, they show up as **MFA Trusted IPs** in the list of locations for the location condition.
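Named locations created from IP ranges are the only kind that can be marked trusted, as noted earlier. A hedged sketch of the corresponding Microsoft Graph `ipNamedLocation` payload (same `namedLocations` endpoint as above; the CIDR value is a documentation example address):

```python
def trusted_ip_location(display_name, cidrs):
    """Build an ipNamedLocation payload marked as trusted.
    Only IP ranges can be marked as a trusted location."""
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": display_name,
        "isTrusted": True,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": c}
            for c in cidrs
        ],
    }

# Example: mark the headquarters egress range as trusted.
loc = trusted_ip_location("Headquarters", ["203.0.113.0/24"])
```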
+### All Network Access locations of my tenant
+
+Organizations with access to Global Secure Access preview features have an additional location in the list, made up of users and devices that comply with your organization's security policies. This location can be used with Conditional Access policies to perform a compliant network check for access to resources. For more information, see the section [Enable Global Secure Access signaling for Conditional Access](../../global-secure-access/how-to-compliant-network.md#enable-global-secure-access-signaling-for-conditional-access).
 ### Selected locations With this option, you can select one or more named locations. For a policy with this setting to apply, a user needs to connect from any of the selected locations. When you choose **Select**, the named network selection control opens, showing the list of named networks. The list also shows whether the network location is marked as trusted.
When you use a cloud hosted proxy or VPN solution, the IP address Azure AD uses
When a cloud proxy is in place, a policy that requires a [hybrid Azure AD joined or compliant device](howto-conditional-access-policy-compliant-device.md#create-a-conditional-access-policy) can be easier to manage. Keeping a list of IP addresses used by your cloud hosted proxy or VPN solution up to date can be nearly impossible.
+We recommend that organizations use Global Secure Access to enable [source IP restoration](../../global-secure-access/how-to-source-ip-restoration.md), avoiding this change in address and simplifying management.
+ ### When is a location evaluated? Conditional Access policies are evaluated when:
active-directory App Only Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-only-access-primer.md
For example, to read a list of all teams created in an organization, you need to
As a developer, you need to configure all required app-only permissions, also referred to as app roles on your application registration. You can configure your app's requested app-only permissions through the Azure portal or Microsoft Graph. App-only access doesn't support dynamic consent, so you can't request individual permissions or sets of permissions at runtime.
-Once you've configured all the permissions your app needs, it must get admin consent [admin consent](../manage-apps/grant-admin-consent.md) for it to access the resources. For example, only users with the global admin role can grant app-only permissions (app roles) for the Microsoft Graph API. Users with other admin roles, like application admin and cloud app admin, are able to grant app-only permissions for other resources.
+Once you've configured all the permissions your app needs, it must get [admin consent](../manage-apps/grant-admin-consent.md) for it to access the resources. For example, only users with the global admin role can grant app-only permissions (app roles) for the Microsoft Graph API. Users with other admin roles, like application admin and cloud app admin, are able to grant app-only permissions for other resources.
Admin users can grant app-only permissions by using the Azure portal or by creating grants programmatically through the Microsoft Graph API. You can also prompt for interactive consent from within your app, but this option isn't preferable since app-only access doesn't require a user.
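The programmatic grant mentioned above goes through the Microsoft Graph `appRoleAssignments` endpoint (`POST https://graph.microsoft.com/v1.0/servicePrincipals/{resource_id}/appRoleAssignments`). A minimal sketch of the request body, assuming you already know the relevant object IDs; the GUIDs below are placeholders, not real identifiers:

```python
def app_role_assignment(principal_id, resource_id, app_role_id):
    """principal_id: object ID of your app's service principal;
    resource_id: object ID of the resource's service principal (for example, Microsoft Graph);
    app_role_id: ID of the app-only permission (app role) being granted."""
    return {
        "principalId": principal_id,
        "resourceId": resource_id,
        "appRoleId": app_role_id,
    }

# Placeholder GUIDs for illustration only.
grant = app_role_assignment(
    principal_id="00000000-0000-0000-0000-000000000001",
    resource_id="00000000-0000-0000-0000-000000000002",
    app_role_id="00000000-0000-0000-0000-000000000003",
)
```

The caller making this request needs an admin role allowed to grant the permission, as described above.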
The example given is a simple illustration of application authorization. The pro
- [Learn how to create and assign app roles in Azure AD](howto-add-app-roles-in-azure-ad-apps.md) - [Overview of permissions in Microsoft Graph](/graph/permissions-overview)-- [Microsoft Graph permissions reference](/graph/permissions-reference)
+- [Microsoft Graph permissions reference](/graph/permissions-reference)
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-ios-shared-devices.md
# Shared device mode for iOS devices > [!IMPORTANT]
-> This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
+> This feature [!INCLUDE [PREVIEW BOILERPLATE](./includes/develop-preview.md)]
Frontline workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to perform their work. These shared devices can present security risks if your users share their passwords or PINs, intentionally or not, to access customer and business data on the shared device.
To take advantage of shared device mode feature, app developers and cloud device
1. **Device administrators** prepare the device to be shared by using a mobile device management (MDM) provider like Microsoft Intune. The MDM pushes the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to the devices and turns on "Shared Mode" for each device through a profile update to the device. This Shared Mode setting is what changes the behavior of the supported apps on the device. This configuration from the MDM provider sets the shared device mode for the device and enables the [Microsoft Enterprise SSO plug-in for Apple devices](apple-sso-plugin.md) that is required for shared device mode. To learn more about SSO extensions, see the [Apple video](https://developer.apple.com/videos/play/tech-talks/301/).
-1. **Application developers** write a single-account app (multiple-account apps aren't supported in shared device mode) to handle the following scenario:
+2. **Application developers** write a single-account app (multiple-account apps aren't supported in shared device mode) to handle the following scenario:
- Sign in a user device-wide through any supported application. - Sign out a user device-wide through any supported application.
active-directory Msal Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-shared-devices.md
Shared device mode is a feature of Azure Active Directory (Azure AD) that allows you to build and deploy applications that support frontline workers and educational scenarios that require shared Android and iOS devices. > [!IMPORTANT]
-> Shared device mode for iOS [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
+> Shared device mode for iOS [!INCLUDE [PREVIEW BOILERPLATE](./includes/develop-preview.md)]
### Supporting multiple users on devices designed for one user
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-register-app.md
# Quickstart: Register an application with the Microsoft identity platform ## Next steps
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
Microsoft recommends that you use the [Microsoft.Identity.Web](https://www.nuget
## Client secrets or client certificates ## Program.cs
The following image shows the possibilities of *Microsoft.Identity.Web* and the
## Client secrets or client certificates ## Modify *Startup.Auth.cs*
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Select the tab for the platform you're interested in:
## Client secrets or client certificates ## Startup.cs
The following image shows the various possibilities of *Microsoft.Identity.Web*
## Client secrets or client certificates ## Startup.Auth.cs
active-directory Tutorial V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md
In this tutorial:
## How this tutorial works
-![Shows how the sample app generated by this tutorial works.](../../../includes/media/develop-guidedsetup-android-intro/android-intro.svg)
+![Screenshot of how the sample app generated by this tutorial works.](./media/guidedsetup-android-intro/android-intro.svg)
The app in this tutorial signs in users and gets data on their behalf. This data is accessed through a protected API (Microsoft Graph API) that requires authorization and is protected by the Microsoft identity platform.
active-directory Tutorial V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md
In this tutorial:
## How tutorial app works
-![Shows how the sample app generated by this tutorial works.](../../../includes/media/develop-guidedsetup-ios-introduction/iosintro.svg)
+![Screenshot of how the sample app generated by this tutorial works.](./media/guidedsetup-ios-introduction/ios-intro.svg)
The app in this tutorial can sign in users and get data from Microsoft Graph on their behalf. This data is accessed via a protected API (Microsoft Graph API in this case) that requires authorization and is protected by the Microsoft identity platform.
The only value you modify is the value assigned to `kClientID` to be your [Appli
Add a new keychain group to your project **Signing & Capabilities**. The keychain group should be `com.microsoft.adalcache` on iOS and `com.microsoft.identity.universalstorage` on macOS.
-![Xcode UI displaying how the keychain group should be set up.](../../../includes/media/develop-guidedsetup-ios-introduction/iosintro-keychainShare.png)
+![Xcode UI displaying how the keychain group should be set up.](./media/guidedsetup-ios-introduction/ios-intro-keychain-share.png)
## For iOS only, configure URL schemes
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
sampleApp/
## How the sample app works
-![Diagram that shows how the sample app generated by this tutorial works.](media/develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
+![Diagram that shows how the sample app generated by this tutorial works.](./media/guidedsetup-javascriptspa-introduction/javascript-spa-intro.svg)
The application that you create in this tutorial enables a JavaScript SPA to query the Microsoft Graph API. This querying can also work for a web API that's set up to accept tokens from the Microsoft identity platform. After the user signs in, the SPA requests an access token and adds it to the HTTP requests through the authorization header. The SPA will use this token to acquire the user's profile and emails via the Microsoft Graph API.
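The Authorization-header pattern described above can be sketched as follows (in Python rather than the tutorial's JavaScript, and with a placeholder token standing in for the one MSAL acquires):

```python
from urllib.request import Request

GRAPH_ME = "https://graph.microsoft.com/v1.0/me"

def bearer_request(url, access_token):
    """Attach the MSAL-acquired access token to the HTTP Authorization header."""
    return Request(url, headers={"Authorization": f"Bearer {access_token}"})

# "<access-token-from-msal>" is a placeholder; the SPA obtains the real
# token from MSAL after the user signs in.
req = bearer_request(GRAPH_ME, "<access-token-from-msal>")
```

The same header shape works for any web API configured to accept tokens from the Microsoft identity platform.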
Now that you've set up the code, you need to test it:
After the browser loads your *index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform. ### Provide consent for application access The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in. Select **Accept** to continue. ### View application results After you sign in, you can select **Read More** under your displayed name. Your user profile information is returned in the displayed Microsoft Graph API response. ### More information about scopes and delegated permissions
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
In this tutorial:
## How the sample app generated by this guide works
-![Shows how the sample app generated by this tutorial works](./media/develop-guidedsetup-windesktop-intro/windesktophowitworks.svg)
+![Screenshot of how the sample app generated by this tutorial works.](./media/guidedsetup-windesktop-intro/win-desktop-how-it-works.svg)
The sample application that you create with this guide enables a Windows Desktop application that queries the Microsoft Graph API or a web API that accepts tokens from a Microsoft identity-platform endpoint. For this scenario, you add a token to HTTP requests via the Authorization header. The Microsoft Authentication Library (MSAL) handles token acquisition and renewal.
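The acquisition-and-renewal flow MSAL handles can be sketched like this (in Python for illustration, though the tutorial itself uses MSAL.NET; the `app` parameter is assumed to behave like `msal.PublicClientApplication`):

```python
def should_use_silent(accounts):
    """MSAL can reuse or silently renew a token only when a cached account exists."""
    return len(accounts) > 0

def acquire_token(app, scopes):
    """Try the token cache first (silent); fall back to interactive sign-in.
    `app` is assumed to expose get_accounts(), acquire_token_silent(),
    and acquire_token_interactive(), as msal.PublicClientApplication does."""
    accounts = app.get_accounts()
    if should_use_silent(accounts):
        result = app.acquire_token_silent(scopes, account=accounts[0])
        if result:  # cached token reused, or renewed when close to expiry
            return result
    return app.acquire_token_interactive(scopes)
```

This is why pressing the _Call Microsoft Graph API_ button repeatedly reuses the same token until MSAL decides it's time to renew it.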
private void DisplayBasicTokenInfo(AuthenticationResult authResult)
In addition to the access token that's used to call the Microsoft Graph API, after the user signs in, MSAL also obtains an ID token. This token contains a small subset of information that's pertinent to users. The `DisplayBasicTokenInfo` method displays the basic information that's contained in the token. For example, it displays the user's display name and ID, as well as the token expiration date and the string representing the access token itself. You can select the _Call Microsoft Graph API_ button multiple times and see that the same token was reused for subsequent requests. You can also see the expiration date being extended when MSAL decides it's time to renew the token. ## Next steps
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
+
+ Title: New name for Azure Active Directory
+description: Learn how we're unifying the Microsoft Entra product family and how we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
++++++ Last updated : 07/11/2023+++
+# Customer intent: As a new or existing customer, I want to learn more about the new name for Azure Active Directory (Azure AD) and understand the impact the name change may have on other products, new or existing license(s), what I need to do, and where I can learn more about Microsoft Entra products.
++
+# New name for Azure Active Directory
+
+To unify the [Microsoft Entra](/entra) product family, reflect the progression to modern multicloud identity security, and simplify secure access experiences for all, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
+
+## No action is required from you
+
+If you're using Azure AD today or are currently deploying Azure AD in your organization, you can continue to use the service without interruption. All existing deployments, configurations, and integrations will continue to function as they do today without any action from you.
+
+You can continue to use familiar Azure AD capabilities that you can access through the Azure portal, Microsoft 365 admin center, and the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+## Only the name is changing
+
+All features and capabilities are still available in the product. Licensing, terms, service-level agreements, product certifications, support, and pricing remain the same.
+
+Service plan display names will change on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 will be the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. Microsoft Entra ID, currently known as Azure AD, will continue to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra).
++
+During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of "Microsoft Entra" or "Azure Active Directory/Microsoft Entra ID."
+
+## Identity developer and DevOps experiences aren't impacted by the rename
+
+To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
+
+Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts.
+
+Naming is also not changing for:
+
+- [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Use to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API.
+- [Microsoft Graph](/graph) - Get programmatic access to organization, user, and application data stored in Microsoft Entra ID.
+- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.
+- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory," and all related Windows Server identity services associated with Active Directory.
+- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services), [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services), and the product name "Active Directory" or any corresponding features.
+- [Azure Active Directory B2C](../../active-directory-b2c/index.yml) will continue to be available as an Azure service.
+- [Any deprecated or retired functionality, feature, or service](what-is-deprecated.md) of Azure AD.
+
+## Frequently asked questions
+
+### When is the name change happening?
+
+The name change will start appearing across Microsoft experiences after a 30-day notification period, which started July 11, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences to be completed by the end of 2023.
+
+### Why is the name being changed?
+
+As part of our ongoing commitment to simplify secure access experiences for everyone, the renaming of Azure AD to Microsoft Entra ID is designed to make it easier to use and navigate the unified and expanded Microsoft Entra product family.
+
+### What is Microsoft Entra?
+
+Microsoft Entra helps you protect all identities and secure network access everywhere. The expanded product family includes:
+
+| Identity and access management | New identity categories | Network access |
+||||
+| [Microsoft Entra ID (currently known as Azure AD)](../index.yml) | [Microsoft Entra Verified ID](../verifiable-credentials/index.yml) | [Microsoft Entra Internet Access](https://aka.ms/GlobalSecureAccessDocs) |
+| [Microsoft Entra ID Governance](../governance/index.yml) | [Microsoft Entra Permissions Management](../cloud-infrastructure-entitlement-management/index.yml) | [Microsoft Entra Private Access](https://aka.ms/GlobalSecureAccessDocs) |
+| [Microsoft Entra External ID](../external-identities/index.yml) | [Microsoft Entra Workload ID](../workload-identities/index.yml) | |
+
+### Where can I manage Microsoft Entra ID?
+
+You can manage Microsoft Entra ID and all other Microsoft Entra solutions in the [Microsoft Entra admin center](https://entra.microsoft.com) or [Azure portal](https://portal.azure.com).
+
+### What are the display names for service plans and SKUs?
+
+Licensing, pricing, and functionality aren't changing. Display names will be updated October 1, 2023 as follows.
+
+| **Old display name for service plan** | **New display name for service plan** |
+|||
+| Azure Active Directory Free | Microsoft Entra ID Free |
+| Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| Azure Active Directory for education | Microsoft Entra ID for education |
+
+| **Old display name for product SKU** | **New display name for product SKU** |
+|||
+| Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| Azure Active Directory Premium P1 for students | Microsoft Entra ID P1 for students |
+| Azure Active Directory Premium P1 for faculty | Microsoft Entra ID P1 for faculty |
+| Azure Active Directory Premium P1 for government | Microsoft Entra ID P1 for government |
+| Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| Azure Active Directory Premium P2 for students | Microsoft Entra ID P2 for students |
+| Azure Active Directory Premium P2 for faculty | Microsoft Entra ID P2 for faculty |
+| Azure Active Directory Premium P2 for government | Microsoft Entra ID P2 for government |
+| Azure Active Directory F2 | Microsoft Entra ID F2 |
+
+### Is Azure AD going away?
+
+No, only the name Azure AD is going away. Capabilities remain the same.
+
+### What will happen to the Azure AD capabilities and features like App Gallery or Conditional Access?
+
+Feature names are changing to use Microsoft Entra branding. For example:
+
+- Azure AD tenant -> Microsoft Entra tenant
+- Azure AD account -> Microsoft Entra account
+- Azure AD joined -> Microsoft Entra joined
+- Azure AD Conditional Access -> Microsoft Entra Conditional Access
+
+All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption.
+
+### Are licenses changing? Are there any changes to pricing?
+
+No. Prices, terms, and service-level agreements (SLAs) remain the same. Pricing details are available at <https://www.microsoft.com/security/business/microsoft-entra-pricing>.
+
+### Will Microsoft Entra ID be available as a free service with an Azure subscription?
+
+Customers currently using Azure AD Free as part of their Azure, Microsoft 365, Dynamics 365, Teams, or Intune subscription will continue to have access to the same capabilities. It will be called Microsoft Entra ID Free. Get the free version at <https://www.microsoft.com/security/business/microsoft-entra-pricing>.
+
+### What's changing for Microsoft 365 or Azure AD for Office 365?
+
+Microsoft Entra ID, currently known as Azure AD, will continue to be available within Microsoft 365 enterprise and business premium offers. Office 365 was renamed Microsoft 365 in 2022. Unique capabilities in the Azure AD for Office 365 apps (such as company branding and self-service sign-in activity search) will now be available to all Microsoft customers in Microsoft Entra ID Free.
+
+### What's changing for Microsoft 365 E3?
+
+There are no changes to the identity features and functionality available in Microsoft 365 E3. Microsoft 365 E3 includes Microsoft Entra ID P1, currently known as Azure AD Premium P1.
+
+### What's changing for Microsoft 365 E5?
+
+In addition to the capabilities they already have, Microsoft 365 E5 customers will also get access to new identity protection capabilities like token protection, Conditional Access based on GPS location, and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra ID P2, currently known as Azure AD Premium P2.
+
+### How and when are customers being notified?
+
+The name changes are publicly announced as of July 11, 2023.
+
+Banners, alerts, and message center posts will notify users of the name change. These notifications will appear on the tenant overview page; in the Azure, Microsoft 365, and Microsoft Entra admin center portals; and on Microsoft Learn.
+
+### What if I use the Azure AD name in my content or app?
+
+We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the following section ([Azure AD name changes and exceptions](#azure-ad-name-changes-and-exceptions)) to make the name change in your content and product experiences by the end of 2023.
+
+## Azure AD name changes and exceptions
+
+We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to stay current with the new naming guidance by updating copy by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces.
+
+### Product name
+
+Replace the product name "Azure Active Directory" or "Azure AD" or "AAD" with Microsoft Entra ID.
+
+*Microsoft Entra* is the correct name for the family of identity and network access solutions, one of which is *Microsoft Entra ID.*
+
+### Logo/icon
+
+Azure AD is becoming Microsoft Entra ID, and the product icon is also being updated. Work with your Microsoft partner organization to obtain the new product icon.
+
+### Feature names
+
+Capabilities or services formerly known as "Azure Active Directory &lt;feature name&gt;" or "Azure AD &lt;feature name&gt;" will be branded as Microsoft Entra product family features. For example:
+
+- "Azure AD Conditional Access" is becoming "Microsoft Entra Conditional Access"
+- "Azure AD single sign-on" is becoming "Microsoft Entra single sign-on"
+- "Azure AD tenant" is becoming "Microsoft Entra tenant"
+
+### Exceptions to Azure AD name change
+
+Products or features that are being deprecated aren't being renamed. These products or features include:
+
+- Azure AD Authentication Library (ADAL), replaced by [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
+- Azure AD Graph, replaced by [Microsoft Graph](/graph)
+- Azure Active Directory PowerShell for Graph (Azure AD PowerShell), replaced by [Microsoft Graph PowerShell](/powershell/microsoftgraph)
+
+Names that don't have "Azure AD" also aren't changing. These products or features include Active Directory Federation Services (AD FS), Microsoft identity platform, and Windows Server Active Directory Domain Services (AD DS).
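As a rough aid for content updates, the renames and exceptions above can be expressed as an ordered replacement table. This is an illustrative helper, not official tooling, and the exception handling is a simplification:

```python
# Deprecated or unchanged names are shielded so the generic "Azure AD"
# rule doesn't mangle them (for example, ADAL keeps its name).
EXCEPTIONS = [
    "Azure AD Authentication Library",              # ADAL, deprecated
    "Azure AD Graph",                               # deprecated
    "Azure Active Directory PowerShell for Graph",  # deprecated
    "Azure AD PowerShell",                          # deprecated
    "Azure Active Directory B2C",                   # remains an Azure service
]

# More specific phrases come before the generic product-name rules.
RENAMES = [
    ("Azure AD Conditional Access", "Microsoft Entra Conditional Access"),
    ("Azure AD single sign-on", "Microsoft Entra single sign-on"),
    ("Azure AD tenant", "Microsoft Entra tenant"),
    ("Azure AD joined", "Microsoft Entra joined"),
    ("Azure AD account", "Microsoft Entra account"),
    ("Azure Active Directory", "Microsoft Entra ID"),
    ("Azure AD", "Microsoft Entra ID"),
]

def apply_renames(text):
    """Shield exceptions with placeholders, apply the renames, then restore."""
    for i, exc in enumerate(EXCEPTIONS):
        text = text.replace(exc, f"\x00{i}\x00")
    for old, new in RENAMES:
        text = text.replace(old, new)
    for i, exc in enumerate(EXCEPTIONS):
        text = text.replace(f"\x00{i}\x00", exc)
    return text
```

A real content pipeline would need more context awareness (code identifiers, URLs, and API names stay as-is, per the developer guidance above).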
+
+End users shouldn't be exposed to the Azure AD or Microsoft Entra ID name. For sign-ins and account user experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md).
+
+## Next steps
+
+- [Stay up-to-date with what's new in Azure AD/Microsoft Entra ID](whats-new.md)
+- [Get started using Microsoft Entra ID at the Microsoft Entra admin center](https://entra.microsoft.com/)
+- [Learn more about Microsoft Entra with content from Microsoft Learn](/entra)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend the usage of domain joined Windows Server 2022.-- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later. We recommend using domain-joined Windows Server 2022. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
+- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported. .NET version 4.8 and greater offers the best accessibility compliance.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration.
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/whatis-azure-ad-connect-v2.md
Azure AD Connect cloud sync is the future of synchronization for Microsoft. It
> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
-Before moving the Azure AD Connect V2.0, you should consider moving to cloud sync. You can see if cloud sync is right for you, by accessing the [Check sync tool](https://aka.ms/M365Wizard) from the portal or via the link provided.
+Before moving the Azure AD Connect V2.0, you should consider moving to cloud sync. You can see if cloud sync is right for you, by accessing the [Check sync tool](https://aka.ms/EvaluateSyncOptions) from the portal or via the link provided.
For more information, see [What is cloud sync?](../cloud-sync/what-is-cloud-sync.md)
active-directory Concept Identity Protection Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md
description: Learn how the Security overview gives you an insight into your orga
- Previously updated : 08/22/2022+ Last updated : 07/07/2023
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
You can allow users to self-remediate their sign-in risks and user risks by sett
Here are the prerequisites on users before risk-based policies can be applied to them to allow self-remediation of risks: - To perform MFA to self-remediate a sign-in risk:
- - The user must have registered for Azure AD Multi-Factor Authentication.
+ - The user must have registered for Azure AD Multifactor Authentication.
- To perform secure password change to self-remediate a user risk:
- - The user must have registered for Azure AD Multi-Factor Authentication.
+ - The user must have registered for Azure AD Multifactor Authentication.
- For hybrid users that are synced from on-premises to cloud, password writeback must have been enabled on them. If a risk-based policy is applied to a user during sign-in before the above prerequisites are met, then the user is blocked. This block action is because they aren't able to perform the required access control, and admin intervention is required to unblock the user.
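Since a user who hasn't met these prerequisites gets blocked by a risk-based policy, it can help to check registration status ahead of time. The following is a minimal sketch using hypothetical data shaped like the Microsoft Graph `userRegistrationDetails` report (`GET /v1.0/reports/authenticationMethods/userRegistrationDetails`); the user records here are invented for illustration.

```python
# Sketch (hypothetical data): find users who do NOT meet the MFA-registration
# prerequisite for self-remediation and would be blocked by a risk-based policy.
# In practice, this data could come from the Microsoft Graph
# userRegistrationDetails report (isMfaRegistered property).
users = [
    {"userPrincipalName": "alice@contoso.com", "isMfaRegistered": True},
    {"userPrincipalName": "bob@contoso.com", "isMfaRegistered": False},
]

not_ready = [u["userPrincipalName"] for u in users if not u["isMfaRegistered"]]
print(not_ready)  # users who can't self-remediate a sign-in risk yet
```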
active-directory Id Protection Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/id-protection-dashboard.md
+
+ Title: Microsoft Entra ID Protection overview preview
+description: Learn how the Microsoft Entra ID Protection overview provides a view into security posture.
+++++ Last updated : 07/07/2023++++++++
+# Microsoft Entra ID Protection dashboard (preview)
+
+Microsoft Entra ID Protection prevents identity compromises by detecting identity attacks and reporting risks. It enables customers to protect their organizations by monitoring and investigating risks, and by configuring risk-based access policies that guard sensitive access and auto-remediate risks.
+
+Our new dashboard helps customers better analyze their security posture, understand how well they're protected, identify vulnerabilities, and perform recommended actions.
+
+[![Screenshot showing the new Microsoft Entra ID Protection overview dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard.png)
+
+This dashboard is designed to empower organizations with rich insights and actionable recommendations tailored to your tenant. This information provides a better view into your organization's security posture and allows you to enable effective protections accordingly. You have access to key metrics, attack graphics, a map highlighting risky locations, top recommendations to enhance security posture, and recent activities.
+
+## Prerequisites
+
+To access this new dashboard, you need:
+
+- Azure Active Directory Free, Azure AD Premium P1, or Azure AD Premium P2 licenses for your users.
+- Users must have at least the [Security Reader](../roles/permissions-reference.md#security-reader) role assigned.
+- To view a comprehensive list of recommendations and select the recommended action links, you need Azure AD Premium P2 licenses.
+
+## Access the dashboard
+
+To access the new dashboard:
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Browse to **Identity** > **Protection** > **Identity Protection** > **Dashboard (Preview)**.
+
+### Metric cards
+
+As you implement more security measures, such as risk-based policies, your tenant's protection strengthens. We now provide four key metrics to help you understand the effectiveness of the security measures you have in place.
+
+[![Screenshot showing the metric graphs in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-metrics.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-metrics.png)
+
+| Metric | Metric definition | Refresh frequency | Where to view detail |
+| | | | |
+| Number of attacks blocked | Number of attacks blocked for this tenant on each day. <br><br> An attack is considered blocked if the risky sign-in was interrupted by any access policy. The access control required by the policy should block the attacker from signing in, therefore blocking the attack. | Every 24 hours. | View the risk detections that determined the attacks in the **Risk detections report**, filter "Risk state" by: <br><br>- **Remediated** <br>- **Dismissed** <br>- **Confirmed safe** |
+| Number of users protected | Number of users in this tenant whose risk state changed from **At risk** to **Remediated** or **Dismissed** on each day. <br><br> A **Remediated** risk state indicates that the user has self-remediated their user risk by completing MFA or secure password change, and their account is therefore protected. <br><br> A **Dismissed** risk state indicates that an admin has dismissed the user's risk because they identified the user's account to be safe. | Every 24 hours. | View users protected in the **Risky users report**, filter "Risk state" by: <br><br>- **Remediated** <br>- **Dismissed** |
+| Mean time your users take to self-remediate their risks | Average time for the risk state of risky users in your tenant to change from **At risk** to **Remediated**. <br><br> A user's risk state changes to **Remediated** when they self-remediate their user risk through MFA or secure password change. <br><br> To reduce the self-remediation time in your tenant, deploy risk-based Conditional Access policies. | Every 24 hours. | View remediated users in the Risky users report, filter "Risk state" by: <br><br>- **Remediated** |
+| Number of new high-risk users detected | Number of new risky users with risk level **High** detected on each day. | Every 24 hours. | View high-risk users in the Risky users report, filter risk level by: <br><br>- **High** |
+
+Data aggregation for the following three metrics started on June 22, 2023, so these metrics are available from that date. We're working on updating the graph to reflect that.
+
+- Number of attacks blocked
+- Number of users protected
+- Mean time to remediate user risk
+
+The graphs provide a rolling 12-month window of data.
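To make the "mean time to self-remediate" metric concrete, here's a minimal sketch that computes it from a hypothetical export of risk events. The field names (`detected`, `remediated`) and timestamps are invented for illustration; the dashboard itself computes this server-side.

```python
from datetime import datetime

# Sketch (hypothetical export): mean time between when a user's risk was
# detected and when their risk state changed to "Remediated".
events = [
    {"detected": "2023-06-22T08:00:00", "remediated": "2023-06-22T10:00:00"},
    {"detected": "2023-06-23T09:00:00", "remediated": "2023-06-23T15:00:00"},
]

def mean_hours_to_remediate(rows):
    deltas = [
        datetime.fromisoformat(r["remediated"]) - datetime.fromisoformat(r["detected"])
        for r in rows
    ]
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

print(mean_hours_to_remediate(events))  # 4.0 hours
```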
+
+### Attack graphic
+
+To help you better understand your risk exposure, we're introducing an innovative Attack Graphic that displays common identity-based attack patterns detected for your tenant. The attack patterns are represented by MITRE ATT&CK techniques and are determined by our advanced risk detections. For more information, see the section [Risk detection type to MITRE attack type mapping](#risk-detection-type-to-mitre-attack-type-mapping).
+
+[![Screenshot showing the attack graphic in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-attack-graphic.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-attack-graphic.png)
+
+#### What is considered an attack in Microsoft Entra ID Protection?
+
+Each type of [risk detection](concept-identity-protection-risks.md#what-are-risk-detections) corresponds to a MITRE ATT&CK technique. When risks are detected during a sign-in, risk detections are generated, and the sign-in becomes a risky sign-in. Attacks are then identified based on the risk detections. Refer to the following table for the mapping between Microsoft Entra ID Protection's risk detections and attacks as categorized by MITRE ATT&CK techniques.
+
+#### How to interpret the attack graphic
+
+The graphic presents the attack types that impacted your tenant over the past 30 days, and whether they were blocked during sign-in. On the left side, you see the volume of each attack type. On the right, the numbers of blocked and yet-to-be-remediated attacks are displayed. The graph updates every 24 hours, so the displayed volumes may not exactly mirror the latest detection volumes in the Risk detections report.
+
+- Blocked: An attack is classified as blocked if the associated risky sign-in was interrupted by an access policy, like requiring multifactor authentication. This action prevents the attacker's sign-in and blocks the attack.
+- Not remediated: Successful risky sign-ins that weren't interrupted need remediation. Therefore, risk detections associated with these risky sign-ins also require remediation. You can view these sign-ins and associated risk detections in the Risky sign-ins report by filtering with the "At risk" risk state.
+
+#### Where can I view the attacks?
+
+To view the risk detections that identified the attacks:
+
+1. Refer to the table in the section [Risk detection type to MITRE attack type mapping](#risk-detection-type-to-mitre-attack-type-mapping). Look for the detection types corresponding to the attack type you're interested in.
+1. Go to the Risk detections report.
+1. Use the **Detection type** filter and select the risk detection types identified in step 1. Apply the filter to view detections for the attack type only.
+
+We're enhancing our Risk detections report to include an 'Attack type' filter and display the associated attack type for each detection type. This feature makes it easier for you to view detections corresponding to specific attack types.
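The same filtering can be done programmatically against the Microsoft Graph `identityProtection/riskDetections` endpoint. Below is a minimal sketch that builds the `$filter` expression; the `riskEventType` values shown are examples, and a real request would need URL-encoding and an authenticated client.

```python
# Sketch: build a Microsoft Graph $filter selecting only the detection types
# that map to the attack you're interested in (see the mapping table).
GRAPH = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections"

def detections_query(risk_event_types):
    # e.g. "riskEventType eq 'passwordSpray' or riskEventType eq 'leakedCredentials'"
    clauses = " or ".join(f"riskEventType eq '{t}'" for t in risk_event_types)
    return f"{GRAPH}?$filter={clauses}"

url = detections_query(["passwordSpray", "leakedCredentials"])
print(url)
```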
+
+#### Apply filters
+
+Two filters can be applied to the graph:
+
+- **Attack Types**: This filter allows you to view selected attack patterns only.
+- **Attacks Handled**: Use this filter to view either blocked or non-remediated attacks separately.
+
+### Risk detection type to MITRE attack type mapping
+
+| Microsoft Entra ID Protection risk detection type | MITRE ATT&CK technique mapping | Attack display name |
+| | | |
+| Unfamiliar Sign-in Properties | T1078.004 | Access using a valid account (Detected at Sign-In) |
+| Impossible Travel | T1078 | Access using a valid account (Detected Offline) |
+| Suspicious Sign-ins | T1078 | Access using a valid account (Detected Offline) |
+| MCAS New Country | T1078 | Access using a valid account (Detected Offline) |
+| MCAS Anonymous IP | T1078 | Access using a valid account (Detected Offline) |
+| Verified Threat Actor IP | T1078 | Access using a valid account (Detected Offline) |
+| Suspicious browser | T1078 | Access using a valid account (Detected Offline) |
+| Azure AD threat intelligence (user) | T1078 | Access using a valid account (Detected Offline) |
+| Azure AD threat intelligence (sign-in) | T1078 | Access using a valid account (Detected Offline) |
+| Anomalous User activity | T1098 | Account Manipulation |
+| Password spray | T1110.003 | Brute Force: Password Spraying |
+| Mass access to sensitive files | TA0009 | Collection |
+| MCAS Manipulation | T1114.003 | Email Collection/Hide Artifacts |
+| MCAS Suspicious inbox forwarding | T1114.003 | Email Collection/Hide Artifacts |
+| Token Issuer Anomaly | T1606.002 | Forge Web Credentials: SAML Tokens |
+| Leaked Credentials | T1589.001 | Gather Victim Identity Info |
+| Anonymous IP address | T1090 | Obfuscation/Access using proxy |
+| Malicious IP | T1090 | Obfuscation/Access using proxy |
+| Possible attempt to access Primary Refresh Token (PRT) | T1528 | Steal Application Token |
+| Anomalous Token | T1539 | Steal Web Session Cookie/Token Theft |
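If you process detections in scripts, the table above translates directly into a lookup. This sketch includes only a few representative rows from the table; extend it as needed.

```python
# Lookup from detection type to MITRE ATT&CK technique, taken from the
# mapping table above (representative rows only).
DETECTION_TO_MITRE = {
    "Unfamiliar Sign-in Properties": "T1078.004",
    "Password spray": "T1110.003",
    "Leaked Credentials": "T1589.001",
    "Anomalous Token": "T1539",
}

print(DETECTION_TO_MITRE["Password spray"])  # T1110.003
```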
+
+### Map
+
+A map is provided to display the country location of the risky sign-ins in your tenant. The size of the bubble reflects the volume of risky sign-ins at that location. Hovering over a bubble shows a call-out box with the country name and the number of risky sign-ins from that place.
+
+[![Screenshot showing the map graphic in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-map.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-map.png)
+
+It contains the following elements:
+
+- **Date range**: choose the date range and view risky sign-ins from within that time range on the map. Available values are: last 24 hours, last seven days, and last month.
+- **Risk level**: choose the risk level of the risky sign-ins to view. Available values are: High, Medium, Low.
+- **Risky Locations** count:
+  - Definition: The number of locations from which your tenant's risky sign-ins originated.
+ - The date range and risk level filter apply to this count.
+ - Selecting this count takes you to the Risky sign-ins report filtered by the selected date range and risk level.
+- **Risky Sign-ins** count:
+ - Definition: The number of total risky sign-ins with the selected risk level in the selected date range.
+ - The date range and risk level filter apply to this count.
+ - Selecting this count takes you to the Risky sign-ins report filtered by the selected date range and risk level.
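The two counts above can be derived from sign-in data as follows. This is a sketch over hypothetical records (the `country` and `riskLevel` fields are invented for illustration, roughly mirroring the Risky sign-ins report):

```python
from collections import Counter

# Sketch (hypothetical data): derive the map's two counts for one risk level.
sign_ins = [
    {"country": "US", "riskLevel": "high"},
    {"country": "US", "riskLevel": "high"},
    {"country": "NL", "riskLevel": "high"},
    {"country": "US", "riskLevel": "low"},
]

high = [s for s in sign_ins if s["riskLevel"] == "high"]
per_country = Counter(s["country"] for s in high)   # bubble sizes per location
print(len(per_country), len(high))                  # risky locations, risky sign-ins
```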
+
+### Recommendations
+
+We've also introduced new Microsoft Entra ID Protection recommendations for customers to configure their environment to increase their security posture. These recommendations are based on the attacks detected in your tenant over the past 30 days and guide your security staff with recommended actions to take.
+
+[![Screenshot showing recommendations in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recommendations.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recommendations.png)
+
+Commonly seen attacks, like password spray, leaked credentials in your tenant, and mass access to sensitive files, can indicate a potential breach. In the previous screenshot, the example reads **Identity Protection detected at least 20 users with leaked credentials in your tenant**; the recommended action in this case would be to create a Conditional Access policy requiring secure password reset for risky users.
+
+In the recommendations component on our new dashboard, customers see:
+
+- Up to three recommendations if specific attacks occur in their tenant.
+- Insight into the impact of the attack.
+- Direct links to take appropriate actions for remediation.
+
+Customers with P2 licenses can view a comprehensive list of recommendations that provide insights with actions. When **View All** is selected, a panel opens showing more recommendations that were triggered based on the attacks in their environment.
+
+### Recent activities
+
+Recent Activity provides a summary of recent risk-related activities in your tenant. Possible activity types are:
+
+1. Attack Activity
+1. Admin Remediation Activity
+1. Self-Remediation Activity
+1. New High-Risk Users
+
+[![Screenshot showing recent activities in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)
+
+## Known issues
+
+Depending on the configuration of your tenant, there may or may not be recommendations or recent activities on your dashboard. We're working on a better **no new recommendations or recent activities** view to enhance the experience.
+
+## Next steps
+
+- [Plan a deployment](how-to-deploy-identity-protection.md)
+- [What are risks?](concept-identity-protection-risks.md)
+- [How can users self-remediate their risks through risk-based access policies?](howto-identity-protection-remediate-unblock.md#self-remediation-with-risk-based-policy)
active-directory Directory Services Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directory-services-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Directory Services'
+description: Learn how to configure single sign-on between Azure Active Directory and Directory Services.
++++++++ Last updated : 07/10/2023+++
+# Tutorial: Azure AD SSO integration with Directory Services
+
+In this tutorial, you'll learn how to integrate Directory Services with Azure Active Directory (Azure AD). When you integrate Directory Services with Azure AD, you can:
+
+* Control in Azure AD who has access to Directory Services.
+* Enable your users to be automatically signed-in to Directory Services with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Directory Services single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Directory Services supports **SP and IDP** initiated SSO.
+* Directory Services supports **Just In Time** user provisioning.
+* Directory Services supports [Automated user provisioning](open-text-directory-services-provisioning-tutorial.md).
+
+## Add Directory Services from the gallery
+
+To configure the integration of Directory Services into Azure AD, you need to add Directory Services from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Directory Services** in the search box.
+1. Select **Directory Services** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Directory Services
+
+Configure and test Azure AD SSO with Directory Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Directory Services.
+
+To configure and test Azure AD SSO with Directory Services, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Directory Services SSO](#configure-directory-services-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Directory Services test user](#create-directory-services-test-user)** - to have a counterpart of B.Simon in Directory Services that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Directory Services** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | Identifier |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | Sign-on URL |
+ ||
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
+ | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
+ |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Directory Services support team](mailto:support@opentext.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
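As a concrete illustration of the patterns above, the snippet below substitutes placeholder values into one of them. The hostname, tenant name, and tenant ID used here are hypothetical; use the real values you receive from the Directory Services support team.

```python
# Sketch (hypothetical values): fill in one of the URL patterns above.
# HOSTNAME.DOMAIN.com, OTDS_TENANT, and TENANTID are placeholders.
pattern = "https://{host}/{tenant}/{tenant_id}/otdsws/login"
url = pattern.format(host="otds.example.com", tenant="mytenant", tenant_id="1234")
print(url)
```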
+
+1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Directory Services.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Directory Services**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Directory Services SSO
+
+To configure single sign-on on **Directory Services** side, you need to send the **App Federation Metadata Url** to [Directory Services support team](mailto:support@opentext.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Directory Services test user
+
+In this section, a user called B.Simon is created in Directory Services. Directory Services supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Directory Services, a new one is created after authentication.
+
+> [!NOTE]
+> Directory Services also supports automatic user provisioning. You can find more details [here](./open-text-directory-services-provisioning-tutorial.md) on how to configure automatic user provisioning.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Directory Services Sign-on URL, where you can initiate the login flow.
+
+* Go to Directory Services Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Directory Services for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Directory Services tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the Directory Services for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Directory Services you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Informatica Intelligent Data Management Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/informatica-intelligent-data-management-cloud-tutorial.md
Previously updated : 05/29/2023 Last updated : 07/10/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. Click **Upload metadata file**.
+ 1. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<ORG_ID>.<REGION>.informaticacloud.com`
- ![Screenshot shows how to upload metadata file.](common/upload-metadata.png "Folder")
-
- b. Click on **folder logo** to select the metadata file and click **Upload**.
-
- ![Screenshot shows to choose metadata file.](common/browse-upload-metadata.png "File")
-
- c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in **Basic SAML Configuration** section.
-
- > [!NOTE]
- > You will get the **Service Provider metadata file** from the **Configure Informatica Intelligent Data Management Cloud SSO** section, which is explained later in the tutorial. If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+ 1. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<REGION>.informaticacloud.com/identity-service/acs/<ORG_ID>`
1. If you wish to configure the application in **SP** initiated mode, then perform the following step: In the **Sign on URL** textbox, type a URL using the following pattern:
- `https://<INFORMATICA_CLOUD_REGION>.informaticacloud.com/ma/sso/<INFORMATICA_CLOUD_ORG_ID>`
+ `https://<REGION>.informaticacloud.com/ma/sso/<ORG_ID>`
> [!NOTE]
- > This value is not real. Update this value with the actual Sign on URL. Contact [Informatica Intelligent Data Management Cloud support team](mailto:support@informatica.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Informatica Intelligent Data Management Cloud support team](mailto:support@informatica.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. Click **Save** to save the details.
- 1. **Download Service Provider Metadata** xml file and upload in the **Basic SAML Configuration** section in the Azure portal.
- ### Create Informatica Intelligent Data Management Cloud test user In this section, a user called B.Simon is created in Informatica Intelligent Data Management Cloud. Informatica Intelligent Data Management Cloud supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Informatica Intelligent Data Management Cloud, a new one is commonly created after authentication.
active-directory Opentext Directory Services Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-directory-services-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with OpenText Directory Services'
-description: Learn how to configure single sign-on between Azure Active Directory and OpenText Directory Services.
-------- Previously updated : 11/21/2022---
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with OpenText Directory Services
-
-In this tutorial, you'll learn how to integrate OpenText Directory Services with Azure Active Directory (Azure AD). When you integrate OpenText Directory Services with Azure AD, you can:
-
-* Control in Azure AD who has access to OpenText Directory Services.
-* Enable your users to be automatically signed-in to OpenText Directory Services with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* OpenText Directory Services single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* OpenText Directory Services supports **SP and IDP** initiated SSO.
-* OpenText Directory Services supports **Just In Time** user provisioning.
-* OpenText Directory Services supports [Automated user provisioning](open-text-directory-services-provisioning-tutorial.md).
-
-## Add OpenText Directory Services from the gallery
-
-To configure the integration of OpenText Directory Services into Azure AD, you need to add OpenText Directory Services from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **OpenText Directory Services** in the search box.
-1. Select **OpenText Directory Services** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for OpenText Directory Services
-
-Configure and test Azure AD SSO with OpenText Directory Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in OpenText Directory Services.
-
-To configure and test Azure AD SSO with OpenText Directory Services, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure OpenText Directory Services SSO](#configure-opentext-directory-services-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create OpenText Directory Services test user](#create-opentext-directory-services-test-user)** - to have a counterpart of B.Simon in OpenText Directory Services that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **OpenText Directory Services** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using one of the following patterns:
-
- | Identifier |
- ||
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
-
- b. In the **Reply URL** text box, type a URL using one of the following patterns:
-
- | Reply URL |
- ||
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using one of the following patterns:
-
- | Sign-on URL |
- ||
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/otdsws/login` |
- | `https://<HOSTNAME.DOMAIN.com>/otdsws/<OTDS_TENANT>/<TENANTID>/login` |
- | `https://<HOSTNAME.DOMAIN.com>/<OTDS_TENANT>/<TENANTID>/login` |
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [OpenText Directory Services Client support team](mailto:support@opentext.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
-
- ![The Certificate download link](common/copy-metadataurl.png)
-
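The Identifier, Reply URL, and Sign-on URL tables above all share the same set of placeholders. As a sanity check, here is a small shell sketch that expands one of the documented patterns — the values are purely hypothetical, so substitute the ones the OpenText support team provides:

```shell
# Hypothetical values for illustration only -- use the ones your
# OpenText Directory Services support team provides.
OTDS_HOST="otds.example.com"
OTDS_TENANT="otdsten"
OTDS_TENANT_ID="c0ffee00-0000-0000-0000-000000000001"

# One of the four documented URL patterns; the same pattern set applies
# to the Identifier, Reply URL, and Sign-on URL fields.
IDENTIFIER="https://${OTDS_HOST}/otdsws/${OTDS_TENANT}/${OTDS_TENANT_ID}/login"

echo "Identifier: ${IDENTIFIER}"
```

Whichever pattern your deployment uses, the same expanded value typically goes into all three fields.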
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenText Directory Services.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **OpenText Directory Services**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure OpenText Directory Services SSO
-
-To configure single sign-on on the **OpenText Directory Services** side, you need to send the **App Federation Metadata Url** to the [OpenText Directory Services support team](mailto:support@opentext.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
-
-### Create OpenText Directory Services test user
-
-In this section, a user called B.Simon is created in OpenText Directory Services. OpenText Directory Services supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in OpenText Directory Services, a new one is created after authentication.
-
-> [!NOTE]
-> OpenText Directory Services also supports automatic user provisioning. You can find more details [here](./open-text-directory-services-provisioning-tutorial.md) on how to configure automatic user provisioning.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with the following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to the OpenText Directory Services Sign-on URL where you can initiate the login flow.
-
-* Go to OpenText Directory Services Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the OpenText Directory Services for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the OpenText Directory Services tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the OpenText Directory Services instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure OpenText Directory Services you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Xm Fax And Xm Send Secure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xm-fax-and-xm-send-secure-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with XM Fax and XM SendSecure'
+description: Learn how to configure single sign-on between Azure Active Directory and XM Fax and XM SendSecure.
++++++++ Last updated : 07/10/2023++++
+# Tutorial: Azure AD SSO integration with XM Fax and XM SendSecure
+
+In this tutorial, you'll learn how to integrate XM Fax and XM SendSecure with Azure Active Directory (Azure AD). When you integrate XM Fax and XM SendSecure with Azure AD, you can:
+
+* Control in Azure AD who has access to XM Fax and XM SendSecure.
+* Enable your users to be automatically signed-in to XM Fax and XM SendSecure with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Azure AD Cloud Application Administrator or Application Administrator role.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+* XM Fax and XM SendSecure subscription.
+* XM Fax and XM SendSecure administrator account.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* XM Fax and XM SendSecure supports **SP-initiated** SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add XM Fax and XM SendSecure from the gallery
+
+To configure the integration of XM Fax and XM SendSecure into Azure AD, you need to add XM Fax and XM SendSecure from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **XM Fax and XM SendSecure** in the search box.
+1. Select **XM Fax and XM SendSecure** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for XM Fax and XM SendSecure
+
+Configure and test Azure AD SSO with XM Fax and XM SendSecure using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at XM Fax and XM SendSecure.
+
+To configure and test Azure AD SSO with XM Fax and XM SendSecure, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure XM Fax and XM SendSecure SSO](#configure-xm-fax-and-xm-sendsecure-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create XM Fax and XM SendSecure test user](#create-xm-fax-and-xm-sendsecure-test-user)** - to have a counterpart of B.Simon in XM Fax and XM SendSecure that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **XM Fax and XM SendSecure** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type one of the following URLs:
+
+ | **Identifier** |
+ |-|
+ | `https://login.xmedius.com/` |
+ | `https://login.xmedius.eu/` |
+ | `https://login.xmedius.ca/` |
+
+ b. In the **Reply URL** textbox, type one of the following URLs:
+
+ | **Reply URL** |
+ |-|
+ | `https://login.xmedius.com/auth/saml/callback` |
+ | `https://login.xmedius.eu/auth/saml/callback` |
+ | `https://login.xmedius.ca/auth/saml/callback` |
+
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |-|
+ | `https://login.xmedius.com/{account}` |
+ | `https://login.xmedius.eu/{account}` |
+ | `https://login.xmedius.ca/{account}` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up XM Fax and XM SendSecure** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
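The three options in each table above differ only in the regional domain suffix (`.com`, `.eu`, or `.ca`). A small shell sketch — with a hypothetical account name — showing how the Identifier, Reply URL, and Sign-on URL line up for a single region:

```shell
# Pick the xmedius domain that matches your account's region.
XM_DOMAIN="login.xmedius.com"
XM_ACCOUNT="contoso"   # hypothetical account name -- use your own

# All three values must come from the same regional domain.
IDENTIFIER="https://${XM_DOMAIN}/"
REPLY_URL="https://${XM_DOMAIN}/auth/saml/callback"
SIGN_ON_URL="https://${XM_DOMAIN}/${XM_ACCOUNT}"

printf '%s\n' "$IDENTIFIER" "$REPLY_URL" "$SIGN_ON_URL"
```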
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon:
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the user name in the following format: username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to XM Fax and XM SendSecure:
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **XM Fax and XM SendSecure**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure XM Fax and XM SendSecure SSO
+
+1. Log in to your XM Cloud account using a Web browser.
+
+1. From the main menu of your Web Portal, select **enterprise_account -> Enterprise Settings**.
+
+1. Go to the **Single Sign-On** section and select **SAML 2.0**.
+
+1. Provide the following required information:
+
+ a. In the **Issuer (Identity Provider)** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ b. In the **Sign In URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. Open the **Certificate (Base64)** file downloaded from the Azure portal in Notepad and paste its content into the **X.509 Signing Certificate** textbox.
+
+ d. Click **Save**.
+
+> [!NOTE]
+> Keep the fail-safe URL (`https://login.[domain]/[account]/no-sso`) provided at the bottom of the SSO configuration section; it allows you to log in using your XM Cloud account credentials if you lock yourself out after SSO activation.
+
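The fail-safe URL follows the same `[domain]`/`[account]` pattern as the Sign-on URL. A quick sketch with hypothetical values — worth recording somewhere safe before you activate SSO:

```shell
# Hypothetical values -- use your own xmedius domain and account name.
XM_DOMAIN="login.xmedius.com"
XM_ACCOUNT="contoso"

# Bypasses SSO so you can sign in with XM Cloud credentials if locked out.
NO_SSO_URL="https://${XM_DOMAIN}/${XM_ACCOUNT}/no-sso"
echo "$NO_SSO_URL"
```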
+### Create XM Fax and XM SendSecure test user
+
+Create a user called Britta Simon at XM Fax and XM SendSecure. Make sure the email is set to "B.Simon@contoso.com".
+
+> [!NOTE]
+> Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to XM Fax and XM SendSecure Sign-on URL where you can initiate the login flow.
+
+* Go to XM Fax and XM SendSecure Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the XM Fax and XM SendSecure tile in the My Apps portal, this will redirect to XM Fax and XM SendSecure Sign-on URL. For more information about the My Apps portal, see [Introduction to the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure XM Fax and XM SendSecure you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
API Server VNet Integration is available in all global Azure regions.
az provider register --namespace Microsoft.ContainerService ```
-## Limitations
-
-* Existing AKS private clusters can't be converted to API Server VNet Integration clusters.
- ## Create an AKS cluster with API Server VNet Integration using managed VNet You can configure your AKS clusters with API Server VNet Integration in managed VNet or bring-your-own VNet mode. You can create them as public clusters (with API server access available via a public IP) or private clusters (where the API server is only accessible via private VNet connectivity). You can also toggle between a public and private state without redeploying your cluster.
You can configure your AKS clusters with API Server VNet Integration in managed
## Create a private AKS cluster with API Server VNet Integration using bring-your-own VNet
-When using bring-your-own VNet, you must create and delegate an API server subnet to `Microsoft.ContainerService/managedClusters`, which grants the AKS service permissions to inject the API server pods and internal load balancer into that subnet. You can't use the subnet for any other workloads, but you can use it for multiple AKS clusters located in the same virtual network. An AKS cluster requires *two to seven* IP addresses depending on cluster scale. The minimum supported API server subnet size is a */28*.
+When using bring-your-own VNet, you must create and delegate an API server subnet to `Microsoft.ContainerService/managedClusters`, which grants the AKS service permissions to inject the API server pods and internal load balancer into that subnet. You can't use the subnet for any other workloads, but you can use it for multiple AKS clusters located in the same virtual network. The minimum supported API server subnet size is a */28*.
The cluster identity needs permissions to both the API server subnet and the node subnet. Lack of permissions at the API server subnet can cause a provisioning failure. > [!WARNING]
-> Running out of IP addresses may prevent API server scaling and cause an API server outage.
+> An AKS cluster reserves at least 9 IPs in the subnet address space. Running out of IP addresses may prevent API server scaling and cause an API server outage.
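As a quick check of why */28* is the minimum: Azure reserves 5 addresses in every subnet, so a /28 leaves just enough room for the 9 or more IPs the cluster needs. A small arithmetic sketch:

```shell
# Azure reserves 5 addresses in every subnet (network, broadcast,
# default gateway, and two mapped to Azure DNS).
AZURE_RESERVED=5
AKS_MINIMUM=9              # minimum reserved by the cluster, per the warning
PREFIX_LEN=28

TOTAL=$(( 1 << (32 - PREFIX_LEN) ))   # 16 addresses in a /28
USABLE=$(( TOTAL - AZURE_RESERVED ))  # 11 addresses left for the cluster

[ "$USABLE" -ge "$AKS_MINIMUM" ] && echo "/$PREFIX_LEN fits: $USABLE usable addresses"
```

A /29 (8 total, 3 usable) would not fit, which is why smaller prefixes aren't supported.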
### Create a resource group
az group create -l <location> -n <resource-group>
## Convert an existing AKS cluster to API Server VNet Integration
-You can convert existing public AKS clusters to API Server VNet Integration clusters by supplying an API server subnet that meets the requirements listed earlier. These requirements include: in the same VNet as the cluster nodes, permissions granted for the AKS cluster identity, and size of at least */28*. Converting your cluster is a one-way migration. Clusters can't have API Server VNet Integration disabled after it's been enabled.
+You can convert existing public/private AKS clusters to API Server VNet Integration clusters by supplying an API server subnet that meets the requirements listed earlier. These requirements include: in the same VNet as the cluster nodes, permissions granted for the AKS cluster identity, and size of at least */28*. Converting your cluster is a one-way migration. Clusters can't have API Server VNet Integration disabled after it's been enabled.
This upgrade performs a node-image version upgrade on all node pools and restarts all workloads while they undergo a rolling image upgrade.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
apiVersion: v1
kind: Pod metadata: name: quick-start
- namespace: SERVICE_ACCOUNT_NAMESPACE
+ namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
labels: azure.workload.identity/use: "true" spec:
- serviceAccountName: workload-identity-sa
+ serviceAccountName: "${SERVICE_ACCOUNT_NAME}"
EOF ```
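The change above replaces literal values with `"${...}"` shell references, which only work when the manifest is generated through an unquoted heredoc (for example, piped to `kubectl apply -f -`) so the shell expands them first. A minimal sketch of that expansion, with hypothetical variable values and the `kubectl` step left out:

```shell
# Hypothetical values -- the tutorial sets these in earlier steps.
SERVICE_ACCOUNT_NAMESPACE="my-namespace"
SERVICE_ACCOUNT_NAME="workload-identity-sa"

# An unquoted delimiter (EOF, not 'EOF') makes the shell expand the
# ${...} references before the manifest text is produced.
MANIFEST=$(cat <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: quick-start
  namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: "${SERVICE_ACCOUNT_NAME}"
EOF
)

printf '%s\n' "$MANIFEST"
```

If the delimiter were quoted (`<<'EOF'`), the manifest would contain the literal text `${SERVICE_ACCOUNT_NAMESPACE}` and the pod would be created in a nonexistent namespace.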
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To improve performance issues, skip:
> >
+## Troubleshooting
+
+If telemetry data isn't flowing from API Management to Application Insights, check the following:
++ Investigate whether a linked Azure Monitor Private Link Scope (AMPLS) resource exists within the VNet where the API Management resource is connected. AMPLS resources have a global scope across subscriptions and are responsible for managing data query and ingestion for all Azure Monitor resources. It's possible that the AMPLS has been configured with a Private-Only access mode specifically for data ingestion. In such instances, include the Application Insights resource and its associated Log Analytics resource in the AMPLS. Once this addition is made, the API Management data is successfully ingested into the Application Insights resource, resolving the telemetry issue.+ ## Next steps + Learn more about [Azure Application Insights](/azure/application-insights/).
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
The `forward-request` policy forwards the incoming request to the backend servic
## Policy statement ```xml
-<forward-request timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
+<forward-request http-version="1 | 2or1 | 2" timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
``` ## Attributes
The `forward-request` policy forwards the incoming request to the backend servic
| Attribute | Description | Required | Default | | | -- | -- | - | | timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. | No | 300 |
+| http-version | The HTTP spec version to use when sending the HTTP request to the backend service. When using `2or1`, the gateway favors HTTP/2 over HTTP/1.1 but falls back to HTTP/1.1 if HTTP/2 doesn't work. | No | 1 |
| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. Policy expressions are allowed. | No | `false` | | buffer-request-body | When set to `true`, request is buffered and will be reused on [retry](retry-policy.md). | No | `false` | | buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. Policy expressions aren't allowed. | No | `true` |
The `forward-request` policy forwards the incoming request to the backend servic
## Examples
+### Send request to HTTP/2 backend
+
+The following API level policy forwards all API requests to an HTTP/2 backend service.
+
+```xml
+<!-- api level -->
+<policies>
+ <inbound>
+ <base/>
+ </inbound>
+ <backend>
+ <forward-request http-version="2or1"/>
+ </backend>
+ <outbound>
+ <base/>
+ </outbound>
+</policies>
+```
+
+This setting is required for HTTP/2 or gRPC workloads and is currently only supported in the self-hosted gateway. Learn more in our [API gateway overview](api-management-gateways-overview.md).
+ ### Forward request with timeout interval The following API level policy forwards all API requests to the backend service with a timeout interval of 60 seconds.
The following API level policy forwards all API requests to the backend service
<base/> </outbound> </policies>- ``` ### Inherit policy from parent scope
This operation level policy does not forward requests to the backend service.
* [API Management advanced policies](api-management-advanced-policies.md)
attestation Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-powershell.md
New-AzAttestation creates an attestation provider.
```powershell $attestationProvider = "<attestation provider name>"
-New-AzAttestation -Name $attestationProvider -ResourceGroupName $attestationResourceGroup -Location $location
+New-AzAttestationProvider -Name $attestationProvider -ResourceGroupName $attestationResourceGroup -Location $location
``` PolicySignerCertificateFile is a file specifying a set of trusted signing keys. If a filename is specified for the PolicySignerCertificateFile parameter, the attestation provider can be configured only with policies in signed JWT format. Otherwise, the policy can be configured in text or unsigned JWT format. ```powershell
-New-AzAttestation -Name $attestationProvider -ResourceGroupName $attestationResourceGroup -Location $location -PolicySignersCertificateFile "C:\test\policySignersCertificates.pem"
+New-AzAttestationProvider -Name $attestationProvider -ResourceGroupName $attestationResourceGroup -Location $location -PolicySignersCertificateFile "C:\test\policySignersCertificates.pem"
``` For PolicySignersCertificateFile sample, see [examples of policy signer certificate](policy-signer-examples.md).
For PolicySignersCertificateFile sample, see [examples of policy signer certific
Get-AzAttestation retrieves the attestation provider properties like status and AttestURI. Take a note of AttestURI, as it will be needed later. ```azurepowershell
-Get-AzAttestation -Name $attestationProvider -ResourceGroupName $attestationResourceGroup
+Get-AzAttestationProvider -Name $attestationProvider -ResourceGroupName $attestationResourceGroup
``` The above command should produce output in this format:
TagsTable:
Attestation providers can be deleted using the Remove-AzAttestation cmdlet. ```powershell
-Remove-AzAttestation -Name $attestationProvider -ResourceGroupName $attestationResourceGroup
+Remove-AzAttestationProvider -Name $attestationProvider -ResourceGroupName $attestationResourceGroup
``` ## Policy management
azure-functions Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/consumption-plan.md
Title: Azure Functions Consumption plan hosting description: Learn about how Azure Functions Consumption plan hosting lets you run your code in an environment that scales dynamically, but you only pay for resources used during execution. Previously updated : 06/06/2023 Last updated : 07/10/2023 # Customer intent: As a developer, I want to understand the benefits of using the Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
To learn more about how to estimate costs when running in a Consumption plan, se
When you create a function app in the Azure portal, the Consumption plan is the default. When using APIs to create your function app, you don't have to first create an App Service plan as you do with Premium and Dedicated plans.
+In Consumption plan hosting, each function app typically runs in its own plan. In the Azure portal or in code, you may also see the Consumption plan referred to as `Dynamic` or `Y1`.
+ Use the following links to learn how to create a serverless function app in a Consumption plan, either programmatically or in the Azure portal: + [Azure CLI](./scripts/functions-cli-create-serverless.md)
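When deploying programmatically, the `Dynamic`/`Y1` names mentioned above surface as the SKU of the hosting plan resource. The following ARM template fragment is a sketch only — the resource is trimmed to the relevant properties, the `hostingPlanName` parameter is hypothetical, and the `apiVersion` may differ in your template:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2022-03-01",
  "name": "[parameters('hostingPlanName')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Y1",
    "tier": "Dynamic"
  }
}
```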
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]
-> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more flexible and intuitive for Python and JavaScript/TypeScript developers. Learn more about the differences between the models in the [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js upgrade guide](../functions-node-upgrade-v4.md).
+> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, this new experience is designed to be more flexible and intuitive for Node developers. Learn more about the differences between the models in the [Node.js upgrade guide](../functions-node-upgrade-v4.md).
> > In the following code snippets, Python (PM2) denotes programming model V2, and JavaScript (PM4) denotes programming model V4, the new experiences.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
Title: Create your first durable function in Azure using Python description: Create and publish an Azure Durable Function in Python using Visual Studio Code.-+ Last updated 06/15/2022-+ ms.devlang: python zone_pivot_groups: python-mode-functions
In this article, you learn how to use the Visual Studio Code Azure Functions ext
:::image type="content" source="./media/quickstart-python-vscode/functions-vs-code-complete.png" alt-text="Screenshot of the running durable function in Azure.":::
-> [!NOTE]
-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for Python programmers. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators).
- ## Prerequisites To complete this tutorial:
Version 2 of the Python programming model requires the following minimum version
- [Azure Functions Runtime](../functions-versions.md) v4.16+ - [Azure Functions Core Tools](../functions-run-local.md) v4.0.5095+ (if running locally)
+- [azure-functions-durable](https://pypi.org/project/azure-functions-durable/) v1.2.4+
## Enable v2 programming model
-The following application setting is required to run the v2 programming model while it is in preview:
+The following application setting is required to run the v2 programming model:
- Name: `AzureWebJobsFeatureFlags` - Value: `EnableWorkerIndexing`
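For local development, this feature flag belongs in the `Values` section of the project's `local.settings.json` file. A minimal sketch (the storage and runtime entries shown are placeholders; your file will typically contain other settings):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```

In Azure, add the same name/value pair to the function app's application settings.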
Review the table below for an explanation of each function and its purpose in th
| **`hello`** | The activity function, which performs the work being orchestrated. The function returns a simple greeting to the city passed as an argument. | | **`http_start`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. |
+> [!NOTE]
+> Durable Functions also supports Python V2's [blueprints](../functions-reference-python.md#blueprints). To use them, you will need to register your blueprint functions using the [`azure-functions-durable`](https://pypi.org/project/azure-functions-durable) `Blueprint` class, as
+> shown [here](https://github.com/Azure/azure-functions-durable-python/blob/dev/samples-v2/blueprint/durable_blueprints.py). The resulting blueprint can then be registered as normal. See our [sample](https://github.com/Azure/azure-functions-durable-python/tree/dev/samples-v2/blueprint) for an example.
+ ::: zone-end ## Test the function locally
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
app = func.FunctionApp()
app.register_functions(bp) ```
+> [!NOTE]
+> Durable Functions also supports blueprints. To create blueprints for Durable Functions apps, register your orchestration, activity, and entity triggers and client bindings using the [`azure-functions-durable`](https://pypi.org/project/azure-functions-durable) `Blueprint` class, as
+> shown [here](https://github.com/Azure/azure-functions-durable-python/blob/dev/samples-v2/blueprint/durable_blueprints.py). The resulting blueprint can then be registered as normal. See our [sample](https://github.com/Azure/azure-functions-durable-python/tree/dev/samples-v2/blueprint) for an example.
+ ::: zone-end ::: zone pivot="python-mode-configuration"
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
When creating publicly facing client applications with Azure Maps, you must ensu
Subscription key-based authentication (Shared Key) can be used in either client-side applications or web services; however, it's the least secure approach to securing your application or web service. The reason is that the key is easily obtained from an HTTP request and grants access to all Azure Maps REST APIs available in the SKU (pricing tier). If you do use subscription keys, be sure to [rotate them regularly]; keep in mind that Shared Key doesn't allow for a configurable lifetime, so rotation must be done manually. You should also consider using [Shared Key authentication with Azure Key Vault], which enables you to securely store your secret in Azure.
-If using [Azure Active Directory (Azure AD) authentication] or [Shared Access Signature (SAS) Token authentication] (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)]. RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
+If using [Azure Active Directory (Azure AD) authentication] or [Shared Access Signature (SAS) Token authentication], access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)]. RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
> [!TIP] >
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
description: "Learn about two ways of authenticating requests in Azure Maps: shared key authentication and Azure Active Directory (Azure AD) authentication." Previously updated : 05/25/2021 Last updated : 07/05/2023
When you configure Azure RBAC, you choose a security principal and apply it to a
The following role definition types exist to support application scenarios. | Azure Role Definition | Description |
-| : | :- |
+| : | :- |
| Azure Maps Search and Render Data Reader | Provides access to only search and render Azure Maps REST APIs to limit access to basic web browser use cases. |
-| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
-| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability, defined by the actions: write and delete. |
-| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
+| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
+| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
+| Azure Maps Data Read and Batch Role | This role can be used to assign read and batch actions on Azure Maps. |
+| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
Some Azure Maps services may require elevated privileges to perform write or delete actions on Azure Maps REST APIs. The Azure Maps Data Contributor role is required for services that provide write or delete actions. The following table describes which services the Azure Maps Data Contributor role applies to when using write or delete actions. When only read actions are required, the Azure Maps Data Reader role can be used in place of the Azure Maps Data Contributor role.
Disabling local authentication doesn't take effect immediately. Allow a few minu
## Shared access signature token authentication -
-Shared Access Signature token authentication is in preview.
- Shared access signature (SAS) tokens are authentication tokens created using the JSON Web Token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by integrating a [user-assigned managed identity] with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using either built-in or custom role definitions. Key functional differences between SAS tokens and Azure AD access tokens:
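Because a SAS token is in JWT format, its claims segment can be base64url-decoded for inspection (decoding does not verify the signature, so never use unverified claims for authorization decisions). A minimal stdlib sketch using a hypothetical payload, for illustration only:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT-format token."""
    payload = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# A hypothetical claims payload, for demonstration only.
claims = {"exp": 1735689600, "aud": "https://atlas.microsoft.com"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "fake-header." + segment + ".fake-signature"
print(jwt_claims(token)["aud"])
```

Inspecting the `exp` claim this way can help confirm the configured token lifetime during development.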
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
GET https://eu.atlas.microsoft.com/search/address/{format}?api-version=1.0&query
## Additional information
-For information on limiting what regions a SAS token can use in see [Authentication with Azure Maps]
+For information on limiting what regions a SAS token can use in, see [Authentication with Azure Maps].
- [Azure geographies] - [Azure Government cloud support]
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
-# Secure an Azure Maps account with a SAS token (preview)
+# Secure an Azure Maps account with a SAS token
This article describes how to create an Azure Maps account with a securely stored SAS token you can use to call the Azure Maps REST API.
The following steps describe how to create and configure an Azure Maps account w
{ "name": "[parameters('accountName')]", "type": "Microsoft.Maps/accounts",
- "apiVersion": "2021-12-01-preview",
+ "apiVersion": "2023-06-01",
"location": "[parameters('location')]", "sku": { "name": "[parameters('pricingTier')]"
The following steps describe how to create and configure an Azure Maps account w
"expiry" : "[variables('sasParameters').expiry]" }, "properties": {
- "value": "[listSas(variables('accountId'), '2021-12-01-preview', variables('sasParameters')).accountSasToken]"
+ "value": "[listSas(variables('accountId'), '2023-06-01', variables('sasParameters')).accountSasToken]"
} } ]
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
The next step in building your application is to install the Azure Maps iOS SDK.
2. Enter the following in the resulting dialog: * Enter `https://github.com/Azure/azure-maps-ios-sdk-distribution.git` in the search bar that appears in the top right corner. * Select `Up to Next Major Version` in the **Dependency Rule** field.
- * Enter `1.0.0-pre.1` into the **Dependency Rule** version field.
+ * Enter `1.0.0-pre.3` into the **Dependency Rule** version field.
![Add dependency rule to an iOS project.](./media/ios-sdk/quick-ios-app/xcode-dependency-rule.png)
Xcode takes a few seconds to build the application. After the build is complete,
![Your first map on an iOS application.](./media/ios-sdk/quick-ios-app/example.png)
+## Access map functionality
+
+You can start customizing map functionality by getting hold of the `AzureMap` instance in a `mapView.onReady` handler. For the MapControl view added above, your sample `ViewController` might look like the following:
+
+```swift
+class ViewController: UIViewController {
+
+ override func viewDidLoad() {
+ super.viewDidLoad()
+ let mapView = self.view.subviews.first as? MapControl
+ mapView?.onReady({ map in
+ // customize your map here
+ // map.sources.add()
+ // map.layers.insertLayer()
+ })
+ }
+}
+```
+
+Proceed to [Add a polygon layer to the map in the iOS SDK](add-polygon-layer-map-ios.md) for one such example.
+ ## Clean up resources <!--
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
+### [3.0.0-preview.10] (July 11, 2023)
+
+#### Bug fixes (3.0.0-preview.10)
+
+- Dynamic pixel ratio fixed in underlying maplibre-gl dependency.
+
+- Fixed an issue where `sortKey`, `radialOffset`, and `variableAnchor` are not applied when used in `SymbolLayer` options.
+
+#### Installation (3.0.0-preview.10)
+
+The preview is available on [npm][3.0.0-preview.10] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.10][3.0.0-preview.10]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.10/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.10/atlas.min.js"></script>
+ ```
+ ### [3.0.0-preview.9] (June 27, 2023) #### New features (3.0.0-preview.9)
This document contains information about new features and other changes to the M
- Elevation APIs: `atlas.sources.ElevationTileSource`, `map.enableElevation(elevationSource, options)`, `map.disableElevation()` -- ability to customize maxPitch / minPitch in `CameraOptions`
+- Ability to customize maxPitch / minPitch in `CameraOptions`
#### Bug fixes (3.0.0-preview.9) -- fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called
+- Fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called
#### Installation (3.0.0-preview.9)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10
[3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8 [3.0.0-preview.7]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.7
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
This document contains information about new features and other changes to the Azure Maps Spatial IO Module.
+## [0.1.5]
+
+### Bug fixes (0.1.5)
+
+- Adds a missing check in [WmsClient.getFeatureInfoHtml] that decides service capabilities.
+ ## [0.1.4] ### Bug fixes (0.1.4)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[WmsClient.getFeatureInfoHtml]: https://learn.microsoft.com/javascript/api/azure-maps-spatial-io/atlas.io.ogc.wfsclient?view=azure-maps-typescript-latest#azure-maps-spatial-io-atlas-io-ogc-wfsclient-getfeatureinfo
+[0.1.5]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.5
[0.1.4]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.4 [Azure Maps Spatial IO Samples]: https://samples.azuremaps.com/?search=Spatial%20IO%20Module [Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>Support DCR settings for DiskQuotaInMB</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add the forwarder/collector's identifier (hostname)</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li></ul></li></ul>|1.17.0 |1.27.0|
+| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>Support DCR settings for DiskQuotaInMB</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add the forwarder/collector's identifier (hostname)</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li></ul></li></ul>|1.17.0 |1.27.2|
| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed an issue where the Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li></ul></li></ul> | 1.16.0.0 | 1.26.2 | | Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon |
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Since MO is a tenant level resource, the scope of the permission would be higher
#### 1. Assign 'Monitored Object Contributor' role to the operator
-This step grants the ability to create and link a monitored object to a user.
+This step grants the ability to create and link a monitored object to a user or group.
**Request URI** ```HTTP
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
| Name | Description | |:|:| | roleDefinitionId | Fixed value: Role definition ID of the 'Monitored Objects Contributor' role: `/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b` |
-| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user who will perform later steps. |
+| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user or group who will perform later steps. |
After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
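As a sketch, the role assignment request body can be built as follows (the helper name is hypothetical; the role definition ID is the fixed value from the table above, and the principal ID shown is a placeholder):

```python
import json

ROLE_DEFINITION_ID = (
    "/providers/Microsoft.Authorization/roleDefinitions/"
    "56be40e24db14ccf93c37e44c597135b"  # 'Monitored Objects Contributor'
)

def role_assignment_body(principal_id: str) -> str:
    """Build the JSON body for the role assignment PUT request."""
    return json.dumps({
        "properties": {
            "roleDefinitionId": ROLE_DEFINITION_ID,
            "principalId": principal_id,
        }
    })

body = role_assignment_body("00000000-0000-0000-0000-000000000000")
```

Send this body with the PUT request shown above, using your operator's actual object ID.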
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
|:|:| | `dataCollectionRuleID` | The resource ID of an existing Data Collection Rule that you created in the **same region** as the Monitored Object. |
+#### 4. List associations to Monitored Object
+If you need to view the associations, you can list them for the Monitored Object.
+
+**Permissions required**: Anyone who has 'Reader' at an appropriate scope can perform this operation, similar to that assigned in step 1.
+
+**Request URI**
+```HTTP
+GET https://management.azure.com/{MOResourceId}/providers/microsoft.insights/datacollectionruleassociations/?api-version=2021-09-01-preview
+```
+**Sample Request URI**
+```HTTP
+GET https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}/providers/microsoft.insights/datacollectionruleassociations/?api-version=2021-09-01-preview
+```
+
+```JSON
+{
+ "value": [
+ {
+ "id": "/subscriptions/703362b3-f278-4e4b-9179-c76eaf41ffc2/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVm/providers/Microsoft.Insights/dataCollectionRuleAssociations/myRuleAssociation",
+ "name": "myRuleAssociation",
+ "type": "Microsoft.Insights/dataCollectionRuleAssociations",
+ "properties": {
+ "dataCollectionRuleId": "/subscriptions/703362b3-f278-4e4b-9179-c76eaf41ffc2/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionRules/myCollectionRule",
+ "provisioningState": "Succeeded"
+ },
+ "systemData": {
+ "createdBy": "user1",
+ "createdByType": "User",
+ "createdAt": "2021-04-01T12:34:56.1234567Z",
+ "lastModifiedBy": "user2",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2021-04-02T12:34:56.1234567Z"
+ },
+ "etag": "070057da-0000-0000-0000-5ba70d6c0000"
+ }
+ ],
+ "nextLink": null
+}
+```
+
+#### 5. Disassociate DCR to Monitored Object
+If you need to remove an association of a Data Collection Rule (DCR) from the Monitored Object, use the following request.
+
+**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1.
+
+**Request URI**
+```HTTP
+DELETE https://management.azure.com/{MOResourceId}/providers/microsoft.insights/datacollectionruleassociations/{associationName}?api-version=2021-09-01-preview
+```
+**Sample Request URI**
+```HTTP
+DELETE https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}/providers/microsoft.insights/datacollectionruleassociations/{associationName}?api-version=2021-09-01-preview
+```
+
+**URI Parameters**
+
+| Name | In | Type | Description |
+|||||
+| `MOResourceId` | path | string | Full resource ID of the MO created in step 2. Example: 'providers/Microsoft.Insights/monitoredObjects/{AADTenantId}' |
+| `associationName` | path | string | The name of the association. The name is case insensitive. Example: 'assoc01' |
+
+**Headers**
+- Authorization: ARM Bearer Token
+- Content-Type: application/json
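The list and delete requests in steps 4 and 5 can be sketched as small URL-building helpers (hypothetical function names; `mo_resource_id` is the full resource ID of the Monitored Object created in step 2):

```python
API_VERSION = "2021-09-01-preview"
ARM = "https://management.azure.com"

def list_associations_url(mo_resource_id: str) -> str:
    """URL to list DCR associations on a Monitored Object (GET)."""
    return (f"{ARM}/{mo_resource_id}/providers/microsoft.insights/"
            f"datacollectionruleassociations/?api-version={API_VERSION}")

def delete_association_url(mo_resource_id: str, association_name: str) -> str:
    """URL to remove one DCR association (DELETE)."""
    return (f"{ARM}/{mo_resource_id}/providers/microsoft.insights/"
            f"datacollectionruleassociations/{association_name}"
            f"?api-version={API_VERSION}")

mo = "providers/Microsoft.Insights/monitoredObjects/TENANT"
print(delete_association_url(mo, "assoc01"))
```

Send the resulting URLs with the ARM bearer token in the `Authorization` header, as described above.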
+ ### Using PowerShell for onboarding ```PowerShell
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
When Azure Monitor data indicates that there might be a problem with your infrastructure or application, an alert is triggered. Alerts can contain action groups, which are a collection of notification preferences. Azure Monitor, Azure Service Health, and Azure Advisor use action groups to notify users about the alert and take an action.
-This article shows you how to create and manage action groups. Depending on your requirements, you can configure various alerts to use the same action group or different action groups.
+This article shows you how to create and manage action groups.
-Each action is made up of the following properties:
+Each action is made up of:
- **Type**: The notification that's sent or action that's performed. Examples include sending a voice call, SMS, or email. You can also trigger various types of automated actions. - **Name**: A unique identifier within the action group.
Each action is made up of the following properties:
In general, an action group is a global service. Efforts to make them more available regionally are in development. Global requests from clients can be processed by action group services in any region. If one region of the action group service is down, the traffic is automatically routed and processed in other regions. As a global service, an action group helps provide a disaster recovery solution. Regional requests rely on availability zone redundancy to meet privacy requirements and offer a similar disaster recovery solution.
+- You can add up to five action groups to an alert rule.
+- Action groups are executed concurrently, in no specific order.
+- Multiple alert rules can use the same action group.
+ ## Create an action group in the Azure portal 1. Go to the [Azure portal](https://portal.azure.com/).
azure-monitor Alerts Manage Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-instances.md
Title: Manage your alert instances description: The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days and allows you to manage your alert instances. Previously updated : 08/03/2022 Last updated : 07/11/2023 # Manage your alert instances
-The **Alerts** page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all types of alerts from multiple subscriptions in a single pane. You can search for a specific alert and manage alert instances.
+The **Alerts** page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can search for a specific alert and manage alert instances.
-There are a few ways to get to the **Alerts** page:
+You can get to the **Alerts** page in a few ways:
- From the home page in the [Azure portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
The **Alerts** summary pane summarizes the alerts fired in the last 24 hours. Yo
To see more information about a specific alert instance, select the alert instance to open the **Alert details** page.
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-page.png" alt-text="Screenshot that shows the Alerts summary page in the Azure portal.":::
++
+## View alerts as a timeline (preview)
+
+You can see your alerts in a timeline view, which shows the number of alerts fired in a specific time range.
+
+To see the alerts in a timeline view, select **View as timeline** at the top of the Alerts summary page.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-view-timeline.png" alt-text="Screenshot that shows the view timeline button in the Alerts summary page in the Azure portal.":::
+
+The timeline shows you which resource the alerts were fired on to give you context for the alert in your Azure hierarchy. The alerts are grouped by the time they were fired. You can filter the alerts by severity, resource, and more. You can also select a specific time range to see the alerts fired in that time range.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-timeline.png" alt-text="Screenshot that shows the Alerts timeline page in the Azure portal.":::
## Alert details page The **Alert details** page provides more information about the selected alert:
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
You can also define filters to narrow down which specific subset of alerts are a
| Filter | Description| |:|:|
-Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema.md) section of the alert. This section includes fields specific to each alert type. This filter does not apply to log alert search results. |
+Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema.md) section of the alert. This section includes fields specific to each alert type. |
Alert rule ID | The rule applies only to alerts from a specific alert rule. The value should be the full resource ID, for example, `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. To locate the alert rule ID, open a specific alert rule in the portal, select **Properties**, and copy the **Resource ID** value. You can also locate it by listing your alert rules from PowerShell or the Azure CLI. | Alert rule name | The rule applies only to alerts with this alert rule name. It can also be useful with a **Contains** operator. | Description | The rule applies only to alerts that contain the specified string within the alert rule description field. |
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
On the **Condition** tab, the **Log query** will already be filled in. The **Mea
:::image type="content" source="media/tutorial-log-alert/alert-rule-dimensions.png" lightbox="media/tutorial-log-alert/alert-rule-dimensions.png"alt-text="Alert rule dimensions":::
+If you need certain dimensions included in the alert notification email, you can specify them here. For example, if you specify the "Computer" dimension, the alert notification email will include the name of the computer that triggered the alert. The alerting engine uses the alert query to determine the available dimensions. If you don't see the dimension you want in the drop-down list for "Dimension name", it's because the alert query doesn't expose that column in the results. You can add the dimensions you want by adding a `project` line to your query that includes the columns you want to use. You can also use a `summarize` line to add additional columns to the query results.
+ ## Configure alert logic In the alert logic, configure the **Operator** and **Threshold value** to compare to the value returned from the measurement. An alert is created when this value is true. Select a value for **Frequency of evaluation** which defines how often the log query is run and evaluated. The cost for the alert rule increases with a lower frequency. When you select a frequency, the estimated monthly cost is displayed in addition to a preview of the query results over a time period.
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Title: Filtering and preprocessing in the Application Insights SDK | Microsoft Docs description: Write telemetry processors and telemetry initializers for the SDK to filter or add properties to the data before the telemetry is sent to the Application Insights portal. Previously updated : 06/23/2023 Last updated : 07/10/2023 ms.devlang: csharp, javascript, python
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Title: Data retention and storage in Application Insights | Microsoft Docs description: Retention and privacy policy statement for Application Insights. Previously updated : 03/22/2023 Last updated : 07/10/2023
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
description: Learn how to install and use JavaScript feature extensions (Click A
ibiza Previously updated : 06/23/2023 Last updated : 07/10/2023 ms.devlang: javascript
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
description: Learn how to install and use JavaScript framework extensions for th
ibiza Previously updated : 06/23/2023 Last updated : 07/10/2023 ms.devlang: javascript
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK configuration description: Microsoft Azure Monitor Application Insights JavaScript SDK configuration. Previously updated : 02/28/2023 Last updated : 07/10/2023 ms.devlang: javascript
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance. Previously updated : 06/23/2023 Last updated : 07/10/2023 ms.devlang: javascript
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Azure Monitor monitors your custom applications by using [Application Insights](
Application Insights is the feature of Azure Monitor for monitoring your cloud native and hybrid applications.
-You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
+You may create a resource in Application Insights for each application that you're going to monitor, or a single application resource for multiple applications. Whether to use separate resources or a single resource for multiple applications is a fundamental decision of your monitoring strategy. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for criteria to help you make this design decision.
- When you create the application, you must select whether to use classic or workspace based. See [Create an Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) to create a classic application.
-See [Workspace-based Application Insights resources (preview)](app/create-workspace-resource.md) to create a workspace-based application.
-
- A fundamental design decision is whether to use separate or a single application resource for multiple applications. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/separate-resources.md) for criteria to help you make this design decision.
+When you create the application resource, you must select whether to use classic or workspace based. See [Create an Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) to create a classic application.
+See [Workspace-based Application Insights resources](app/create-workspace-resource.md) to create a workspace-based application. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
### Configure codeless or code-based monitoring
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
For more information, see [Set a table's log data plan](basic-logs-configure.md)
> [!NOTE] > Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
-> [!NOTE]
-> Billing of queries on Basic Logs is not yet enabled. You can query Basic Logs for free until early 2023.
- [!INCLUDE [log-analytics-query-permissions](../../../includes/log-analytics-query-permissions.md)] ## Limitations
azure-monitor Ingest Logs Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingest-logs-event-hub.md
Title: Ingest events from Azure Event Hubs into Azure Monitor Logs
+ Title: Ingest events from Azure Event Hubs into Azure Monitor Logs (Preview)
description: Ingest logs from Event Hubs into Azure Monitor Logs
-# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs
+# Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Preview)
[Azure Event Hubs](../../event-hubs/event-hubs-about.md) is a big data streaming platform that collects events from multiple sources to be ingested by Azure and external services. This article explains how to ingest data directly from an event hub into a Log Analytics workspace.
Learn more about how to:
- [Create a custom table](../logs/create-custom-table.md#create-a-custom-table). - [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).-- [Update an existing data collection rule](../essentials/data-collection-rule-edit.md).
+- [Update an existing data collection rule](../essentials/data-collection-rule-edit.md).
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
You can:
- Perform up to four restores per table per week. ## Pricing model
-The charge for maintaining restored logs is calculated based on the volume of data you restore, in GB, and the number or days for which you restore the data. Charges are prorated and subject to the minimum restore duration and data volume. There is no charge for querying against restored logs.
+The charge for restored logs is based on the volume of data you restore, in GB, and the duration for which you keep the restored data.
+- Charges are subject to a minimum restored data volume.
+- Charges are prorated hourly and subject to a minimum restore duration.
+- For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-For example, if your table holds 500 GB a day and you restore 10 days of data, you'll be charged for 5000 GB a day until you dismiss the restored data.
+For example, if your table holds 500 GB a day and you restore 10 days of data, you'll be charged for 5000 GB a day until you [dismiss the restored data](#dismiss-restored-data).
> [!NOTE]
-> Billing of restore is not yet enabled. You can restore logs for free until early 2023.
-
-For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+> There is no charge for querying restored logs.
## Next steps
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
You can use all functions and binary operators within these operators.
## Pricing model The charge for a search job is based on: -- Search job execution - the amount of data the search job needs to scan.-- Search job results - the amount of data ingested in the results table, based on the regular log data ingestion prices.
+- Search job execution - the amount of data the search job scans.
+- Search job results - the amount of data the search job finds and is ingested into the results table, based on the regular log data ingestion prices.
-For example, if your table holds 500 GB per day, for a query on three days, you'll be charged for 1500 GB of scanned data. If the job returns 1000 records, you'll be charged for ingesting these 1000 records into the results table.
-
-> [!NOTE]
-> Search job execution is free until early 2023. In other words, until early 2023, you will only incur charges for ingesting the search results, not for executing the search job.
+For example, if your table holds 500 GB per day, for a search over 30 days, you'll be charged for 15,000 GB of scanned data.
+If the search job finds 1,000 records that match the search query, you'll be charged for ingesting these 1,000 records into the results table.
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
azure.status.microsoft (Azure Status)
storage.azure.com (Azure Storage) storage.azure.net (Azure Storage) vault.azure.net (Azure Key Vault Service)
+ux.console.azure.com (Azure Cloud Shell)
``` ### [U.S. Government Cloud](#tab/us-government-cloud)
azure-resource-manager Bicep Functions Parameters File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-parameters-file.md
Last updated 06/05/2023
# Parameters file function for Bicep
-Bicep provides a function called `readEnvironmentVariable()` that allows you to retrieve values from environment variables. It also offers the flexibility to set a default value if the environment variable does not exist. This function can only be using in the `.bicepparam` files. For more information, see [Bicep parameters file](./parameter-files.md).
+Bicep provides a function called `readEnvironmentVariable()` that allows you to retrieve values from environment variables. It also offers the flexibility to set a default value if the environment variable does not exist. This function can only be used in the `.bicepparam` files. For more information, see [Bicep parameters file](./parameter-files.md).
## readEnvironmentVariable()
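As a sketch (the template path and parameter names below are hypothetical), a `.bicepparam` file using this function might look like:

```bicep
using './main.bicep'

// Required value read from an environment variable
param sqlAdminLogin = readEnvironmentVariable('SQL_ADMIN_LOGIN')

// Optional value that falls back to a default when the variable doesn't exist
param environmentName = readEnvironmentVariable('ENVIRONMENT_NAME', 'dev')
```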
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
+
+ Title: Create & deploy deployment stacks in Bicep
+description: Describes how to create deployment stacks in Bicep.
+ Last updated : 07/10/2023++
+# Deployment stacks (Preview)
+
+An Azure deployment stack is a type of Azure resource that enables the management of a group of Azure resources as an atomic unit. When a Bicep file or an ARM JSON template is submitted to a deployment stack, it defines the resources that are managed by the stack. If a resource that was previously included in the template is removed from it, that resource is either detached or deleted based on the specified _actionOnUnmanage_ behavior of the deployment stack. As with other Azure resources, access to the deployment stack can be restricted using Azure role-based access control (Azure RBAC).
+
+To create and update a deployment stack, you can utilize Azure CLI, Azure PowerShell, or the Azure portal along with Bicep files. These Bicep files are transpiled into ARM JSON templates, which are then deployed as a deployment object by the stack. The deployment stack offers additional capabilities beyond the [familiar deployment resources](./deploy-cli.md), serving as a superset of those capabilities.
+
+`Microsoft.Resources/deploymentStacks` is the resource type for deployment stacks. It consists of a main template that can perform 1-to-many updates across scopes to the resources it describes, and block any unwanted changes to those resources.
+
+When planning your deployment and determining which resource groups should be part of the same stack, it's important to consider the management lifecycle of those resources, which includes creation, updating, and deletion. For instance, suppose you need to provision some test VMs for various application teams across different resource group scopes. In this case, a deployment stack can be utilized to create these test environments and update the test VM configurations through subsequent updates to the deployment stack. After completing the project, it may be necessary to remove or delete any resources that were created, such as the test VMs. By utilizing a deployment stack, the managed resources can be easily removed by specifying the appropriate delete flag. This streamlined approach saves time during environment cleanup, as it involves a single update to the stack resource rather than individually modifying or removing each test VM across various resource group scopes.
+
+Deployment stacks require Azure PowerShell [version 10.1.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.50.0 or later](/cli/azure/install-azure-cli).
+
+To create your first deployment stack, work through [Quickstart: create deployment stack](./quickstart-create-deployment-stacks.md).
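+As an illustration (the resource name and API version are placeholder assumptions), a minimal Bicep file submitted to a stack might define a single managed resource:
+
+```bicep
+// main.bicep - every resource declared here becomes a managed resource of the stack
+param location string = resourceGroup().location
+
+resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+  name: 'stdemostack001'
+  location: location
+  sku: {
+    name: 'Standard_LRS'
+  }
+  kind: 'StorageV2'
+}
+```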
+
+## Why use deployment stacks?
+
+Deployment stacks provide the following benefits:
+
+- Simplified provisioning and management of resources across different scopes as a cohesive entity.
+- Preventing undesired modifications to managed resources through [deny settings](#protect-managed-resources-against-deletion).
+- Efficient environment cleanup by employing delete flags during deployment stack updates.
+- Utilizing standard templates such as Bicep, ARM templates, or Template specs for your deployment stacks.
+
+### Known issues
+
+- Deleting resource groups currently bypasses deny assignments.
+- Implicitly created resources aren't managed by the stack. Therefore, no deny assignments or cleanup is possible.
+- [What-if](./deploy-what-if.md) isn't available in the preview.
+- Management group scoped deployment stacks can only deploy the template to subscription.
+- When using the Azure CLI create command to modify an existing stack, the deployment process continues regardless of whether you choose _n_ for a prompt. To halt the procedure, use _[CTRL] + C_.
+- If you create or modify a deployment stack in the Azure portal, deny settings will be overwritten (support for deny settings in the Azure portal is currently in progress).
+- Management group deployment stacks aren't yet available in the Azure portal.
++
+## Create deployment stacks
+
+A deployment stack resource can be created at resource group, subscription, or management group scope. The template passed into a deployment stack defines the resources to be created or updated at the target scope specified for the template deployment.
+
+- A stack at resource group scope can deploy the template passed-in to the same resource group scope where the deployment stack exists.
+- A stack at subscription scope can deploy the template passed-in to a resource group scope (if specified) or the same subscription scope where the deployment stack exists.
+- A stack at management group scope can deploy the template passed-in to the subscription scope specified.
+
+It's important to note that the deny assignment created through the deny settings capability exists at the same scope as the deployment stack itself. For example, by creating a deployment stack at subscription scope that deploys the template to resource group scope and with deny settings mode `DenyDelete`, you can easily provision managed resources to the specified resource group and block delete attempts to those resources. By using this approach, you also enhance the security of the deployment stack by separating it at the subscription level, as opposed to the resource group level. This separation ensures that the developer teams working with the provisioned resources only have visibility and write access to the resource groups, while the deployment stack remains isolated at a higher level. This minimizes the number of users that can edit a deployment stack and make changes to its deny assignment. For more information, see [Protect managed resources against deletion](#protect-managed-resources-against-deletion).
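+As a sketch of that pattern (the stack name, location, and resource group below are placeholders), a subscription-scoped stack that deploys into a resource group and blocks delete attempts might be created like this:
+
+```azurepowershell
+# Deny settings apply to the managed resources in the target resource group
+New-AzSubscriptionDeploymentStack `
+ -Name 'myDevStack' `
+ -Location 'eastus' `
+ -TemplateFile './main.bicep' `
+ -DeploymentResourceGroupName 'rg-dev-team' `
+ -DenySettingsMode DenyDelete
+```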
+
+The create-stack commands can also be used to [update deployment stacks](#update-deployment-stacks).
+
+To create a deployment stack at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ResourceGroupName '<resource-group-name>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode none
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode none
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To create a deployment stack at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DeploymentResourceGroupName '<resource-group-name>' `
+ -DenySettingsMode none
+```
+
+The `DeploymentResourceGroupName` parameter specifies the resource group used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the subscription scope.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deployment-resource-group-name <resource-group-name> \
+ --deny-settings-mode none
+```
+
+The `deployment-resource-group-name` parameter specifies the resource group used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the subscription scope.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To create a deployment stack at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzManagementGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DeploymentSubscriptionId '<subscription-id>' `
+ -DenySettingsMode none
+```
+
+The `DeploymentSubscriptionId` parameter specifies the subscription used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the management group scope.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deployment-subscription-id <subscription-id> \
+ --deny-settings-mode none
+```
+
+The `deployment-subscription-id` parameter specifies the subscription used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the management group scope.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## List deployment stacks
+
+To list deployment stack resources at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResourceGroupDeploymentStack `
+ -ResourceGroupName '<resource-group-name>'
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group list \
+ --resource-group <resource-group-name>
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the resource group that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks` to list the deployment stacks deployed to the resource group.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-group-list-stacks.png" alt-text="Screenshot of listing deployment stacks at the resource group scope.":::
+++
+To list deployment stack resources at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzSubscriptionDeploymentStack
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub list
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the subscription that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks` to list the deployment stacks deployed to the subscription.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-sub-list-stacks.png" alt-text="Screenshot of listing deployment stacks at the subscription scope.":::
+++
+To list deployment stack resources at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzManagementGroupDeploymentStack `
+ -ManagementGroupId '<management-group-id>'
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg list \
+ --management-group-id <management-group-id>
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## Update deployment stacks
+
+To update a deployment stack, which may involve adding or deleting a managed resource, you need to make changes to the underlying Bicep files. Once the modifications are made, you have two options to update the deployment stack: run the update command or rerun the create command.
+
+The list of managed resources can be fully controlled through the infrastructure as code (IaC) design pattern.
+
+### Use the Set command
+
+To update a deployment stack at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ResourceGroupName '<resource-group-name>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode none
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode none
+```
+
+> [!NOTE]
+> Azure CLI doesn't have a deployment stack set command. Use the new command instead.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To update a deployment stack at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DeploymentResourceGroupName '<resource-group-name>' `
+ -DenySettingsMode none
+```
+
+The `DeploymentResourceGroupName` parameter specifies the resource group used to store the managed resources. If the parameter isn't specified, the managed resources are stored in the subscription scope.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deployment-resource-group-name <resource-group-name> \
+ --deny-settings-mode none
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To update a deployment stack at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzManagementGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DeploymentSubscriptionId '<subscription-id>' `
+ -DenySettingsMode none
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deployment-subscription-id <subscription-id> \
+ --deny-settings-mode none
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+### Use the new command
+
+If you rerun the create command for an existing deployment stack, you get a warning similar to the following:
+
+```warning
+The deployment stack 'myStack' you're trying to create already exists in the current subscription/management group/resource group. Do you want to overwrite it? Detaching: resources, resourceGroups (Y/N)
+```
+
+For more information, see [Create deployment stacks](#create-deployment-stacks).
+
+### Control detachment and deletion
+
+A detached resource (or unmanaged resource) refers to a resource that isn't tracked or managed by the deployment stack but still exists within Azure.
+
+To instruct Azure to delete unmanaged resources, update the stack with the create stack command with one of the following delete flags. For more information, see [Create deployment stack](#create-deployment-stacks).
+
+# [PowerShell](#tab/azure-powershell)
+
+- `DeleteAll`: use delete rather than detach for managed resources and resource groups.
+- `DeleteResources`: use delete rather than detach for managed resources only.
+- `DeleteResourceGroups`: use delete rather than detach for managed resource groups only. It's invalid to use `DeleteResourceGroups` by itself. `DeleteResourceGroups` must be used together with `DeleteResources`.
+
+For example:
+
+```azurepowershell
+New-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode none `
+ -DeleteResourceGroups `
+ -DeleteResources
+```
+
+# [CLI](#tab/azure-cli)
+
+- `delete-all`: use delete rather than detach for managed resources and resource groups.
+- `delete-resources`: use delete rather than detach for managed resources only.
+- `delete-resource-groups`: use delete rather than detach for managed resource groups only. It's invalid to use `delete-resource-groups` by itself. `delete-resource-groups` must be used together with `delete-resources`.
+
+For example:
+
+```azurecli
+az stack sub create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode none \
+ --delete-resource-groups \
+ --delete-resources
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+> [!WARNING]
+> When deleting resource groups with either the `DeleteAll` or `DeleteResourceGroups` properties, the managed resource groups and all the resources contained within them will also be deleted.
+
+## Delete deployment stacks
+
+# [PowerShell](#tab/azure-powershell)
+
+If you run the delete commands without the delete flags, the unmanaged resources will be detached but not deleted. To delete the unmanaged resources, use the following switches:
+
+- `DeleteAll`: Delete both the resources and the resource groups.
+- `DeleteResources`: Delete the resources only.
+- `DeleteResourceGroups`: Delete the resource groups only.
+
+# [CLI](#tab/azure-cli)
+
+If you run the delete commands without the delete flags, the unmanaged resources will be detached but not deleted. To delete the unmanaged resources, use the following switches:
+
+- `delete-all`: Delete both the resources and the resource groups.
+- `delete-resources`: Delete the resources only.
+- `delete-resource-groups`: Delete the resource groups only.
+
+# [Portal](#tab/azure-portal)
+
+Select one of the delete flags when you delete a deployment stack.
++++
+Even if you specify the delete all switch, if there are unmanaged resources within the resource group where the deployment stack is located, neither the unmanaged resources nor the resource group itself will be deleted.
+
+To delete deployment stack resources at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -name '<deployment-stack-name>' `
+ -ResourceGroupName '<resource-group-name>' `
+ [-DeleteAll/-DeleteResourceGroups/-DeleteResources]
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group delete \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name> \
+ [--delete-all/--delete-resource-groups/--delete-resources]
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the resource group that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks`, select the deployment stack to be deleted, and then select `Delete stack`.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-group-delete-stacks.png" alt-text="Screenshot of deleting deployment stacks at the resource group scope.":::
+
+1. Select an `Update behavior`, and then select `Next`.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-group-delete-stack-update-behavior.png" alt-text="Screenshot of update behavior (delete flags) for deleting resource group scope deployment stacks.":::
+++
+To delete deployment stack resources at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ [-DeleteAll/-DeleteResourceGroups/-DeleteResources]
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub delete \
+ --name <deployment-stack-name> \
+ [--delete-all/--delete-resource-groups/--delete-resources]
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the subscription that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks`, select the deployment stack to be deleted, and then select `Delete stack`.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-sub-delete-stacks.png" alt-text="Screenshot of deleting deployment stacks at the subscription scope.":::
+
+1. Select an `Update behavior` (delete flag), and then select `Next`.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-sub-delete-stack-update-behavior.png" alt-text="Screenshot of update behavior (delete flags) for deleting subscription scope deployment stacks.":::
++
+To delete deployment stack resources at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzManagementGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ManagementGroupId '<management-group-id>' `
+ [-DeleteAll/-DeleteResourceGroups/-DeleteResources]
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg delete \
+ --name <deployment-stack-name> \
+ --management-group-id <management-group-id> \
+ [--delete-all/--delete-resource-groups/--delete-resources]
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## View managed resources in deployment stack
+
+During public preview, the deployment stack service doesn't yet have a full Azure portal graphical user interface (GUI). To view the managed resources inside a deployment stack, use the following Azure PowerShell and Azure CLI commands:
+
+To view managed resources at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name '<deployment-stack-name>' -ResourceGroupName '<resource-group-name>').Resources
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name> \
+ --output json
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the resource group that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks`.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-group-list-stacks.png" alt-text="Screenshot of listing managed resources at the resource group scope.":::
+
+1. Select one of the deployment stacks to view the managed resources of the deployment stack.
+++
+To view managed resources at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzSubscriptionDeploymentStack -Name '<deployment-stack-name>').Resources
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub show \
+ --name <deployment-stack-name> \
+ --output json
+```
+
+# [Portal](#tab/azure-portal)
+
+1. From the Azure portal, open the subscription that contains the deployment stacks.
+1. From the left menu, select `Deployment stacks` to list the deployment stacks deployed to the subscription.
+
+ :::image type="content" source="./media/deployment-stacks/deployment-stack-portal-sub-list-stacks.png" alt-text="Screenshot of listing managed resources at the subscription scope.":::
+
+1. Select the deployment stack to list the managed resources.
+++
+To view managed resources at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzManagementGroupDeploymentStack -Name '<deployment-stack-name>' -ManagementGroupId '<management-group-id>').Resources
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg show \
+ --name <deployment-stack-name> \
+ --management-group-id <management-group-id> \
+ --output json
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## Add resources to deployment stack
+
+To add a managed resource, add the resource definition to the underlying Bicep files, and then run the update command or rerun the create command. For more information, see [Update deployment stacks](#update-deployment-stacks).
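As an illustrative sketch (the resource name and API version here are assumptions, not taken from this article), adding a managed resource is an ordinary Bicep edit. For example, you might append a storage account definition to the stack's template before rerunning the create command:

```bicep
// Hypothetical resource to append to the stack's existing Bicep file.
// After saving the file, rerun the create or update command so the
// deployment stack starts managing this resource.
resource extraStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'extrastore${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}
```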
+
+## Delete managed resources from deployment stack
+
+To delete a managed resource, remove the resource definition from the underlying Bicep files, and then run the update command or rerun the create command. For more information, see [Update deployment stacks](#update-deployment-stacks).
+
+## Protect managed resources against deletion
+
+When creating a deployment stack, you can assign a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. These settings are referred to as deny settings. To apply deny settings, store the stack at a parent scope of the managed resources.
+
+# [PowerShell](#tab/azure-powershell)
+
+Azure PowerShell includes these parameters to customize the deny assignment:
+
+- `DenySettingsMode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `None`, `DenyDelete`, and `DenyWriteAndDelete`.
+- `DenySettingsApplyToChildScopes`: Deny settings are applied to nested resources under managed resources.
+- `DenySettingsExcludedActions`: List of role-based management operations that are excluded from the deny settings. Up to 200 actions are permitted.
+- `DenySettingsExcludedPrincipals`: List of Azure Active Directory (Azure AD) principal IDs excluded from the lock. Up to five principals are permitted.
+
+# [CLI](#tab/azure-cli)
+
+The Azure CLI includes these parameters to customize the deny assignment:
+
+- `deny-settings-mode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `none`, `denyDelete`, and `denyWriteAndDelete`.
+- `deny-settings-apply-to-child-scopes`: Deny settings are applied to nested resources under managed resources.
+- `deny-settings-excluded-actions`: List of role-based access control (RBAC) management operations excluded from the deny settings. Up to 200 actions are allowed.
+- `deny-settings-excluded-principals`: List of Azure Active Directory (Azure AD) principal IDs excluded from the lock. Up to five principals are allowed.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To apply deny settings at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ResourceGroupName '<resource-group-name>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode DenyDelete `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write","Microsoft.Storage/storageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>","<object-id>"
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode denyDelete \
+ --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.Storage/storageAccounts/delete \
+ --deny-settings-excluded-principals <object-id> <object-id>
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To apply deny settings at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode DenyDelete `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write","Microsoft.Storage/storageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>","<object-id>"
+```
+
+Use the `DeploymentResourceGroupName` parameter to specify the resource group in which the deployment stack is created. If a scope isn't specified, the scope of the deployment stack is used.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode denyDelete \
+ --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.Storage/storageAccounts/delete \
+ --deny-settings-excluded-principals <object-id> <object-id>
+```
+
+Use the `deployment-resource-group` parameter to specify the resource group in which the deployment stack is created. If a scope isn't specified, the scope of the deployment stack is used.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To apply deny settings at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzManagementGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -Location '<location>' `
+ -TemplateFile '<bicep-file-name>' `
+ -DenySettingsMode DenyDelete `
+ -DenySettingsExcludedActions "Microsoft.Compute/virtualMachines/write","Microsoft.Storage/storageAccounts/delete" `
+ -DenySettingsExcludedPrincipals "<object-id>","<object-id>"
+```
+
+Use the `DeploymentSubscriptionId` parameter to specify the subscription in which the deployment stack is created. If a scope isn't specified, the scope of the deployment stack is used.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg create \
+ --name <deployment-stack-name> \
+ --location <location> \
+ --template-file <bicep-file-name> \
+ --deny-settings-mode denyDelete \
+ --deny-settings-excluded-actions Microsoft.Compute/virtualMachines/write Microsoft.Storage/storageAccounts/delete \
+ --deny-settings-excluded-principals <object-id> <object-id>
+```
+
+Use the `deployment-subscription` parameter to specify the subscription in which the deployment stack is created. If a scope isn't specified, the scope of the deployment stack is used.
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## Detach managed resources from deployment stack
+
+By default, deployment stacks detach, rather than delete, resources when they're no longer contained within the stack's management scope. For more information, see [Update deployment stacks](#update-deployment-stacks).
+
+## Export templates from deployment stacks
+
+You can export the managed resources from a deployment stack as JSON output, and pipe the output to a file.
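For example, to keep a copy of the exported template for source control, you can redirect the command output to a file (the file name here is arbitrary):

```azurecli
az stack group export \
  --name <deployment-stack-name> \
  --resource-group <resource-group-name> > exported-template.json
```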
+
+To export a deployment stack at the resource group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Save-AzResourceGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ResourceGroupName '<resource-group-name>'
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group export \
+ --name <deployment-stack-name> \
+ --resource-group <resource-group-name>
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To export a deployment stack at the subscription scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Save-AzSubscriptionDeploymentStack `
+ -Name '<deployment-stack-name>'
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack sub export \
+ --name <deployment-stack-name>
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+To export a deployment stack at the management group scope:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Save-AzManagementGroupDeploymentStack `
+ -Name '<deployment-stack-name>' `
+ -ManagementGroupId '<management-group-id>'
+```
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack mg export \
+ --name <deployment-stack-name> \
+ --management-group-id <management-group-id>
+```
+
+# [Portal](#tab/azure-portal)
+
+Currently not implemented.
+++
+## Next steps
+
+To go through a quickstart, see [Quickstart: Create a deployment stack](./quickstart-create-deployment-stacks.md).
azure-resource-manager Quickstart Create Deployment Stacks Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-deployment-stacks-template-specs.md
+
+ Title: Create and deploy a deployment stack with Bicep from template specs
+description: Learn how to use Bicep to create and deploy a deployment stack from template specs.
Last updated : 07/06/2023++
+# Customer intent: As a developer I want to use Bicep to create a deployment stack from a template spec.
++
+# Quickstart: Create and deploy a deployment stack with Bicep from template specs (Preview)
+
+This quickstart describes how to create a [deployment stack](deployment-stacks.md) from a template spec.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell [version 10.1.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.50.0 or later](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+
+## Create a Bicep file
+
+Create a Bicep file to create a storage account and a virtual network.
+
+```bicep
+param resourceGroupLocation string = resourceGroup().location
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
+param vnetName string = 'vnet${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: storageAccountName
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2022-11-01' = {
+ name: vnetName
+ location: resourceGroupLocation
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ '10.0.0.0/16'
+ ]
+ }
+ subnets: [
+ {
+ name: 'Subnet-1'
+ properties: {
+ addressPrefix: '10.0.0.0/24'
+ }
+ }
+ {
+ name: 'Subnet-2'
+ properties: {
+ addressPrefix: '10.0.1.0/24'
+ }
+ }
+ ]
+ }
+}
+```
+
+Save the Bicep file as _main.bicep_.
+
+## Create template spec
+
+Create a template spec with the following command.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az group create \
+ --name 'templateSpecRG' \
+ --location 'centralus'
+
+az ts create \
+ --name 'stackSpec' \
+ --version '1.0' \
+ --resource-group 'templateSpecRG' \
+ --location 'centralus' \
+ --template-file 'main.bicep'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup `
+ -Name "templateSpecRG" `
+ -Location "centralus"
+
+New-AzTemplateSpec `
+ -Name "stackSpec" `
+ -Version "1.0" `
+ -ResourceGroupName "templateSpecRG" `
+ -Location "centralus" `
+ -TemplateFile "main.bicep"
+```
+++
+The format of the template spec ID is `/subscriptions/<subscription-id>/resourceGroups/templateSpecRG/providers/Microsoft.Resources/templateSpecs/stackSpec/versions/1.0`.
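The ID can also be assembled from its parts in a shell variable, which is handy for scripting. This is a sketch with a placeholder subscription ID:

```shell
# Placeholder subscription ID; substitute your own.
subscriptionId="00000000-0000-0000-0000-000000000000"

# Assemble the template spec version ID used when creating the stack.
templateSpecId="/subscriptions/${subscriptionId}/resourceGroups/templateSpecRG/providers/Microsoft.Resources/templateSpecs/stackSpec/versions/1.0"
echo "${templateSpecId}"
```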
+
+## Create a deployment stack
+
+Create a deployment stack from the template spec.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az group create \
+ --name 'demoRg' \
+ --location 'centralus'
+
+id=$(az ts show --name stackSpec --resource-group templateSpecRG --version "1.0" --query "id" --output tsv)
+
+az stack group create \
+ --name demoStack \
+ --resource-group 'demoRg' \
+ --template-spec $id \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup `
+ -Name "demoRg" `
+ -Location "centralus"
+
+$id = (Get-AzTemplateSpec -ResourceGroupName templateSpecRG -Name stackSpec -Version "1.0").Versions.Id
+
+New-AzResourceGroupDeploymentStack `
+ -Name 'demoStack' `
+ -ResourceGroupName 'demoRg' `
+ -TemplateSpecId $id `
+ -DenySettingsMode none
+```
+++
+## Verify the deployment
+
+To list the deployed deployment stacks at the resource group level:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --resource-group 'demoRg' \
+ --name 'demoStack'
+```
+
+The output shows two managed resources - one storage account and one virtual network:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-08-14-58-28-fd6bb",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT30.1685405S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-08T14:58:28.377564+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-08T14:58:28.377564+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
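If you save the JSON output to a file, you can inspect it offline. The following sketch writes a trimmed sample of the shape shown above (the resource IDs are placeholders) and then lists only the managed resource IDs with Python:

```shell
# Write a trimmed sample of the `az stack group show` output (placeholder IDs).
cat <<'EOF' > stack-output.json
{
  "resources": [
    { "denyStatus": "none", "id": "/subscriptions/0000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetdemo", "status": "managed" },
    { "denyStatus": "none", "id": "/subscriptions/0000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storedemo", "status": "managed" }
  ]
}
EOF

# List only the managed resource IDs.
python3 -c "
import json
stack = json.load(open('stack-output.json'))
for r in stack['resources']:
    if r['status'] == 'managed':
        print(r['id'])
"
```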
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResourceGroupDeploymentStack -ResourceGroupName demoRg -Name demoStack
+```
+
+The output shows two managed resources - one virtual network, and one storage account:
+
+```output
+Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack
+Name : demoStack
+ProvisioningState : succeeded
+ResourcesCleanupAction : detach
+ResourceGroupsCleanupAction : detach
+DenySettingsMode : none
+CreationTime(UTC) : 6/5/2023 8:55:48 PM
+DeploymentId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09
+Resources : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+ /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
+You can also verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+The output is similar to:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT29.006353S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-05T20:55:48.006789+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-05T20:55:48.006789+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+
+The output is similar to:
+
+```output
+Status DenyStatus Id
+------ ---------- --
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
+## Delete the deployment stack
+
+To delete the deployment stack and the managed resources:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --delete-all
+```
+
+If you run the delete command without any of the delete parameters, the managed resources are detached but not deleted. For example:
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg'
+```
+
+Use the following parameters to control whether resources are detached or deleted.
+
+- `--delete-all`: Delete both the resources and the resource groups.
+- `--delete-resources`: Delete the resources only.
+- `--delete-resource-groups`: Delete the resource groups only. This parameter is invalid by itself; use it together with `--delete-resources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name demoStack `
+ -ResourceGroupName demoRg `
+ -DeleteAll
+```
+
+If you run the delete command without any of the delete parameters, the managed resources are detached but not deleted. For example:
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg"
+```
+
+Use the following parameters to control whether resources are detached or deleted.
+
+- `DeleteAll`: Delete both the resource groups and the managed resources.
+- `DeleteResources`: Delete the managed resources only.
+- `DeleteResourceGroups`: Delete the resource groups only. This parameter is invalid by itself; use it together with `DeleteResources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+++
+## Clean up resources
+
+The remove command only removes the managed resources and managed resource groups. You still need to delete the unmanaged resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name 'demoRg'
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name "demoRg"
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deployment stacks](./deployment-stacks.md)
azure-resource-manager Quickstart Create Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-deployment-stacks.md
+
+ Title: Create and deploy a deployment stack with Bicep (Preview)
+description: Learn how to use Bicep to create and deploy a deployment stack in your Azure subscription.
Last updated : 07/06/2023++
+# Customer intent: As a developer I want to use Bicep to create a deployment stack.
++
+# Quickstart: Create and deploy a deployment stack with Bicep
+
+This quickstart describes how to create a [deployment stack](deployment-stacks.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell [version 10.1.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.50.0 or later](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+
+## Create a Bicep file
+
+Create a Bicep file to create a storage account and a virtual network.
+
+```bicep
+param resourceGroupLocation string = resourceGroup().location
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
+param vnetName string = 'vnet${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: storageAccountName
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2022-11-01' = {
+ name: vnetName
+ location: resourceGroupLocation
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ '10.0.0.0/16'
+ ]
+ }
+ subnets: [
+ {
+ name: 'Subnet-1'
+ properties: {
+ addressPrefix: '10.0.0.0/24'
+ }
+ }
+ {
+ name: 'Subnet-2'
+ properties: {
+ addressPrefix: '10.0.1.0/24'
+ }
+ }
+ ]
+ }
+}
+```
+
+Save the Bicep file as _main.bicep_.
+
+## Create a deployment stack
+
+In this quickstart, you create the deployment stack at the resource group scope. You can also create the deployment stack at the subscription scope or the management group scope. For more information, see [Create deployment stacks](./deployment-stacks.md#create-deployment-stacks).
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az group create \
+ --name 'demoRg' \
+ --location 'centralus'
+
+az stack group create \
+ --name demoStack \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup `
+ -Name "demoRg" `
+ -Location "centralus"
+
+New-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+## Verify the deployment
+
+To list the deployed deployment stacks at the resource group level:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --resource-group 'demoRg' \
+ --name 'demoStack'
+```
+
+The output shows two managed resources - one storage account and one virtual network:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-08-14-58-28-fd6bb",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT30.1685405S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-08T14:58:28.377564+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-08T14:58:28.377564+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResourceGroupDeploymentStack `
+ -ResourceGroupName "demoRg" `
+ -Name "demoStack"
+```
+
+The output shows two managed resources - one storage account and one virtual network:
+
+```output
+Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack
+Name : demoStack
+ProvisioningState : succeeded
+ResourcesCleanupAction : detach
+ResourceGroupsCleanupAction : detach
+DenySettingsMode : none
+CreationTime(UTC) : 6/5/2023 8:55:48 PM
+DeploymentId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09
+Resources : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+ /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
+You can also verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+The output is similar to:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT29.006353S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-05T20:55:48.006789+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-05T20:55:48.006789+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+
+The output is similar to:
+
+```output
+Status DenyStatus Id
+------ ---------- --
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
+After the stack is created, you can view both the stack and its managed resources in the Azure portal. Navigate to the resource group where the stack is deployed to see the relevant information and settings.
++
+## Update the deployment stack
+
+To update a deployment stack, modify the underlying Bicep file, and then rerun the create deployment stack command.
+
+Edit **main.bicep** to change the sku name from `Standard_LRS` to `Standard_GRS`.
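After the edit, the storage account's sku block looks like this:

```bicep
sku: {
  name: 'Standard_GRS'
}
```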
+
+Run the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+From the Azure portal, check the properties of the storage account to confirm the change.
+
+Using the same method, you can add a resource to the deployment stack or remove a managed resource from the deployment stack. For more information, see [Add resources to a deployment stack](./deployment-stacks.md#add-resources-to-deployment-stack) and [Delete managed resources from a deployment stack](./deployment-stacks.md#delete-managed-resources-from-deployment-stack).
+
+## Delete the deployment stack
+
+To delete the deployment stack and the managed resources:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --delete-all
+```
+
+If you run the delete command without any of the delete parameters, the managed resources are detached but not deleted. For example:
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg'
+```
+
+Use the following parameters to control whether resources are detached or deleted.
+
+- `--delete-all`: Delete both the resources and the resource groups.
+- `--delete-resources`: Delete the resources only.
+- `--delete-resource-groups`: Delete the resource groups only. This parameter is invalid by itself; use it together with `--delete-resources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -DeleteAll
+```
+
+If you run the delete command without any of the delete parameters, the managed resources are detached but not deleted. For example:
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg"
+```
+
+Use the following parameters to control whether the managed resources are detached or deleted:
+
+- `DeleteAll`: Delete both the managed resources and the resource groups.
+- `DeleteResources`: Delete the managed resources only.
+- `DeleteResourceGroups`: Delete the resource groups only. You can't use `DeleteResourceGroups` by itself; it must be combined with `DeleteResources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+++
+The delete command only removes managed resources and managed resource groups. You're still responsible for deleting any resource groups that aren't managed by the deployment stack.
+
+## Clean up resources
+
+Delete the unmanaged resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name 'demoRg'
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name "demoRg"
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deployment stacks](./deployment-stacks.md)
azure-resource-manager Tutorial Use Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/tutorial-use-deployment-stacks.md
+
+ Title: Use deployment stack with Bicep
+description: Learn how to use Bicep to create and deploy a deployment stack.
Last updated : 07/06/2023++++
+# Tutorial: use deployment stack with Bicep (Preview)
+
+In this tutorial, you learn the process of creating and managing a deployment stack. The tutorial focuses on creating the deployment stack at the resource group scope. However, you can also create deployment stacks at the subscription scope. To gain further insights into creating deployment stacks, see [Create deployment stacks](./deployment-stacks.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell [version 10.1.0 or later](/powershell/azure/install-az-ps) or Azure CLI [version 2.50.0 or later](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+
+## Create a Bicep file
+
+Create a Bicep file in Visual Studio Code to create a storage account and a virtual network. This file is used to create your deployment stack.
+
+```bicep
+param resourceGroupLocation string = resourceGroup().location
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
+param vnetName string = 'vnet${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: storageAccountName
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2022-11-01' = {
+ name: vnetName
+ location: resourceGroupLocation
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ '10.0.0.0/16'
+ ]
+ }
+ subnets: [
+ {
+ name: 'Subnet-1'
+ properties: {
+ addressPrefix: '10.0.0.0/24'
+ }
+ }
+ {
+ name: 'Subnet-2'
+ properties: {
+ addressPrefix: '10.0.1.0/24'
+ }
+ }
+ ]
+ }
+}
+```
+
+Save the Bicep file as _main.bicep_.
+
+## Create a deployment stack
+
+To create a resource group and a deployment stack, execute the following commands, ensuring you provide the appropriate Bicep file path based on your execution location.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az group create \
+ --name 'demoRg' \
+ --location 'centralus'
+
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+The `deny-settings-mode` switch assigns a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. For more information, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources-against-deletion).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup `
+ -Name "demoRg" `
+ -Location "centralus"
+
+New-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+
+The `DenySettingsMode` switch assigns a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. For more information, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources-against-deletion).
+++
+## List the deployment stack and the managed resources
+
+To verify the deployment, you can list the deployment stack and its managed resources.
+
+To list the deployed deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --resource-group 'demoRg' \
+ --name 'demoStack'
+```
+
+The output shows two managed resources - one storage account and one virtual network:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-08-14-58-28-fd6bb",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT30.1685405S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetthmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storethmimleef5fwk",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-08T14:58:28.377564+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-08T14:58:28.377564+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResourceGroupDeploymentStack `
+ -ResourceGroupName "demoRg" `
+ -Name "demoStack"
+```
+
+The output shows two managed resources - one storage account and one virtual network:
+
+```output
+Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack
+Name : demoStack
+ProvisioningState : succeeded
+ResourcesCleanupAction : detach
+ResourceGroupsCleanupAction : detach
+DenySettingsMode : none
+CreationTime(UTC) : 6/5/2023 8:55:48 PM
+DeploymentId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09
+Resources : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+ /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
+You can also verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+The output is similar to:
+
+```output
+{
+ "actionOnUnmanage": {
+ "managementGroups": "detach",
+ "resourceGroups": "detach",
+ "resources": "detach"
+ },
+ "debugSetting": null,
+ "deletedResources": [],
+ "denySettings": {
+ "applyToChildScopes": false,
+ "excludedActions": null,
+ "excludedPrincipals": null,
+ "mode": "none"
+ },
+ "deploymentId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deployments/demoStack-2023-06-05-20-55-48-38d09",
+ "deploymentScope": null,
+ "description": null,
+ "detachedResources": [],
+ "duration": "PT29.006353S",
+ "error": null,
+ "failedResources": [],
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Resources/deploymentStacks/demoStack",
+ "location": null,
+ "name": "demoStack",
+ "outputs": null,
+ "parameters": {},
+ "parametersLink": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "demoRg",
+ "resources": [
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ },
+ {
+ "denyStatus": "none",
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm",
+ "resourceGroup": "demoRg",
+ "status": "managed"
+ }
+ ],
+ "systemData": {
+ "createdAt": "2023-06-05T20:55:48.006789+00:00",
+ "createdBy": "johndole@contoso.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-06-05T20:55:48.006789+00:00",
+ "lastModifiedBy": "johndole@contoso.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "template": null,
+ "templateLink": null,
+ "type": "Microsoft.Resources/deploymentStacks"
+}
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+
+The output is similar to:
+
+```output
+Status DenyStatus Id
+------ ---------- --
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm
+managed none /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm
+```
+++
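If you script against the stack, the JSON that `az stack group show` returns is easy to post-process. The following is a minimal Python sketch; the embedded JSON is a trimmed sample of the output above (your resource IDs will differ), and in practice you'd feed the command output into the script instead:

```python
import json

# Trimmed sample shaped like the `az stack group show` output.
stack_json = """
{
  "name": "demoStack",
  "resources": [
    {"denyStatus": "none", "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Network/virtualNetworks/vnetzu6pnx54hqubm", "status": "managed"},
    {"denyStatus": "none", "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demoRg/providers/Microsoft.Storage/storageAccounts/storezu6pnx54hqubm", "status": "managed"}
  ]
}
"""

stack = json.loads(stack_json)

# Keep only the resources the stack actually manages.
managed_ids = [r["id"] for r in stack["resources"] if r["status"] == "managed"]
for resource_id in managed_ids:
    print(resource_id)
```

The same filtering can be done inline with the `--query` parameter of the Azure CLI, but post-processing the JSON keeps the logic in one place when you need more than a simple filter.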
+## Update the deployment stack
+
+To update a deployment stack, make the necessary modifications to the underlying Bicep file, and then rerun the command for creating the deployment stack or use the set command in Azure PowerShell.
+
+In this tutorial, you perform the following activities:
+
+- Update a property of a managed resource.
+- Add a resource to the stack.
+- Detach a managed resource.
+- Attach an existing resource to the stack.
+- Delete a managed resource.
+
+### Update a managed resource
+
+At the end of the previous step, you have one stack with two managed resources. You will update a property of the storage account resource.
+
+Edit the **main.bicep** file to change the sku name from `Standard_LRS` to `Standard_GRS`:
+
+```bicep
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: storageAccountName
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_GRS'
+ }
+}
+```
+
+Update the managed resource by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+The following sample shows the set command. You can also use the `New-AzResourceGroupDeploymentStack` cmdlet.
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+You can verify the SKU property by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az resource list --resource-group 'demoRg'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzStorageAccount -ResourceGroupName "demoRg"
+```
+++
+### Add a managed resource
+
+At the end of the previous step, you have one stack with two managed resources. You will add one more storage account resource to the stack.
+
+Edit the **main.bicep** file to include another storage account definition:
+
+```bicep
+resource storageAccount1 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: '1${storageAccountName}'
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+```
+
+Update the deployment stack by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+You can verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+++
+You should see the new storage account in addition to the two existing resources.
+
+### Detach a managed resource
+
+At the end of the previous step, you have one stack with three managed resources. In this step, you detach one of the managed resources. After the resource is detached, it remains in the resource group.
+
+Edit the **main.bicep** file to remove the following storage account definition from the previous step:
+
+```bicep
+resource storageAccount1 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: '1${storageAccountName}'
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+```
+
+Update the deployment stack by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+You can verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+++
+You should see two managed resources in the stack. However, the detached resource is still listed in the resource group. You can list the resources in the resource group by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az resource list --resource-group 'demoRg'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName "demoRg"
+```
+
+There are three resources in the resource group, even though the stack only contains two resources.
+++
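To spot which resources in the group are no longer managed by the stack, you can compare the two listings. The following is a minimal Python sketch, using shortened illustrative names in place of the full resource IDs:

```python
# Resource IDs reported for the resource group (from `az resource list`) and
# for the stack (from the stack's `resources` array). Shortened sample values.
group_resources = {"vnetzu6pnx54hqubm", "storezu6pnx54hqubm", "1storezu6pnx54hqubm"}
stack_resources = {"vnetzu6pnx54hqubm", "storezu6pnx54hqubm"}

# Anything in the group but not in the stack is unmanaged (for example, detached).
unmanaged = group_resources - stack_resources
print(sorted(unmanaged))  # ['1storezu6pnx54hqubm']
```

In this tutorial's scenario, the set difference contains exactly the storage account you detached in the previous step.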
+### Attach an existing resource to the stack
+
+At the end of the previous step, you have one stack with two managed resources. There is an unmanaged resource in the same resource group as the managed resources. You will attach this unmanaged resource to the stack.
+
+Edit the **main.bicep** file to include the storage account definition of the unmanaged resource:
+
+```bicep
+resource storageAccount1 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: '1${storageAccountName}'
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+```
+
+Update the deployment stack by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+You can verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+++
+You should see three managed resources.
+
+### Delete a managed resource
+
+At the end of the previous step, you have one stack with three managed resources. In one of the previous steps, you detached a managed resource. Sometimes you might want to delete a resource instead of detaching it. To delete a resource, use the delete-resources switch with the create/set command.
+
+Edit the **main.bicep** file to remove the following storage account definition:
+
+```bicep
+resource storageAccount1 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: '1${storageAccountName}'
+ location: resourceGroupLocation
+ kind: 'StorageV2'
+ sku: {
+ name: 'Standard_LRS'
+ }
+}
+```
+
+Run the following command with the delete-resources switch:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none' \
+ --delete-resources
+```
+
+In addition to the `delete-resources` switch, there are two other switches available: `delete-all` and `delete-resource-groups`. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none" `
+ -DeleteResources
+```
+
+In addition to the `DeleteResources` switch, there are two other switches available: `DeleteAll` and `DeleteResourceGroups`. For more information, see [Control detachment and deletion](./deployment-stacks.md#control-detachment-and-deletion).
+++
+You can verify the deployment by listing the managed resources in the deployment stack:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group show \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --output 'json'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+(Get-AzResourceGroupDeploymentStack -Name "demoStack" -ResourceGroupName "demoRg").Resources
+```
+++
+You should see two managed resources in the stack. The deleted resource is also removed from the resource group. You can verify the resource group by running the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az resource list --resource-group 'demoRg'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName "demoRg"
+```
+++
+## Configure deny settings
+
+When creating a deployment stack, you can assign a specific type of permissions to the managed resources to prevent their deletion by unauthorized security principals. These settings are referred to as deny settings.
+
+# [PowerShell](#tab/azure-powershell)
+
+Azure PowerShell includes these parameters to customize the deny assignment:
+
+- `DenySettingsMode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `None`, `DenyDelete`, and `DenyWriteAndDelete`.
+- `DenySettingsApplyToChildScopes`: Deny settings are applied to child Azure management scopes.
+- `DenySettingsExcludedActions`: List of role-based management operations that are excluded from the deny settings. Up to 200 actions are permitted.
+- `DenySettingsExcludedPrincipals`: List of Azure Active Directory (Azure AD) principal IDs excluded from the lock. Up to five principals are permitted.
+
+# [CLI](#tab/azure-cli)
+
+Azure CLI includes these parameters to customize the deny assignment:
+
+- `deny-settings-mode`: Defines the operations that are prohibited on the managed resources to safeguard against unauthorized security principals attempting to delete or update them. This restriction applies to everyone unless explicitly granted access. The values include: `none`, `denyDelete`, and `denyWriteAndDelete`.
+- `deny-settings-apply-to-child-scopes`: Deny settings are applied to child Azure management scopes.
+- `deny-settings-excluded-actions`: List of role-based access control (RBAC) management operations excluded from the deny settings. Up to 200 actions are allowed.
+- `deny-settings-excluded-principals`: List of Azure Active Directory (Azure AD) principal IDs excluded from the lock. Up to five principals are allowed.
+++
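The deny settings parameters can be combined. The following sketch (the excluded principal ID is a placeholder; replace it with a real Azure AD object ID) blocks both writes and deletes on the managed resources for everyone except the one excluded principal:

```azurecli
az stack group create \
  --name 'demoStack' \
  --resource-group 'demoRg' \
  --template-file './main.bicep' \
  --deny-settings-mode 'denyWriteAndDelete' \
  --deny-settings-excluded-principals '00000000-0000-0000-0000-000000000000'
```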
+In this tutorial, you configure the deny settings mode. For more information about other deny settings, see [Protect managed resources against deletion](./deployment-stacks.md#protect-managed-resources-against-deletion).
+
+At the end of the previous step, you have one stack with two managed resources.
+
+Run the following command with the deny settings mode switch set to deny-delete:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'denyDelete'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "DenyDelete"
+```
+++
+The following delete command should fail because the deny settings mode is set to deny-delete:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az resource delete \
+ --resource-group 'demoRg' \
+ --name '<storage-account-name>' \
+ --resource-type 'Microsoft.Storage/storageAccounts'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResource `
+ -ResourceGroupName "demoRg" `
+ -ResourceName "<storage-account-name>" `
+ -ResourceType "Microsoft.Storage/storageAccounts"
+```
+++
+Update the stack with the deny settings mode set to none so that you can complete the rest of the tutorial:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group create \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --template-file './main.bicep' \
+ --deny-settings-mode 'none'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -TemplateFile "./main.bicep" `
+ -DenySettingsMode "none"
+```
+++
+## Export template from the stack
+
+By exporting a deployment stack, you can generate a Bicep file. This Bicep file serves as a resource for future development and subsequent deployments.
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group export \
+ --name 'demoStack' \
+ --resource-group 'demoRg'
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Export-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg"
+```
+++
+You can pipe the output to a file.
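For example, with the Azure CLI you can redirect the command output straight into a file for later inspection (the file name here is arbitrary):

```azurecli
az stack group export \
  --name 'demoStack' \
  --resource-group 'demoRg' > exported-stack.json
```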
+
+## Delete the deployment stack
+
+To delete the deployment stack and its managed resources, run the following command:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg' \
+ --delete-all
+```
+
+If you run the delete command without any of the **delete** parameters, the managed resources are detached but not deleted. For example:
+
+```azurecli
+az stack group delete \
+ --name 'demoStack' \
+ --resource-group 'demoRg'
+```
+
+Use the following parameters to control whether the managed resources are detached or deleted:
+
+- `--delete-all`: Delete both the managed resources and the resource groups.
+- `--delete-resources`: Delete the managed resources only.
+- `--delete-resource-groups`: Delete the resource groups only. You can't use `--delete-resource-groups` by itself; it must be combined with `--delete-resources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg" `
+ -DeleteAll
+```
+
+If you run the delete command without any of the **delete** parameters, the managed resources are detached but not deleted. For example:
+
+```azurepowershell
+Remove-AzResourceGroupDeploymentStack `
+ -Name "demoStack" `
+ -ResourceGroupName "demoRg"
+```
+
+Use the following parameters to control whether the managed resources are detached or deleted:
+
+- `DeleteAll`: Delete both the managed resources and the resource groups.
+- `DeleteResources`: Delete the managed resources only.
+- `DeleteResourceGroups`: Delete the resource groups only. You can't use `DeleteResourceGroups` by itself; it must be combined with `DeleteResources`.
+
+For more information, see [Delete deployment stacks](./deployment-stacks.md#delete-deployment-stacks).
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deployment stacks](./deployment-stacks.md)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | mediaservices / liveEvents / liveOutputs | Live event | 1-256 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. | > | mediaservices / streamingEndpoints | Media service | 1-24 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+## Microsoft.MobileNetwork
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | mobileNetworks | Resource Group | 1-64 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+> | mobileNetworks / sites | Mobile Network | 1-64 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+> | mobileNetworks / slices | Mobile Network | 1-64 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+> | mobileNetworks / services | Mobile Network | 1-64 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. <br><br> The following words cannot be used on their own as the name: `default`, `requested`, `service`.|
+> | mobileNetworks / dataNetworks | Mobile Network | 1-64 | Alphanumeric, hyphens and a period/dot (`.`) <br><br> Start and end with alphanumeric. <br><br> Note: A period/dot (`.`) must be followed by an alphanumeric character. |
+> | mobileNetworks / simPolicies | Mobile Network | 1-64 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+> | packetCoreControlPlanes | Resource Group | 1-64 | Alphanumeric, underscores and hyphens. <br><br> Start with alphanumeric. |
+> | packetCoreControlPlanes / packetCoreDataPlanes | Packet Core Control Plane | 1-64 | Alphanumeric, underscores and hyphens. <br><br> Start with alphanumeric. |
+> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Mobile Network | 1-64 | Alphanumeric, hyphens and a period/dot (`.`) <br><br> Start and end with alphanumeric. <br><br> Note: A period/dot (`.`) must be followed by an alphanumeric character. |
+> | simGroups | Resource Group | 1-64 | Alphanumeric, underscores and hyphens <br><br> Start with alphanumeric |
+> | simGroups / sims | Sim Group | 1-64 | Alphanumeric, underscores and hyphens <br><br> Start with alphanumeric |
+ ## Microsoft.NetApp > [!div class="mx-tableFixed"]
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
Title: Back up SAP HANA System Replication databases on Azure VMs description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Previously updated : 07/06/2023 Last updated : 07/11/2023 --++ # Back up SAP HANA System Replication databases on Azure VMs (preview)
SAP HANA databases are critical workloads that require a low recovery-point obje
You can also switch the protection of SAP HANA database on Azure VM (standalone) on Azure Backup to HSR. [Learn more](#switch-database-protection-from-standalone-to-hsr-on-azure-backup). >[!Note]
->- The support for HSR + Database scenario is currently not available because there is a restriction to have VM and Vault in the same region.
+>- The support for **HSR + DR** scenario is currently not available because there is a restriction to have VM and Vault in the same region.
>- For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). ## Prerequisites
backup Backup Powershell Sample Backup Encrypted Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-sample-backup-encrypted-vm.md
description: In this article, learn how to use an Azure PowerShell Script sample
Last updated 03/05/2019 --++ # Back up an encrypted Azure virtual machine with PowerShell
backup Backup Powershell Script Find Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-script-find-recovery-services-vault.md
description: Learn how to use an Azure PowerShell script to find the Recovery Se
Last updated 1/28/2020 --++ # PowerShell Script to find the Recovery Services vault where a Storage Account is registered
backup Backup Powershell Script Undelete File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-script-undelete-file-share.md
description: Learn how to use an Azure PowerShell script to undelete an accident
Last updated 02/02/2020 --++ # PowerShell script to undelete an accidentally deleted File share
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md
Last updated 03/06/2023 --++ # PowerShell script to delete a Recovery Services vault
backup Disable Soft Delete For File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/disable-soft-delete-for-file-shares.md
Title: Script Sample - Disable Soft delete for File Share
description: Learn how to use a script to disable soft delete for file shares in a storage account. Last updated 02/02/2020--++ # Disable soft delete for file shares in a storage account
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
description: Learn about geo-codes mapped with the respective regions.
Last updated 03/07/2022 --++ # Geo-code mapping
backup Install Latest Microsoft Azure Recovery Services Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/install-latest-microsoft-azure-recovery-services-agent.md
Title: Script Sample - Install the latest MARS agent on on-premises Windows serv
description: Learn how to use a script to install the latest MARS agent on your on-premises Windows servers in a storage account. Last updated 06/23/2021--++ # PowerShell Script to install the latest MARS agent on an on-premises Windows server
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
description: Learn how to use a script to configure Backup for on-premises Windo
Last updated 06/23/2021--++ # PowerShell Script to configure Backup for on-premises Windows server
backup Register Microsoft Azure Recovery Services Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/register-microsoft-azure-recovery-services-agent.md
Title: Script Sample - Register an on-premises Windows server or client machine
description: Learn how to use a script to register an on-premises Windows Server or client machine with a Recovery Services vault. Last updated 06/23/2021--++ # PowerShell Script to register an on-premises Windows server or a client machine with Recovery Services vault
backup Set File Folder Backup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/set-file-folder-backup-policy.md
Title: Script Sample - Create a new or modify the current file and folder backup
description: Learn how to use a script to create a new policy or modify the current file and folder backup policy. Last updated 06/23/2021--++ # PowerShell Script to create a new or modify the current file and folder backup policy
backup Set System State Backup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/set-system-state-backup-policy.md
Title: Script Sample - Create a new or modify the current system state backup po
description: Learn how to use a script to create a new or modify the current system state backup policy. Last updated 06/23/2021--++ # PowerShell Script to create a new or modify the current system state backup policy
bastion Connect Vm Native Client Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md
Verify that the following roles and ports are configured in order to connect to
## <a name="ssh"></a>Connect to a Linux VM
-The steps in the following sections help you connect to a Linux VM from a Linux native client using the **az network bastion** command. This extension can be installed by running, `az extension add --name ssh`.
+The steps in the following sections help you connect to a Linux VM from a Linux native client using the **az network bastion** command. You can install this extension by running `az extension add --name bastion`.
When you connect using this command, file transfers aren't supported. If you want to upload files, connect using the [az network bastion tunnel](#tunnel) command instead.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
GPT-3.5 models can understand and generate natural language or code. The most ca
The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens.
+`gpt-35-turbo` and `gpt-35-turbo-16k` share the same [quota](../how-to/quota.md).
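To make the token limits above concrete, here's a hedged sketch of routing a prompt to one of the two deployments based on a rough size estimate (the ~4 characters-per-token heuristic and the helper names are illustrative, not part of the service):

```javascript
// Illustrative only: route a prompt to gpt-35-turbo or gpt-35-turbo-16k
// based on a crude token estimate (~4 characters per token for English).
const MODEL_LIMITS = {
  "gpt-35-turbo": 4096,
  "gpt-35-turbo-16k": 16384,
};

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function chooseModel(prompt, reservedForCompletion = 512) {
  const needed = estimateTokens(prompt) + reservedForCompletion;
  if (needed <= MODEL_LIMITS["gpt-35-turbo"]) return "gpt-35-turbo";
  if (needed <= MODEL_LIMITS["gpt-35-turbo-16k"]) return "gpt-35-turbo-16k";
  throw new Error("Prompt exceeds the largest available context window.");
}

console.log(chooseModel("Summarize this short note.")); // → gpt-35-turbo
console.log(chooseModel("x".repeat(40000)));            // → gpt-35-turbo-16k
```

For real sizing, use the model's actual tokenizer rather than a character heuristic.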
+
Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).

## Embeddings models
cognitive-services Integrate Synapseml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/integrate-synapseml.md
recommendations: false
# Use Azure OpenAI with large datasets
-Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure Open AI and Azure Synapse Analytics.
+Azure OpenAI can be used to solve a large number of natural language tasks through prompting the completion API. To make it easier to scale your prompting workflows from a few examples to large datasets of examples, we have integrated the Azure OpenAI Service with the distributed machine learning library [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/). This integration makes it easy to use the [Apache Spark](https://spark.apache.org/) distributed computing framework to process millions of prompts with the OpenAI service. This tutorial shows how to apply large language models at a distributed scale using Azure Open AI and Azure Synapse Analytics.
## Prerequisites
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
call.feature(Features.CallSurvey).submitSurvey({
-<!-- ## Find different types of errors
+## Find different types of errors
+
+
### Failures while submitting survey:
-API will return the error messages when data validation failed or unable to submit the survey.
-- At least one survey rating is required.-- In default scale X should be 1 to 5. - where X is either of-- overallRating.score-- audioRating.score-- videoRating.score-- ScreenshareRating.score-- ${propertyName}: ${rating.score} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ;-- ${propertyName}: ${rating.scale?.lowScoreThreshold} should be between ${rating.scale?.lowerBound} and ${rating.scale?.upperBound}. ;-- ${propertyName} lowerBound: ${rating.scale?.lowerBound} and upperBound: ${rating.scale?.upperBound} should be between 0 and 100. ;-- event discarded [ACS failed to submit survey, due to network or other error] -->
+The API will return the following error messages if data validation fails or the survey can't be submitted.
+
+- At least one survey rating is required.
+
+- In the default scale, X should be 1 to 5, where X is one of:
+ - overallRating.score
+ - audioRating.score
+ - videoRating.score
+ - ScreenshareRating.score
+
+- \{propertyName\}: \{rating.score\} should be between \{rating.scale?.lowerBound\} and \{rating.scale?.upperBound\}.
+
+- \{propertyName\}: \{rating.scale?.lowScoreThreshold\} should be between \{rating.scale?.lowerBound\} and \{rating.scale?.upperBound\}.
+
+- \{propertyName\} lowerBound: \{rating.scale?.lowerBound\} and upperBound: \{rating.scale?.upperBound\} should be between 0 and 100.
+
+- Please try again [ACS failed to submit survey due to a network or other error].
+
+### Error codes returned with a message
+
+- Error code 400 (bad request) is returned for all the error messages except one:
+
+```
+{ message: validationErrorMessage, code: 400 }
+```
+
+- Error code 408 (timeout) is returned when the event is discarded:
+
+```
+{ message: "Please try again.", code: 408 }
+```
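The validation rules above can be sketched as a single helper (hypothetical code, not the ACS SDK's actual implementation; the rating property names and the default 1 to 5 scale follow the list above):

```javascript
// Hypothetical sketch of the survey validation rules described above.
// Returns the list of validation error messages (empty when valid).
function validateSurvey(survey) {
  const errors = [];
  const ratings = ["overallRating", "audioRating", "videoRating", "screenshareRating"]
    .map((name) => [name, survey[name]])
    .filter(([, rating]) => rating !== undefined);

  if (ratings.length === 0) {
    errors.push("At least one survey rating is required.");
  }

  for (const [name, rating] of ratings) {
    // Default scale is 1 to 5 when no custom scale is supplied.
    const scale = rating.scale ?? { lowerBound: 1, upperBound: 5 };
    if (scale.lowerBound < 0 || scale.upperBound > 100) {
      errors.push(`${name} lowerBound: ${scale.lowerBound} and upperBound: ${scale.upperBound} should be between 0 and 100.`);
      continue;
    }
    if (rating.score < scale.lowerBound || rating.score > scale.upperBound) {
      errors.push(`${name}: ${rating.score} should be between ${scale.lowerBound} and ${scale.upperBound}.`);
    }
    if (scale.lowScoreThreshold !== undefined &&
        (scale.lowScoreThreshold < scale.lowerBound || scale.lowScoreThreshold > scale.upperBound)) {
      errors.push(`${name}: ${scale.lowScoreThreshold} should be between ${scale.lowerBound} and ${scale.upperBound}.`);
    }
  }
  return errors;
}

console.log(validateSurvey({ overallRating: { score: 4 } })); // valid: no errors
console.log(validateSurvey({ overallRating: { score: 7 } })); // default-scale violation
```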
+ ## All possible values
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
Title: Handle inbound or incoming HTTPS calls
+ Title: Receive inbound or incoming HTTPS calls
description: Receive and respond to HTTPS requests sent to workflows in Azure Logic Apps. ms.suite: integration ms.reviewers: estfan, azla Previously updated : 08/29/2022 Last updated : 07/11/2023 tags: connectors
-# Handle incoming or inbound HTTPS requests sent to workflows in Azure Logic Apps
+# Receive incoming or inbound HTTPS calls or requests to workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-To run your logic app workflow after receiving an HTTPS request from another service, you can start your workflow with the Request built-in trigger. Your workflow can then respond to the HTTPS request by using Response built-in action.
+This how-to guide shows how to run your logic app workflow after receiving an HTTPS call or request from another service by using the Request built-in trigger. When your workflow uses this trigger, you can then respond to the HTTPS request by using the Response built-in action.
-The following list describes some example tasks that your workflow can perform when you use the Request trigger and Response action:
+> [!NOTE]
+>
+> The Response action works only when you use the Request trigger.
+
+For example, this list describes some tasks that your workflow can perform when you use the Request trigger and Response action:
* Receive and respond to an HTTPS request for data in an on-premises database.
-* Receive and respond to an HTTPS request from another logic app workflow.
+* Receive and respond to an HTTPS request sent from another logic app workflow.
* Trigger a workflow run when an external webhook event happens.
To run your workflow by sending an outgoing or outbound request instead, use the
* The logic app workflow where you want to receive the inbound HTTPS request. To start your workflow with a Request trigger, you have to start with a blank workflow. To use the Response action, your workflow must start with the Request trigger.
-If you're new to Azure Logic Apps, see the following documentation:
-
-* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
-
-* [Quickstart: Create a Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md)
-
-* [Create a Standard logic app workflow in single-tenant Azure Logic Apps](../logic-apps/create-single-tenant-workflows-azure-portal.md)
-
<a name="add-request-trigger"></a>

## Add a Request trigger
-The Request trigger creates a manually callable endpoint that can handle *only* inbound requests over HTTPS. When the calling service sends a request to this endpoint, the Request trigger fires and runs the logic app workflow. For information about how to call this trigger, review [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+The Request trigger creates a manually callable endpoint that handles *only* inbound requests over HTTPS. When the caller sends a request to this endpoint, the Request trigger fires and runs the workflow. For information about how to call this trigger, review [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
## [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer.
-1. On the designer, under the search box, select **Built-in**. In the search box, enter **http request**. From the triggers list, select the trigger named **When a HTTP request is received**.
+1. On the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
- ![Screenshot showing Azure portal, Consumption workflow designer, search box with "http request" entered, and "When a HTTP request" trigger selected.](./media/connectors-native-reqres/select-request-trigger-consumption.png)
-
- The HTTP request trigger information box appears on the designer.
-
- ![Screenshot showing Consumption workflow with Request trigger information box.](./media/connectors-native-reqres/request-trigger-consumption.png)
-
-1. In the trigger information box, provide the following values as necessary:
+1. After the trigger information box appears, provide the following information as required:
| Property name | JSON property name | Required | Description |
|---|---|---|---|
| **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
| **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
- |||||
The following example shows a sample JSON schema:
The Request trigger creates a manually callable endpoint that can handle *only*
1. To check that the inbound call has a request body that matches your specified schema, follow these steps:

   1. To enforce that the inbound message has exactly the same fields that your schema describes, in your schema, add the **`required`** property, and specify the required fields. Add the **`additionalProperties`** property, and set the value to **`false`**.
-
+ For example, the following schema specifies that the inbound message must have the **`msg`** field and not any other fields: ```json
The Request trigger creates a manually callable endpoint that can handle *only*
| Property name | JSON property name | Required | Description |
|---|---|---|---|
| **Method** | `method` | No | The method that the incoming request must use to call the logic app |
| **Relative path** | `relativePath` | No | The relative path for the parameter that the logic app's endpoint URL can accept |
- |||||
The following example adds the **Method** property:
The Request trigger creates a manually callable endpoint that can handle *only*
## [Standard](#tab/standard)
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, select **Choose an operation**. On the pane that appears, under the search box, select **Built-in**.
-
-1. In the search box, enter **http request**. From the triggers list, select the trigger named **When a HTTP request is received**.
-
- ![Screenshot showing Azure portal, Standard workflow designer, search box with "http request" entered, and "When a HTTP request" trigger selected.](./media/connectors-native-reqres/select-request-trigger-standard.png)
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and blank workflow in the designer.
- The HTTP request trigger information box appears on the designer.
+1. On the designer, [follow these general steps to find and add the Request built-in trigger named **When a HTTP request is received**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
- ![Screenshot showing Standard workflow with Request trigger information box.](./media/connectors-native-reqres/request-trigger-standard.png)
-
-1. In the trigger information box, provide the following values as necessary:
+1. After the trigger information box appears, provide the following information as required:
| Property name | JSON property name | Required | Description |
|---|---|---|---|
| **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
| **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
- |||||
The following example shows a sample JSON schema:
The Request trigger creates a manually callable endpoint that can handle *only*
1. To check that the inbound call has a request body that matches your specified schema, follow these steps:

   1. To enforce that the inbound message has exactly the same fields that your schema describes, in your schema, add the **`required`** property, and specify the required fields. Add the **`additionalProperties`** property, and set the value to **`false`**.
-
+ For example, the following schema specifies that the inbound message must have the **`msg`** field and not any other fields: ```json
The Request trigger creates a manually callable endpoint that can handle *only*
} ```
- 1. In the Request trigger's title bar, select the ellipses button (**...**).
+ 1. On the designer, select the Request trigger. On the information pane that opens, select the **Settings** tab.
- 1. In the trigger's settings, turn on **Schema Validation**, and select **Done**.
+ 1. Expand **Data Handling**, and set **Schema Validation** to **On**.
If the inbound call's request body doesn't match your schema, the trigger returns an **HTTP 400 Bad Request** error.
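As an illustration of the kind of check that schema validation performs (a hand-rolled sketch, not the actual Azure Logic Apps implementation), enforcing `required` fields and `additionalProperties: false` looks like this:

```javascript
// Hand-rolled sketch: reject bodies that miss required fields or carry
// extra fields when additionalProperties is false; 400 mirrors the
// HTTP 400 Bad Request behavior described above.
function validateBody(schema, body) {
  for (const field of schema.required ?? []) {
    if (!(field in body)) {
      return { status: 400, reason: `Missing required field: ${field}` };
    }
  }
  if (schema.additionalProperties === false) {
    const allowed = new Set(Object.keys(schema.properties ?? {}));
    for (const field of Object.keys(body)) {
      if (!allowed.has(field)) {
        return { status: 400, reason: `Unexpected field: ${field}` };
      }
    }
  }
  return { status: 200, reason: "OK" };
}

const schema = {
  properties: { msg: { type: "string" } },
  required: ["msg"],
  additionalProperties: false,
};

console.log(validateBody(schema, { msg: "hello" }).status);        // → 200
console.log(validateBody(schema, { msg: "hi", extra: 1 }).status); // → 400
```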
-1. To add other properties or parameters to the trigger, open the **Add new parameter** list, and select the parameters that you want to add.
+1. To add other properties or parameters to the trigger, select the **Parameters** tab, open the **Add new parameter** list, and select the parameters that you want to add.
| Property name | JSON property name | Required | Description | ||--|-|-| | **Method** | `method` | No | The method that the incoming request must use to call the logic app | | **Relative path** | `relativePath` | No | The relative path for the parameter that the logic app's endpoint URL can accept |
- |||||
The following example adds the **Method** property:
The following table lists the outputs from the Request trigger:
| JSON property name | Data type | Description |
|---|---|---|
-| `headers` | Object | A JSON object that describes the headers from the request |
-| `body` | Object | A JSON object that describes the body content from the request |
-||||
+| **headers** | Object | A JSON object that describes the headers from the request |
+| **body** | Object | A JSON object that describes the body content from the request |
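In later actions, these trigger outputs are typically referenced with workflow expressions (a hedged sketch; the header name is only an example):

```json
{
  "allHeaders": "@triggerOutputs()['headers']",
  "contentType": "@triggerOutputs()['headers']['Content-Type']",
  "payload": "@triggerBody()"
}
```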
<a name="add-response"></a>
When you use the Request trigger to receive inbound requests, you can model the
> * In a Standard logic app *stateless* workflow, the Response action must appear last in your workflow. If the action appears anywhere else, Azure Logic Apps still won't run the action until all other actions finish running.

-
## [Consumption](#tab/consumption)
-1. On the workflow designer, under the step where you want to add the Response action, select **New step**.
-
- Or, to add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
- The following example adds the Response action after the Request trigger from the preceding section:
-
- ![Screenshot showing Azure portal, Consumption workflow, and "New step" selected.](./media/connectors-native-reqres/add-response-consumption.png)
-
-1. On the designer, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **response**. From the actions list, select the **Response** action.
+1. On the workflow designer, [follow these general steps to find and add the Response built-in action named **Response**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
For simplicity, the following examples show a collapsed Request trigger.
- ![Screenshot showing Azure portal, Consumption workflow, "Choose an operation" search box with "response" entered, and and Response action selected](./media/connectors-native-reqres/select-response-action-consumption.png)
+1. In the action information box, add the required values for the response message.
-1. In the Response action information box, add the required values for the response message.
+ | Property name | JSON property name | Required | Description |
+ ||--|-|-|
+ | **Status Code** | `statusCode` | Yes | The status code to return in the response |
+ | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
+ | **Body** | `body` | No | The response body |
- In some fields, clicking inside their boxes opens the dynamic content list. You can then select tokens that represent available outputs from previous steps in the workflow. Properties from the schema specified in the earlier example now appear in the dynamic content list.
+ When you select inside any text fields, the dynamic content list automatically opens. You can then select tokens that represent any available outputs from previous steps in the workflow. The properties from the schema that you specify also appear in this dynamic content list. You can select these properties to use in your workflow.
- For example, for the **Headers** box, include **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
+ For example, in the **Headers** field, include **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
![Screenshot showing Azure portal, Consumption workflow, and Response action information.](./media/connectors-native-reqres/response-details-consumption.png)
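In the underlying workflow definition, these Response action values map to JSON like the following (a hedged sketch; the `runAfter` contents and body expression depend on your workflow):

```json
"Response": {
  "type": "Response",
  "kind": "Http",
  "inputs": {
    "statusCode": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "body": "@triggerBody()"
  },
  "runAfter": {}
}
```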
When you use the Request trigger to receive inbound requests, you can model the
![Screenshot showing Azure portal, Consumption workflow, and Response action headers in "Switch to text" view.](./media/connectors-native-reqres/switch-to-text-view-consumption.png)
- The following table has more information about the properties that you can set in the Response action.
-
- | Property name | JSON property name | Required | Description |
- ||--|-|-|
- | **Status Code** | `statusCode` | Yes | The status code to return in the response |
- | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
- | **Body** | `body` | No | The response body |
- |||||
-
-1. To add more properties for the action, such as a JSON schema for the response body, open the **Add new parameter** list, and select the parameters that you want to add.
+1. To add more properties for the action, such as a JSON schema for the response body, from the **Add new parameter** list, select the parameters that you want to add.
1. When you're done, save your workflow. On the designer toolbar, select **Save**.

## [Standard](#tab/standard)
-1. On the workflow designer, under the step where you want to add the Response action, select plus sign (**+**), and then select **Add new action**.
-
- Or, to add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
- The following example adds the Response action after the Request trigger from the preceding section:
-
- ![Screenshot showing Azure portal, Standard workflow, and "Add an action" selected.](./media/connectors-native-reqres/add-response-standard.png)
+1. On the workflow designer, [follow these general steps to find and add the Response built-in action named **Response**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. On the designer, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **response**. From the actions list, select the **Response** action.
+1. In the action information box, add the required values for the response message:
- ![Screenshot showing Azure portal, Standard workflow, "Choose an operation" search box with "response" entered, and and Response action selected](./media/connectors-native-reqres/select-response-action-standard.png)
-
-1. In the Response action information box, add the required values for the response message.
+ | Property name | JSON property name | Required | Description |
+ ||--|-|-|
+ | **Status Code** | `statusCode` | Yes | The status code to return in the response |
+ | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
+ | **Body** | `body` | No | The response body |
- In some fields, clicking inside their boxes opens the dynamic content list. You can then select tokens that represent available outputs from previous steps in the workflow. Properties from the schema specified in the earlier example now appear in the dynamic content list.
+ When you select inside any text fields, you get the option to open the dynamic content list (lightning icon). You can then select tokens that represent any available outputs from previous steps in the workflow. The properties from the schema that you specify also appear in this dynamic content list. You can select these properties to use in your workflow.
- For example, for the **Headers** box, include **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
+ For example, for the **Headers** box, enter **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
![Screenshot showing Azure portal, Standard workflow, and Response action information.](./media/connectors-native-reqres/response-details-standard.png)
When you use the Request trigger to receive inbound requests, you can model the
![Screenshot showing Azure portal, Standard workflow, and Response action headers in "Switch to text" view.](./media/connectors-native-reqres/switch-to-text-view-standard.png)
- The following table has more information about the properties that you can set in the Response action.
-
- | Property name | JSON property name | Required | Description |
- ||--|-|-|
- | **Status Code** | `statusCode` | Yes | The status code to return in the response |
- | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
- | **Body** | `body` | No | The response body |
- |||||
-
1. To add more properties for the action, such as a JSON schema for the response body, open the **Add new parameter** list, and select the parameters that you want to add.

1. When you're done, save your workflow. On the designer toolbar, select **Save**.
container-instances Container Instances Tutorial Deploy Confidential Container Default Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-container-default-portal.md
Open the overview for the container group by navigating to **Resource Groups** >
2. Once its status is *Running*, navigate to the IP address in your browser.
+ :::image type="content" source="media/container-instances-confidential-containers-tutorials/confidential-containers-aci-hello-world.png" alt-text="Screenshot of the hello world application running, PNG.":::
+
+ The presence of the attestation report below the Azure Container Instances logo confirms that the container is running on hardware that supports a hardware-based and attested trusted execution environment (TEE).
+ If you deploy to hardware that does not support a TEE, for example by choosing a region where the [ACI Confidential SKU is not available](./container-instances-region-availability.md#linux-container-groups), no attestation report will be shown.
Congratulations! You have deployed a confidential container on Azure Container Instances that displays a hardware attestation report in your browser.
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
With the ARM template that you've crafted and the Azure CLI confcom extension, y
* **Subscription**: select an Azure subscription. * **Resource group**: select **Create new**, enter a unique name for the resource group, and then select **OK**.
- * **Location**: select a location for the resource group. Example: **North Europe**.
+ * **Location**: select a location for the resource group. Choose a region where the [Confidential SKU is supported](./container-instances-region-availability.md#linux-container-groups). Example: **North Europe**.
* **Name**: accept the generated name for the instance, or enter a name. * **Image**: accept the default image name. This sample Linux image displays a hardware attestation.
Use the Azure portal or a tool such as the [Azure CLI](container-instances-quick
![Screenshot of browser view of app deployed using Azure Container Instances, PNG.](media/container-instances-confidential-containers-tutorials/confidential-containers-aci-hello-world.png)
+ The presence of the attestation report below the Azure Container Instances logo confirms that the container is running on hardware that supports a TEE.
+ If you deploy to hardware that does not support a TEE, for example by choosing a region where the ACI Confidential SKU is not available, no attestation report will be shown.
+
## Next Steps

Now that you have deployed a confidential container group on ACI, you can learn more about how policies are enforced.
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
PaymentEvent item = new PaymentEvent()
PartitionKey partitionKey = new PartitionKeyBuilder()
    .Add(item.TenantId)
    .Add(item.UserId)
+ .Add(item.SessionId)
    .Build();

// Create the item in the container
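The three partition key levels used above correspond to a container definition like the following (a hedged sketch; the container name and property paths are illustrative and must match your item model, while `kind: MultiHash` with `version: 2` is what enables hierarchical partition keys):

```json
{
  "id": "payments",
  "partitionKey": {
    "paths": ["/TenantId", "/UserId", "/SessionId"],
    "kind": "MultiHash",
    "version": 2
  }
}
```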
cosmos-db Pre Migration Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/pre-migration-steps.md
The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/ext
> [!NOTE]
-> Azure Cosmos DB Migration for MongoDB extension does not perform an end-to-end assessment. We recommend you to go through [the supported features and syntax](./feature-support-42.md), [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, as well as perform a proof-of-concept prior to the actual migration.
+> We recommend that you review [the supported features and syntax](./feature-support-42.md) and [Azure Cosmos DB limits and quotas](../concepts-limits.md#per-account-limits) in detail, and perform a proof of concept prior to the actual migration.
### Manual discovery (legacy)
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 07/06/2023 Last updated : 07/09/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.

+### July 2023
+* General availability: 99.99% monthly availability [Service Level Agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
+
+### June 2023
-* General availability: Customer defined database name is now available in [all regions](./resources-regions.md) at [cluster provisioning](./quickstart-create-portal.md) time.
+* General availability: Customer-defined database name is now available in [all regions](./resources-regions.md) at [cluster provisioning](./quickstart-create-portal.md) time.
  * If the database name is not specified, the default `citus` name is used.
* General availability: [Managed PgBouncer settings](./reference-parameters.md#managed-pgbouncer-parameters) are now configurable on all clusters.
  * Learn more about [connection pooling](./concepts-connection-pool.md).
cosmos-db Throughput Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/throughput-serverless.md
Azure Cosmos DB is available in two different capacity modes: [provisioned throu
| Best suited for | Workloads with sustained traffic requiring predictable performance | Workloads with intermittent or unpredictable traffic and low average-to-peak traffic ratio |
| How it works | For each of your containers, you configure some amount of provisioned throughput expressed in [Request Units (RUs)](request-units.md) per second. Every second, this quantity of Request Units is available for your database operations. Provisioned throughput can be updated manually or adjusted automatically with [autoscale](provision-throughput-autoscale.md). | You run your database operations against your containers without having to configure any previously provisioned capacity. |
| Geo-distribution | Available (unlimited number of Azure regions) | Unavailable (serverless accounts can only run in a single Azure region) |
-| Maximum storage per container | Unlimited | 50 GB<sup>1</sup> |
+| Maximum storage per container | Unlimited | 1 TB<sup>1</sup> |
| Performance | < 10-ms latency for point-reads and writes covered by SLA | < 10-ms latency for point-reads and < 30 ms for writes covered by SLO |
| Billing model | Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs were consumed. | Billing is done on a per-hour basis for the number of RUs consumed by your database operations. |
-<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
+<sup>1</sup> Serverless containers of up to 1 TB are now generally available. The maximum RU/s available depends on the data stored in the container. See [Serverless performance](serverless-performance.md).
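The billing distinction in the table can be made concrete with a small sketch. The unit prices below are hypothetical placeholders, not real Azure rates (actual prices vary by region; check the pricing page):

```python
# Sketch: compare the provisioned vs. serverless hourly billing models.
# Unit prices are HYPOTHETICAL placeholders, not real Azure rates.
PRICE_PER_100_RUS_PER_HOUR = 0.008   # provisioned: billed per 100 RU/s provisioned, per hour
PRICE_PER_MILLION_RUS = 0.25         # serverless: billed per million RUs actually consumed

def provisioned_hourly_cost(provisioned_ru_s: float) -> float:
    """Provisioned throughput bills for reserved capacity, regardless of consumption."""
    return (provisioned_ru_s / 100) * PRICE_PER_100_RUS_PER_HOUR

def serverless_hourly_cost(rus_consumed: float) -> float:
    """Serverless bills only for the Request Units your operations consumed."""
    return (rus_consumed / 1_000_000) * PRICE_PER_MILLION_RUS

# An idle hour: provisioned still pays for 400 RU/s of capacity; serverless pays nothing.
print(provisioned_hourly_cost(400))   # 0.032
print(serverless_hourly_cost(0))      # 0.0
```

This is why serverless suits intermittent, low average-to-peak traffic: idle hours cost nothing, while sustained traffic is usually cheaper on provisioned throughput.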
## Estimating your expected consumption
databox-online Azure Stack Edge Migrate Fpga Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
Title: Migration guide for Azure Stack Edge Pro FPGA to GPU physical device
-description: This guide contains instructions to migrate workloads from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device.
+ Title: Migration guide for Azure Stack Edge Pro FPGA to Pro 2 physical device
+description: This guide contains instructions to migrate workloads from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro 2 or Azure Stack Edge GPU device.
Previously updated : 06/02/2021 Last updated : 07/07/2023
-# Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro GPU
+# Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro 2 or Azure Stack Edge GPU device
-This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. The migration process begins with a comparison of the two devices, a migration plan, and a review of migration considerations. The migration procedure gives detailed steps ending with verification and device cleanup.
+This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro 2 or Pro GPU device. The migration process begins with selection of a new device, a migration plan, and a review of migration considerations. The migration procedure gives detailed steps ending with verification and device cleanup.
[!INCLUDE [Azure Stack Edge Pro FPGA end-of-life](../../includes/azure-stack-edge-fpga-eol.md)]
This article describes how to migrate workloads and data from an Azure Stack Edg
Migration is the process of moving workloads and application data from one storage location to another. This entails making an exact copy of an organization's current data from one storage device to another storage device - preferably without disrupting or disabling active applications - and then redirecting all input/output (I/O) activity to the new device.
-This migration guide provides a step-by-step walkthrough of the steps required to migrate data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. This document is intended for information technology (IT) professionals and knowledge workers who are responsible for operating, deploying, and managing Azure Stack Edge devices in the datacenter.
+This migration guide provides a step-by-step walkthrough of the steps required to migrate data from an Azure Stack Edge Pro FPGA device to an alternate Azure Stack Edge device. This document is intended for information technology (IT) professionals and knowledge workers who are responsible for operating, deploying, and managing Azure Stack Edge devices in the datacenter.
-In this article, the Azure Stack Edge Pro FPGA device is referred to as the *source* device and the Azure Stack Edge Pro GPU device is the *target* device.
+In this article, the Azure Stack Edge Pro FPGA device is referred to as the *source* device and the alternate Azure Stack Edge device is the *target* device.
## Comparison summary
-This section provides a comparative summary of capabilities between the Azure Stack Edge Pro GPU vs. the Azure Stack Edge Pro FPGA devices. The hardware in both the source and the target device is largely identical; only the hardware acceleration card and the storage capacity may differ.<!--Please verify: These components MAY, but need not necessarily, differ?-->
+This section provides a comparative summary of capabilities between the Azure Stack Edge Pro FPGA device and Azure Stack Edge Pro 2 and Azure Stack Edge Pro GPU devices. Azure Stack Edge Pro 2 devices are offered in a range of options to meet a variety of cost and functionality needs.
+<!--Please verify: These components MAY, but need not necessarily, differ?-->
+
+### [Migrate to Azure Stack Edge Pro 2](#tab/migrate-to-ase-pro2)
+
+| Capability | Azure Stack Edge Pro 2 (Target device) | Azure Stack Edge Pro FPGA (Source device)|
+|-|--||
+| Hardware | Hardware acceleration: 1 or 2 Nvidia A2 GPUs. <br> 64 GB, 128 GB, or 256 GB of memory. <br> 2x 10 Gbps BASE-T iWARP RDMA-capable network ports. <br> 2x optical 100 Gbps RoCE RDMA-capable network ports. <br> Power supply units - 1. <br> For more information, see [Azure Stack Edge Pro 2 technical specifications](azure-stack-edge-pro-2-technical-specifications-compliance.md). | Hardware acceleration: Intel Arria 10 FPGA. <br> 128 GB of memory. <br> 2x copper 1 Gbps network ports. <br> 4x optical 25 Gbps RDMA-capable network ports. <br> Power supply units - 2. <br> For more information, see [Azure Stack Edge Pro FPGA technical specifications](azure-stack-edge-technical-specifications-compliance.md). |
+| Usable storage | 720 GB - 2.5 TB <br> After reserving space for resiliency and internal use. | 12.5 TB <br> After reserving space for internal use. |
+| Security | Certificates | |
+| Workloads | IoT Edge workloads <br> VM workloads <br> Kubernetes workloads| IoT Edge workloads |
+| Pricing | [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/). | [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/). |
++
+### [Migrate to Azure Stack Edge Pro GPU](#tab/migrate-to-ase-pro-gpu)
| Capability | Azure Stack Edge Pro GPU (Target device) | Azure Stack Edge Pro FPGA (Source device)|
|-|--||
-| Hardware | Hardware acceleration: 1 or 2 Nvidia T4 GPUs <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with FPGA. | Hardware acceleration: Intel Arria 10 FPGA <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with GPU. |
-| Usable storage | 4.19 TB <br> After reserving space for parity resiliency and internal use | 12.5 TB <br> After reserving space for internal use |
+| Hardware | Hardware acceleration: 1 or 2 Nvidia T4 GPUs. <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with FPGA. <br> For more information, see [Azure Stack Edge Pro GPU technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md). | Hardware acceleration: Intel Arria 10 FPGA. <br> Compute, memory, network interface, power supply unit, and power cord specifications are identical to the device with GPU. <br> For more information, see [Azure Stack Edge Pro FPGA technical specifications](azure-stack-edge-technical-specifications-compliance.md). |
+| Usable storage | 4.19 TB <br> After reserving space for parity resiliency and internal use. | 12.5 TB <br> After reserving space for internal use. |
| Security | Certificates | |
| Workloads | IoT Edge workloads <br> VM workloads <br> Kubernetes workloads| IoT Edge workloads |
-| Pricing | [Pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/) | [Pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/)|
+| Pricing | [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/). | [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/). |
++
## Migration plan
To create your migration plan, consider the following information:
- Develop a schedule for migration.
-- When you migrate data, you may experience a downtime. We recommend that you schedule migration during a downtime maintenance window as the process is disruptive. You will set up and restore configurations in this downtime as described later in this document.
-- Understand the total length of downtime and communicate it to all the stakeholders.
-- Identify the local data that needs to be migrated from the source device. As a precaution, ensure that all the data on the existing storage has a recent backup.
-
+- When you migrate data, you may experience downtime. We recommend that you schedule migration during a downtime maintenance window as the process is disruptive. You will set up and restore configurations in this downtime as described later in this document.
+- Understand the total duration of downtime and communicate it to all stakeholders.
+- Identify local data to be migrated from the source device. As a precaution, ensure that all data on the existing storage has a recent backup.
## Migration considerations
Before you proceed with the migration, consider the following information:
-- An Azure Stack Edge Pro GPU device can't be activated against an Azure Stack Edge Pro FPGA resource. You should create a new resource for the Azure Stack Edge Pro GPU device as described in [Create an Azure Stack Edge Pro GPU order](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
-- The Machine Learning models deployed on the source device that used the FPGA will need to be changed for the target device with GPU. For help with the models, you can contact Microsoft Support. The custom models deployed on the source device that did not use the FPGA (used CPU only) should work as-is on the target device (using CPU).
+- An Azure Stack Edge device can't be activated against an Azure Stack Edge Pro FPGA resource. Instead, create a new resource for the target Azure Stack Edge device as described in [Create an Azure Stack Edge Pro GPU order](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
+- The Machine Learning models deployed on the source device that used the FPGA will need to be changed for the target device. For help with the models, you can contact Microsoft Support. The custom models deployed on the source device that did not use the FPGA (used CPU only) should work as-is on the target device (using CPU).
- The IoT Edge modules deployed on the source device may require changes before the modules can be successfully deployed on the target device.
- The source device supports NFS 3.0 and 4.1 protocols. The target device only supports the NFS 3.0 protocol.
- The source device supports SMB and NFS protocols. The target device supports storage via the REST protocol using storage accounts in addition to the SMB and NFS protocols for shares.
Before you proceed with the migration, consider the following information:
## Migration steps at-a-glance
-This table summarizes the overall flow for migration, describing the steps required for migration and the location where to take these steps.
+The following table summarizes the overall flow for migration, describing the steps required for migration and the location where the steps take place.
| In this phase | Do this step| On this device |
||-|-|
This table summarizes the overall flow for migration, describing the steps requi
## Prepare source device
-The preparation includes that you identify the Edge cloud shares, Edge local shares, and the IoT Edge modules deployed on the device.
+To prepare, identify the Edge cloud shares, Edge local shares, and the IoT Edge modules deployed on the device.
+
+### [Migrate to Azure Stack Edge Pro 2](#tab/migrate-to-ase-pro2)
+
+### 1. Record configuration data
+
+Do these steps on your source device via the local UI.
+
+Record the configuration data on the *source* device. Use the [Deployment checklist](azure-stack-edge-pro-2-deploy-checklist.md) to help you record the device configuration. During migration, you'll use this configuration information to configure the new target device.
+
+### 2. Back up share data
+
+The device data can be one of the following types:
+
+- Data in Edge cloud shares
+- Data in local shares
+
+#### Data in Edge cloud shares
+
+Edge cloud shares tier data from your device to Azure. Do these steps on your *source* device via the Azure portal.
+
+- Make a list of all the Edge cloud shares and users that you have on the source device.
+- Make a list of all the bandwidth schedules that you have. You will recreate these bandwidth schedules on your target device.
+- Depending on the network bandwidth available, configure bandwidth schedules on your device to maximize the data tiered to the cloud. That minimizes the local data on the device.
+- Ensure that the shares are fully tiered to the cloud. The tiering can be confirmed by checking the share status in the Azure portal.
+
+#### Data in Edge local shares
+
+Data in Edge local shares stays on the device. Do these steps on your *source* device via the Azure portal.
+
+- Make a list of the Edge local shares on the device.
+- Since you'll be doing a one-time migration of the data, create a copy of the Edge local share data to another on-premises server. You can use copy tools such as `robocopy` (SMB) or `rsync` (NFS) to copy the data. Optionally you may have already deployed a third-party data protection solution to back up the data in your local shares. The following third-party solutions are supported for use with Azure Stack Edge Pro FPGA devices:
+
+ | Third-party software | Reference to the solution |
+ |--||
+ | Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
+ | Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
+ | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
++
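The one-time copy of local share data can be sketched in a few lines. In practice you would point `robocopy` (SMB) or `rsync` (NFS) at the mounted share; this minimal stand-in uses Python's standard library, and the paths shown are hypothetical placeholders:

```python
# Sketch: one-time backup of Edge local share data to an on-premises server,
# as a stand-in for robocopy (SMB) or rsync (NFS). Paths are hypothetical.
import shutil
from pathlib import Path

def backup_local_share(share_mount: str, backup_root: str) -> Path:
    """Copy the mounted share's contents into a per-share folder under backup_root."""
    src = Path(share_mount)
    dest = Path(backup_root) / src.name
    # dirs_exist_ok makes the copy re-runnable; existing files are overwritten.
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest

# Example (hypothetical mount points):
# backup_local_share("/mnt/edge-local-share", "/srv/migration-backup")
```

Because this is a one-time migration, a simple full copy suffices; for large shares, `robocopy /MIR` or `rsync -a` would additionally preserve attributes and resume interrupted transfers.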
+### 3. Prepare IoT Edge workloads
+
+- If you have deployed IoT Edge modules and are using FPGA acceleration, you may need to modify the modules before they can run on the GPU device. Follow the instructions in [Modify IoT Edge modules](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).
+
+<!-- If you have deployed IoT Edge workloads, the configuration data is shared on a share on the device. Back up the data in these shares.-->
+
+### [Migrate to Azure Stack Edge Pro GPU](#tab/migrate-to-ase-pro-gpu)
### 1. Record configuration data
Data in Edge local shares stays on the device. Do these steps on your *source* d
<!-- If you have deployed IoT Edge workloads, the configuration data is shared on a share on the device. Back up the data in these shares.-->
+
## Prepare target device
+ Use the following steps to prepare the target device.
+
+### [Migrate to Azure Stack Edge Pro 2](#tab/migrate-to-ase-pro2)
+
+### 1. Create new order
+
+You need to create a new order (and a new resource) for your *target* device. The target device must be activated against the new resource and not against the FPGA resource.
+
+To place an order, [Create a new Azure Stack Edge resource](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource) in the Azure portal.
++
+### 2. Set up, activate
+
+You need to set up and activate the *target* device against the new resource you created earlier.
+
+Follow these steps to configure the *target* device via the Azure portal:
+
+1. Gather the information required in the [Deployment checklist](azure-stack-edge-pro-2-deploy-checklist.md). You can use the information that you saved from the source device configuration.
+1. [Unpack](azure-stack-edge-pro-2-deploy-install.md#unpack-the-device), [rack mount](azure-stack-edge-pro-2-deploy-install.md#rack-mount-the-device) and [cable your device](azure-stack-edge-pro-2-deploy-install.md#cable-the-device).
+1. [Connect to the local UI of the device](azure-stack-edge-pro-2-deploy-connect.md).
+1. Configure the network using a different set of IP addresses (if using static IPs) than the ones that you used for your old device. See how to [configure network settings](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md).
+1. Assign the same device name as your old device and provide a DNS domain. See how to [configure device setting](azure-stack-edge-pro-2-deploy-set-up-device-update-time.md).
+1. Configure certificates on the new device. See how to [configure certificates](azure-stack-edge-pro-2-deploy-configure-certificates.md).
+1. Get the activation key from the Azure portal and activate the new device. See how to [activate the device](azure-stack-edge-pro-2-deploy-activate.md).
+
+You are now ready to restore the share data and deploy the workloads that you were running on the old device.
+
+### [Migrate to Azure Stack Edge Pro GPU](#tab/migrate-to-ase-pro-gpu)
+
### 1. Create new order
You need to create a new order (and a new resource) for your *target* device. The target device must be activated against the GPU resource and not against the FPGA resource.
Follow these steps to configure the *target* device via the Azure portal:
You are now ready to restore the share data and deploy the workloads that you were running on the old device.
++
## Migrate data
You will now copy data from the source device to the Edge cloud shares and Edge local shares on your *target* device.
After migration, verify that all the data has migrated and the workloads have be
## Erase data, return
-After the data migration is complete, erase local data and return the source device. Follow the steps in [Return your Azure Stack Edge Pro device](azure-stack-edge-return-device.md).
-
+After the data migration is complete, erase local data and return the source device. Follow the steps in [Return your Azure Stack Edge Pro FPGA device](azure-stack-edge-return-device.md).
## Next steps
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 03/27/2023 Last updated : 07/10/2023 # Identify and remediate attack paths
You can use Attack path analysis to locate the biggest risks to your environmen
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack path**.
+1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
- :::image type="content" source="media/how-to-manage-attack-path/attack-path-icon.png" alt-text="Screenshot that shows where the icon is on the recommendations page to get to attack paths." lightbox="media/how-to-manage-attack-path/attack-path-icon.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/attack-path-blade.png" alt-text="Screenshot that shows the attack path analysis blade on the main screen." lightbox="media/how-to-manage-attack-path/attack-path-blade.png":::
1. Select an attack path.
Attack path analysis also gives you the ability to see all recommendations by at
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack paths**.
+1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
1. Select an attack path.
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Previously updated : 03/08/2022 Last updated : 07/11/2023 # Protect your Kubernetes data plane hardening
Last updated 03/08/2022
This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes data plane hardening.

> [!TIP]
-> For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) of the recommendations reference table.
+> For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) section of the recommendations reference table.
## Set up your workload protection
-Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **Azure Policy add-on for Kubernetes or extensions**.
+Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **Azure Policy add-on/extension for Kubernetes**.
## Prerequisites
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
## Enable Kubernetes data plane hardening
-When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting, you can re-enable it later. Either in the Defender for Containers plan settings, or with  Azure Policy.
+You can enable Azure Policy for Kubernetes in one of two ways:
+- Enable for all current and future clusters using plan/connector settings
+ - [Enabling for Azure subscriptions or on-premises](#enabling-for-azure-subscriptions-or-on-premises)
+ - [Enabling for GCP projects](#enabling-for-gcp-projects)
+- [Enable for existing clusters using recommendations (specific clusters or all clusters)](#manually-deploy-the-add-on-to-clusters-using-recommendations-on-specific-clusters).
-When you enable this setting, the Azure Policy for Kubernetes pods are installed on the cluster. This allocates a small amount of CPU and memory for the pods to use. This allocation might reach maximum capacity, but it doesn't affect the rest of the CPU and memory on the resource.
+### Enable for all current and future clusters using plan/connector settings
-To enable Azure Kubernetes Service clusters and Azure Arc enabled Kubernetes clusters (Preview):
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+> [!NOTE]
+> When you enable this setting, the Azure Policy for Kubernetes pods are installed on the cluster. Doing so allocates a small amount of CPU and memory for the pods to use. This allocation might reach maximum capacity, but it doesn't affect the rest of the CPU and memory on the resource.
-1. Select the relevant subscription.
-
-1. On the Defender plans page, ensure that Containers is toggled to **On**.
+> [!NOTE]
+> Enablement for AWS via the connector is not supported due to a limitation in EKS that requires the cluster admin to add permissions for a new IAM role on the cluster itself.
-1. Select **Configure**.
-
- :::image type="content" source="media/kubernetes-workload-protections/configure-containers.png" alt-text="Screenshot showing where on the defenders plan to go to to select the configure button.":::
-
-1. On the Advanced configuration page, toggle each relevant component to **On**.
-
- :::image type="content" source="media/kubernetes-workload-protections/advanced-configuration.png" alt-text="Screenshot showing the toggles used to enable or disable them.":::
-
-1. Select **Save**.
+#### Enabling for Azure subscriptions or on-premises
-## Configure Defender for Containers components
+When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for Azure Kubernetes Service and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting during initial configuration, you can enable it manually afterward.
-If you disabled any of the default protections when you enabled Microsoft Defender for Containers, you can change the configurations and reenable them.
-
-**To configure the Defender for Containers components**:
+If you disabled the "Azure Policy for Kubernetes" settings under the containers plan, you can follow the below steps to enable it across all clusters in your subscription:
1. Sign in to the [Azure portal](https://portal.azure.com).
If you disabled any of the default protections when you enabled Microsoft Defend
1. Select the relevant subscription.
-1. In the Monitoring coverage column of the Defender for Containers plan, select **Settings**.
+1. On the Defender plans page, ensure that Containers is toggled to **On**.
+
+1. Select **Settings**.
+
+ :::image type="content" source="media/kubernetes-workload-protections/containers-settings.png" alt-text="Screenshot showing the settings button in the Defender plan." lightbox="media/kubernetes-workload-protections/containers-settings.png":::
+
+1. In the Settings & Monitoring page, toggle the "Azure Policy for Kubernetes" to **On**.
-1. Ensure that Microsoft Defenders for Containers components (preview) is toggled to On.
+ :::image type="content" source="media/kubernetes-workload-protections/toggle-on-extensions.png" alt-text="Screenshot showing the toggles used to enable or disable the extensions." lightbox="media/kubernetes-workload-protections/toggle-on-extensions.png":::
- :::image type="content" source="media/kubernetes-workload-protections/toggled-on.png" alt-text="Screenshot showing that Microsoft Defender for Containers is toggled to on.":::
+#### Enabling for GCP projects
-1. Select **Edit configuration**.
+When you enable Microsoft Defender for Containers on a GCP connector, the "Azure Policy Extension for Azure Arc" setting is enabled by default for the Google Kubernetes Engine in the relevant project. If you disable the setting during initial configuration, you can enable it manually afterward.
- :::image type="content" source="media/kubernetes-workload-protections/edit-configuration.png" alt-text="Screenshot showing the edit configuration button.":::
+If you disabled the "Azure Policy Extension for Azure Arc" settings under the GCP connector, you can follow the below steps to [enable it on your GCP connector](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-gke#protect-google-kubernetes-engine-gke-clusters).
-1. On the Advanced configuration page, toggle each relevant component to **On**.
+### Manually deploy the add-on to clusters using recommendations on specific clusters
- :::image type="content" source="media/kubernetes-workload-protections/toggles.png" alt-text="Screenshot showing each option and the toggles to enable or disable them.":::
+You can manually configure the Kubernetes data plane hardening add-on or extension on specific clusters through the Recommendations page, using the following recommendations:
-1. Select **Confirm**.
+- **Azure Recommendations** - `"Azure Policy add-on for Kubernetes should be installed and enabled on your clusters"`, or `"Azure policy extension for Kubernetes should be installed and enabled on your clusters"`.
+- **GCP Recommendation** - `"GKE clusters should have Microsoft Defender's extension for Azure Arc installed"`.
+- **AWS Recommendation** - `"EKS clusters should have Microsoft Defender's extension for Azure Arc installed"`.
-## Deploy the add-on to specified clusters
+Once enabled, the hardening recommendations become available (some of the recommendations require additional configuration to work).
-You can manually configure the Kubernetes data plane hardening add-on, or extension protection through the Recommendations page. This can be accomplished by remediating the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
+> [!NOTE]
+> For AWS it isn't possible to do onboarding at scale using the connector, but it can be installed on all clusters or specific clusters using the recommendation ["EKS clusters should have Microsoft Defender's extension for Azure Arc installed"](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/38307993-84fb-4636-8ce7-3a64466bb5cc).
-**To Deploy the add-on to specified clusters**:
-1. From the recommendations page, search for the recommendation `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters`, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
+**To deploy the add-on to specified clusters**:
- :::image type="content" source="./media/defender-for-kubernetes-usage/recommendation-to-install-policy-add-on-for-kubernetes.png" alt-text="Recommendation **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters**.":::
+1. From the recommendations page, search for the relevant recommendation:
+ - **Azure** - `Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed` or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`
+ - **AWS** - `EKS clusters should have Microsoft Defender's extension for Azure Arc installed`
+ - **GCP** - `GKE clusters should have Microsoft Defender's extension for Azure Arc installed`
+
+ :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png" alt-text="Screenshot showing the Azure Kubernetes service clusters recommendation." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png":::
> [!TIP]
> The recommendation is included in five different security controls and it doesn't matter which one you select in the next step.

1. From any of the security controls, select the recommendation to see the resources on which you can install the add-on.
-1. Select the relevant cluster, and **Remediate**.
+1. Select the relevant cluster, and select **Remediate**.
- :::image type="content" source="./media/defender-for-kubernetes-usage/recommendation-to-install-policy-add-on-for-kubernetes-details.png" alt-text="Recommendation details page for Azure Policy add-on for Kubernetes should be installed and enabled on your clusters.":::
+ :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation-remediation.png" alt-text="Screenshot that shows how to select the cluster to remediate." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation-remediation.png":::
## View and configure the bundle of recommendations
-1. Approximately 30 minutes after the add-on installation completes, Defender for Cloud shows the clusters' health status for the following recommendations, each in the relevant security control as shown:
-
- > [!NOTE]
- > If you're installing the add-on for the first time, these recommendations will appear as new additions in the list of recommendations.
-
- > [!TIP]
- > Some recommendations have parameters that must be customized via Azure Policy to use them effectively. For example, to benefit from the recommendation **Container images should be deployed only from trusted registries**, you'll have to define your trusted registries.
- >
- > If you don't enter the necessary parameters for the recommendations that require configuration, your workloads will be shown as unhealthy.
-
- | Recommendation name | Security control | Configuration required |
- |--|||
- | Container CPU and memory limits should be enforced | Protect applications against DDoS attack | **Yes** |
- | Container images should be deployed only from trusted registries | Remediate vulnerabilities | **Yes** |
- | Least privileged Linux capabilities should be enforced for containers | Manage access and permissions | **Yes** |
- | Containers should only use allowed AppArmor profiles | Remediate security configurations | **Yes** |
- | Services should listen on allowed ports only | Restrict unauthorized network access | **Yes** |
- | Usage of host networking and ports should be restricted | Restrict unauthorized network access | **Yes** |
- | Usage of pod HostPath volume mounts should be restricted to a known list | Manage access and permissions | **Yes** |
- | Container with privilege escalation should be avoided | Manage access and permissions | No |
- | Containers sharing sensitive host namespaces should be avoided | Manage access and permissions | No |
- | Immutable (read-only) root filesystem should be enforced for containers | Manage access and permissions | No |
- | Kubernetes clusters should be accessible only over HTTPS | Encrypt data in transit | No |
- | Kubernetes clusters should disable automounting API credentials | Manage access and permissions | No |
- | Kubernetes clusters should not use the default namespace | Implement security best practices | No |
- | Kubernetes clusters should not grant CAPSYSADMIN security capabilities | Manage access and permissions | No |
- | Privileged containers should be avoided | Manage access and permissions | No |
- | Running containers as root user should be avoided | Manage access and permissions | No |
--
-For recommendations with parameters that need to be customized, you will need to set the parameters:
+Approximately 30 minutes after the add-on installation completes, Defender for Cloud shows the clusters' health status for the following recommendations, each in the relevant security control as shown:
+
+> [!NOTE]
+> If you're installing the add-on/extension for the first time, these recommendations will appear as new additions in the list of recommendations.
+
+> [!TIP]
+> Some recommendations have parameters that must be customized via Azure Policy to use them effectively. For example, to benefit from the recommendation **Container images should be deployed only from trusted registries**, you'll have to define your trusted registries. If you don't enter the necessary parameters for the recommendations that require configuration, your workloads will be shown as unhealthy.
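+
+For example, the trusted-registries recommendation in the tip above takes a regular expression that allowed image references must match. A parameters fragment along these lines could be saved through the recommendation's **Parameters** tab; the parameter name `allowedContainerImagesRegex` and the registry name are illustrative, not authoritative, so check the policy definition for the exact parameter:
+
+```json
+{
+  "allowedContainerImagesRegex": {
+    "value": "^contosoregistry\\.azurecr\\.io/.+$"
+  }
+}
+```
+
+Any image whose full reference doesn't match the expression is then reported as unhealthy (or blocked, if you change the effect to **Deny**).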
+
+| Recommendation name | Security Control | Configuration required |
+||--||
+| Container CPU and memory limits should be enforced | Protect applications against DDoS attack | **Yes** |
+| Container images should be deployed only from trusted registries | Remediate vulnerabilities | **Yes** |
+| Least privileged Linux capabilities should be enforced for containers | Manage access and permissions | **Yes** |
+| Containers should only use allowed AppArmor profiles | Remediate security configurations | **Yes** |
+| Services should listen on allowed ports only | Restrict unauthorized network access | **Yes** |
+| Usage of host networking and ports should be restricted | Restrict unauthorized network access | **Yes** |
+| Usage of pod HostPath volume mounts should be restricted to a known list | Manage access and permissions | **Yes** |
+| Container with privilege escalation should be avoided | Manage access and permissions | No |
+| Containers sharing sensitive host namespaces should be avoided | Manage access and permissions | No |
+| Immutable (read-only) root filesystem should be enforced for containers | Manage access and permissions | No |
+| Kubernetes clusters should be accessible only over HTTPS | Encrypt data in transit | No |
+| Kubernetes clusters should disable automounting API credentials | Manage access and permissions | No |
+| Kubernetes clusters should not use the default namespace | Implement security best practices | No |
+| Kubernetes clusters should not grant CAPSYSADMIN security capabilities | Manage access and permissions | No |
+| Privileged containers should be avoided | Manage access and permissions | No |
+| Running containers as root user should be avoided | Manage access and permissions | No |
++
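
As a rough illustration of how you might track which of these recommendations still need parameter configuration, here's a small Python sketch over the table above; the data is transcribed from a subset of the table, and nothing here calls an Azure API:

```python
# Subset of the recommendations table above: (name, security control, needs parameters).
RECOMMENDATIONS = [
    ("Container CPU and memory limits should be enforced", "Protect applications against DDoS attack", True),
    ("Container images should be deployed only from trusted registries", "Remediate vulnerabilities", True),
    ("Services should listen on allowed ports only", "Restrict unauthorized network access", True),
    ("Privileged containers should be avoided", "Manage access and permissions", False),
    ("Running containers as root user should be avoided", "Manage access and permissions", False),
]

def needs_configuration(recs):
    """Return the recommendations that show workloads as unhealthy until parameters are set."""
    return [name for name, _control, needs_params in recs if needs_params]

print(needs_configuration(RECOMMENDATIONS))
```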
+For recommendations with parameters that need to be customized, you need to set the parameters:
**To set the parameters**:
For recommendations with parameters that need to be customized, you will need to
1. Open the **Parameters** tab and modify the values as required.
- :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Modifying the parameters for one of the recommendations in the Kubernetes data plane hardening protection bundle.":::
+ :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Screenshot showing where to modify the parameters for one of the recommendations in the Kubernetes data plane hardening protection bundle." lightbox="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png":::
1. Select **Review + save**.
For recommendations with parameters that need to be customized, you will need to
1. Open the recommendation details page and select **Deny**:
- :::image type="content" source="./media/defender-for-kubernetes-usage/enforce-workload-protection-example.png" alt-text="Deny option for Azure Policy parameter.":::
+ :::image type="content" source="./media/defender-for-kubernetes-usage/enforce-workload-protection-example.png" alt-text="Screenshot showing the Deny option for Azure Policy parameter." lightbox="media/defender-for-kubernetes-usage/enforce-workload-protection-example.png":::
- This will open the pane where you set the scope.
+ The pane to set the scope opens.
-1. When you've set the scope, select **Change to deny**.
+1. Set the scope and select **Change to deny**.
**To see which recommendations apply to your clusters**:
-1. Open Defender for Cloud's [asset inventory](asset-inventory.md) page and use the resource type filter to **Kubernetes services**.
+1. Open Defender for Cloud's [asset inventory](asset-inventory.md) page and set the resource type filter to **Kubernetes services**.
1. Select a cluster to investigate and review the available recommendations available for it.
-When viewing a recommendation from the workload protection set, you'll see the number of affected pods ("Kubernetes components") listed alongside the cluster. For a list of the specific pods, select the cluster and then select **Take action**.
+When you view a recommendation from the workload protection set, the number of affected pods ("Kubernetes components") is listed alongside the cluster. For a list of the specific pods, select the cluster and then select **Take action**.
**To test the enforcement, use the two Kubernetes deployments below**:
- One is for a healthy deployment, compliant with the bundle of workload protection recommendations.
-- The other is for an unhealthy deployment, non-compliant with *any* of the recommendations.
+- The other is for an unhealthy deployment, noncompliant with *any* of the recommendations.
-Deploy the example .yaml files as-is, or use them as a reference to remediate your own workload (step VIII).
+Deploy the example .yaml files as-is, or use them as a reference to remediate your own workload.
## Healthy deployment example .yaml file
spec:
targetPort: 9001
```
--
## Next steps
In this article, you learned how to configure Kubernetes data plane hardening.
-For other related material, see the following pages:
+For related material, see the following pages:
- [Defender for Cloud recommendations for compute](recommendations-reference.md#recs-compute)
- [Alerts for AKS cluster level](alerts-reference.md#alerts-k8scluster)
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Possible causes:
- Switching to express configuration failed due to a database policy error. Database policies aren't visible in the Azure portal for Defender for SQL vulnerability assessment, so we check for them during the validation stage of switching to express configuration.

  **Solution**: Disable all database policies for the relevant server and then try to switch to express configuration again.
- Cosnider using the [provided PowerShell script](powershell-sample-vulnerability-assessment-azure-sql.md) for assistance.
+- Consider using the [provided PowerShell script](powershell-sample-vulnerability-assessment-azure-sql.md) for assistance.
+ ## Classic configuration
You can use Azure PowerShell cmdlets to programmatically manage your vulnerabili
| [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed instance. |
+
For a script example, see [Azure SQL vulnerability assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).

#### Azure CLI
To handle Boolean types as true/false, set the baseline result with binary input
}
```
+
## Next steps
To handle Boolean types as true/false, set the baseline result with binary input
- Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview).
- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
- Check out [common questions](faq-defender-for-databases.yml) about Azure SQL databases.
++
defender-for-cloud View And Remediate Vulnerabilities For Images Running On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-running-on-aks.md
+
+ Title: How-to view and remediate runtime threat findings on Microsoft Defender for Cloud
+description: Learn how to view and remediate runtime threat findings
+++ Last updated : 07/11/2023++
+# View and remediate vulnerabilities for images running on your AKS clusters
+
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c) recommendation.
+
+To provide findings for the recommendation, Defender CSPM creates a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the findings of [agentless container registry vulnerability assessment](concept-agentless-containers.md#agentless-container-registry-vulnerability-assessment). The recommendation shows your running containers together with the vulnerabilities associated with the images each container uses, and provides vulnerability reports and remediation steps.
+
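+
+Conceptually, the correlation described above works like a join between the running-container inventory and the per-image scan results. A minimal Python sketch of that idea follows; the data shapes are invented for illustration and don't reflect how the service actually stores its inventory:
+
+```python
+# Invented sample data: running containers and per-image vulnerability findings.
+containers = [
+    {"cluster": "aks-prod", "pod": "web-7d9f", "image": "contoso.azurecr.io/web:1.0"},
+    {"cluster": "aks-prod", "pod": "api-5c2a", "image": "contoso.azurecr.io/api:2.3"},
+]
+image_findings = {
+    "contoso.azurecr.io/web:1.0": ["CVE-2023-0001", "CVE-2023-0002"],
+    "contoso.azurecr.io/api:2.3": [],
+}
+
+def vulnerable_containers(containers, image_findings):
+    """Pair each running container with the CVEs found in the image it runs."""
+    return [
+        {**c, "cves": image_findings[c["image"]]}
+        for c in containers
+        if image_findings.get(c["image"])  # keep only containers whose image has findings
+    ]
+
+print(vulnerable_containers(containers, image_findings))
+```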
+Vulnerability assessment for containers reports vulnerabilities to Defender for Cloud, which presents the findings as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+
+The resources are grouped into tabs:
+
+- **Healthy resources** – relevant resources, which either aren't impacted or on which you've already remediated the issue.
+- **Unhealthy resources** – resources that are still impacted by the identified issue.
+- **Not applicable resources** – resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
+
+First review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture; then use the following procedures to view, remediate, prioritize, and monitor vulnerabilities for your containers.
+
+## View vulnerabilities on a specific cluster
+
+**To view vulnerabilities for a specific cluster, do the following:**
+
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
+
+1. The recommendation details page opens showing the list of Kubernetes clusters ("affected resources") and categorizes them as healthy, unhealthy and not applicable, based on the images used by your workloads. Select the relevant cluster for which you want to remediate vulnerabilities.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-cluster.png" alt-text="Screenshot showing the affected clusters for the recommendation." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-cluster.png":::
+
+1. The cluster details page opens. It lists all currently running containers categorized into three tabs based on the vulnerability assessments of the images used by those containers. Select the specific container you want to explore.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png" alt-text="Screenshot showing where to select a specific container." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png":::
+
+1. This pane includes a list of the container vulnerabilities. Select each vulnerability to [resolve the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png" alt-text="Screenshot showing the list of container vulnerabilities." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png":::
+
+## View container images affected by a specific vulnerability
+
+**To view findings for a specific vulnerability, do the following:**
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
+
+1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the clusters. Select the specific vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the container clusters." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png":::
+
+1. The vulnerability details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png" alt-text="Screenshot showing the list of container images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png":::
+
+## Remediate vulnerabilities
+
+Use these steps to remediate each of the affected images found either in a specific cluster or for a specific vulnerability:
+
+1. Follow the steps in the remediation section of the recommendation pane.
+1. When you've completed the steps required to remediate the security issue, replace each affected image in your cluster, or replace each affected image for a specific vulnerability:
+ 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
+ 1. Push the updated image to trigger a scan; it may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Use the new image across all vulnerable workloads.
+ 1. Remove the vulnerable image from the registry.
+1. Check the recommendations page for the recommendation [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c).
+1. If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
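+
+The build-push-replace-remove loop above can be expressed as a dry run that just prints the commands to review before executing them. The registry, repository, deployment, and tag names below are placeholders, and the `docker`/`kubectl`/`az` invocations are examples to adapt, not an official script:
+
+```python
+def remediation_commands(registry, repo, old_tag, new_tag, deployment):
+    """Emit the command sequence for the remediation steps above (dry run only)."""
+    image = f"{registry}/{repo}:{new_tag}"
+    return [
+        f"docker build -t {image} .",                                  # build image with updated packages
+        f"docker push {image}",                                        # push to trigger a new scan
+        f"kubectl set image deployment/{deployment} {repo}={image}",   # roll the new image out to workloads
+        f"az acr repository untag --name {registry.split('.')[0]} --image {repo}:{old_tag}",  # remove the vulnerable image
+    ]
+
+for cmd in remediation_commands("contosoregistry.azurecr.io", "webapp", "1.0", "1.1", "webapp"):
+    print(cmd)
+```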
+## Next Steps
+
+- Learn how to [view and remediate vulnerabilities for registry images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud View And Remediate Vulnerability Assessment Findings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-assessment-findings.md
Title: How-to view and remediate vulnerability assessment findings for registry images
+ Title: How-to view and remediate vulnerability assessment findings for registry images on Microsoft Defender for Cloud
description: Learn how to view and remediate vulnerability assessment findings for registry images Previously updated : 05/30/2023 Last updated : 07/11/2023
-# View and remediate vulnerability assessment findings for registry images
+# View and remediate vulnerabilities for registry images
-Defender for Cloud filters and classifies findings from the scanner and presents them as a list of recommendations.
-
-First review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as those findings pose the greatest risk to your security posture; then use the following procedures to view, remediate, prioritize, and monitor vulnerability findings for your containers.
-
-## View findings of Vulnerability Assessment for Containers
-
-Vulnerability Assessment for Containers reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+Vulnerability assessment for containers reports vulnerabilities to Defender for Cloud, which presents them as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
The resources are grouped into tabs:
The resources are grouped into tabs:
- **Unhealthy resources** – resources that are still impacted by the identified issue.
- **Not applicable resources** – resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
- To view the findings, do the following:
+First review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture; then use the following procedures to view, remediate, prioritize, and monitor vulnerabilities for your containers.
-1. Open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+## View vulnerabilities on a specific container registry
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/view-container-registry-images-recommendation.png" alt-text="Screenshot showing the recommendation line for container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/view-container-registry-images-recommendation.png":::
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
-1. Select the recommendation; the recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/recommendation-details.png" alt-text="Screenshot showing the recommendation details." lightbox="media/view-and-remediate-vulnerability-assessment-findings/recommendation-details.png":::
+1. The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("affected resources") and the remediation steps. Select the affected registry.
-1. Select a specific registry to see the repositories in it that have vulnerable images.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png" alt-text="Screenshot showing the recommendation details and affected registries." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png":::
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-registry.png" alt-text="Screenshot showing where to select the specific registry." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-registry.png":::
+1. This opens the registry details with a list of repositories in it that have vulnerable images. Select the affected repository to see the images in it that are vulnerable.
- The registry details page opens with the list of affected repositories.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-repo.png" alt-text="Screenshot showing where to select the specific repository." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-repo.png":::
-1. Select a specific repository to see the images in it that are vulnerable.
+1. The repository details page opens. It lists all vulnerable images on that repository with distribution of the severity of vulnerabilities per image. Select the unhealthy image to see the vulnerabilities.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-repo.png" alt-text="Screenshot showing where to select the specific repository." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-repo.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png" alt-text="Screenshot showing where to select the unhealthy image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png":::
+
+1. The list of vulnerabilities for the selected image opens. To learn more about a finding, select the finding.
- The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png" alt-text="Screenshot showing the list of findings on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png":::
-1. Select a specific image to see the vulnerabilities.
+1. The vulnerabilities details pane opens. This pane includes a detailed description of the issue and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to resolving the vulnerability.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-image.png" alt-text="Screenshot showing where to select the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-image.png":::
-
-1. The list of findings for the selected image opens.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/image-details.png" alt-text="Screenshot showing the details of the finding on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/image-details.png":::
+
+## View images affected by a specific vulnerability
+
+1. Open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/list-of-findings-on-image.png" alt-text="Screenshot showing the list of findings on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/list-of-findings-on-image.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
-1. To learn more about a finding, select the finding; the findings details pane opens.
-
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/image-finding-details.png" alt-text="Screenshot showing the finding details of the image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/image-finding-details.png":::
+1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the images. Select the specific vulnerability.
- This pane includes a detailed description of the issue and links to external resources to help mitigate the threats, and information on the software version that contributes to resolving the vulnerability.
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the images." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png":::
-## Remediate the vulnerability for images in the registry
+1. The vulnerability finding details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/specific-vulnerability-details.png" alt-text="Screenshot showing the list of images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerability-assessment-findings/specific-vulnerability-details.png":::
+
+## Remediate vulnerabilities
+
+Use these steps to remediate each of the affected images found either in a specific registry or for a specific vulnerability:
1. Follow the steps in the remediation section of the recommendation pane.
-1. When you've completed the steps required to remediate the security issue, replace the image in your registry:
-1. Push the updated image to trigger a scan; it may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
-1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
- If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
-1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+1. When you've completed the steps required to remediate the security issue, replace each affected image in your registry (or each image affected by a specific vulnerability):
+ 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
+ 1. Push the updated image to trigger a scan; it may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Delete the vulnerable image from the registry.
+
- > [!NOTE]
- > Kubernetes deployments using the vulnerable images must be updated with the new patched images.
+1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+If the recommendation and the image you remediated still appear in the list of vulnerable images, check the remediation steps again.
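The rebuild, push, and delete sequence above can be sketched as a small helper that composes the commands before you run them. The registry, repository, and tag names are hypothetical, and the delete step assumes an Azure Container Registry (`az acr repository delete`); adapt both to your environment.

```python
def remediation_commands(registry: str, repo: str, old_tag: str, new_tag: str) -> list[list[str]]:
    """Compose the rebuild/push/delete commands for remediating a vulnerable image.

    Names are hypothetical; adjust the build context and registry for your setup.
    """
    image = f"{registry}/{repo}"
    return [
        # 1. Build a new image with the patched packages from the remediation details.
        ["docker", "build", "--pull", "-t", f"{image}:{new_tag}", "."],
        # 2. Push the updated image to trigger a scan (results can take up to 24 hours to refresh).
        ["docker", "push", f"{image}:{new_tag}"],
        # 3. Delete the old vulnerable image from the registry (Azure Container Registry shown).
        ["az", "acr", "repository", "delete", "--name", registry.split(".")[0],
         "--image", f"{repo}:{old_tag}", "--yes"],
    ]

# Inspect the commands before running them (for example, with subprocess.run).
for cmd in remediation_commands("contoso.azurecr.io", "web/app", "v1", "v2"):
    print(" ".join(cmd))
```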
## Next Steps
+- Learn how to [view and remediate vulnerabilities for images running on Azure Kubernetes clusters](view-and-remediate-vulnerabilities-for-images-running-on-aks.md).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
For example, to discover Contoso's infrastructure, you might use the domain, c
| Data source | Example |
|--|--|
| WhoIs records | Other domain names registered to the same contact email or registrant org used to register contoso.com likely also belong to Contoso |
-| WhoIs records | All domain names registered to any @contoso.com email address likely also belong to Microsoft |
+| WhoIs records | All domain names registered to any @contoso.com email address likely also belong to Contoso |
| WhoIs records | Other domains associated with the same name server as contoso.com may also belong to Contoso |
| DNS records | We can assume that Contoso also owns all observed hosts on the domains it owns and any websites that are associated with those hosts |
| DNS records | Domains with other hosts resolving to the same IP blocks might also belong to Contoso if the organization owns the IP block |
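The WhoIs inferences in the table amount to a simple expansion step over seed data. The record shape and values below are purely illustrative, not the product's actual data model:

```python
def expand_by_whois(seed_domain: str, org_email_domain: str, whois_records: list[dict]) -> set[str]:
    """Infer additional candidate domains for an organization from WhoIs records.

    A domain is attributed to the organization if it is registered to an email
    address on the org's domain (for example @contoso.com) or shares a name
    server with the seed domain. Illustrative record shape only.
    """
    seed = next(r for r in whois_records if r["domain"] == seed_domain)
    candidates = set()
    for rec in whois_records:
        if rec["domain"] == seed_domain:
            continue
        if rec["registrant_email"].endswith("@" + org_email_domain):
            candidates.add(rec["domain"])          # shared registrant email domain
        elif set(rec["name_servers"]) & set(seed["name_servers"]):
            candidates.add(rec["domain"])          # shared name server
    return candidates

records = [
    {"domain": "contoso.com", "registrant_email": "admin@contoso.com", "name_servers": ["ns1.contoso.net"]},
    {"domain": "contoso-shop.com", "registrant_email": "it@contoso.com", "name_servers": ["ns9.example.net"]},
    {"domain": "unrelated.org", "registrant_email": "x@fabrikam.com", "name_servers": ["ns1.fabrikam.net"]},
]
print(expand_by_whois("contoso.com", "contoso.com", records))
```

Real discovery chains many such inferences (WhoIs, DNS, certificates) iteratively from the seeds.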
global-secure-access Concept Global Secure Access Logs Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-global-secure-access-logs-monitoring.md
+
+ Title: Global Secure Access (preview) logs and monitoring
+description: Learn about the available Global Secure Access (preview) logs and monitoring options.
++++ Last updated : 06/11/2023++++
+# Global Secure Access (preview) logs and monitoring
+
+As an IT administrator, you need to monitor the performance, experience, and availability of the traffic flowing through your networks. Within the Global Secure Access (preview) logs there are many data points that you can review to gain insights into your network traffic. This article describes the logs and dashboards that are available to you and some common monitoring scenarios.
+
+## Network traffic dashboard
+
+The Global Secure Access network traffic dashboard provides you with visualizations of the traffic flowing through the Microsoft Entra Private Access and Microsoft Entra Internet Access services, which include Microsoft 365 and Private Access traffic. The dashboard provides a summary of the data related to product deployment and insights. Within these categories you can see the number of users, devices, and applications seen in the last 24 hours. You can also see device activity and cross-tenant access.
+
+For more information, see [Global Secure Access network traffic dashboard](concept-traffic-dashboard.md).
+
+## Audit logs
+
+The Microsoft Entra ID audit log is a valuable source of information when researching or troubleshooting changes to your Microsoft Entra ID environment. Changes related to Global Secure Access are captured in the audit logs in several categories, such as filtering policy, forwarding profiles, remote network management, and more.
+
+For more information, see [Global Secure Access audit logs](how-to-access-audit-logs.md).
+
+## Traffic logs
+
+The Global Secure Access traffic logs provide a summary of the network connections and transactions that are occurring in your environment. These logs look at *who* accessed *what* traffic from *where* to *where* and with what *result*. The traffic logs provide a snapshot of all connections in your environment and break that down into traffic that applies to your traffic forwarding profiles. The log details provide the traffic type, destination, source IP, and more.
+
+For more information, see [Global Secure Access traffic logs](how-to-view-traffic-logs.md).
+
+## Enriched Office 365 logs
+
+The *Enriched Office 365 logs* provide you with the information you need to gain insights into the performance, experience, and availability of the Microsoft 365 apps your organization uses. You can integrate the logs with a Log Analytics workspace or third-party SIEM tool for further analysis.
+
+Customers use existing *Office Audit logs* for monitoring, detection, investigation, and analytics. We understand the importance of these logs and have partnered with Microsoft 365 to include SharePoint logs. These enriched logs include details like client information and original public IP details that can be used for troubleshooting security scenarios.
+
+For more information, see [Enriched Office 365 logs](how-to-view-enriched-logs.md).
++
+## Next steps
+
+- [Learn how to access, store, and analyze Azure AD activity logs](../active-directory/reports-monitoring/howto-access-activity-logs.md)
global-secure-access Concept Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-private-access.md
+
+ Title: Learn about Microsoft Entra Private Access
+description: Learn how Microsoft Entra Private Access works.
++++ Last updated : 06/20/2023++++++
+# Learn about Microsoft Entra Private Access
+
+Microsoft Entra Private Access unlocks the ability to specify the fully qualified domain names (FQDNs) and IP addresses that you consider private or internal, so you can manage how your organization accesses them. With Private Access, you can modernize how your organization's users access private apps and resources. Remote workers don't need to use a VPN to access these resources if they have the Global Secure Access Client installed. The client quietly and seamlessly connects them to the resources they need.
+
+Private Access provides two ways to configure the private resources that you want to tunnel through the service. You can configure Quick Access, which is the primary group of FQDNs and IP addresses that you want to secure. You can also configure a Global Secure Access app for per-app access, which allows you to specify a subset of private resources that you want to secure. The Global Secure Access app provides a granular approach to securing your private resources.
+
+Microsoft Entra Private Access provides a quick and easy way to replace your VPN, allowing secure access to your internal resources with an easy one-time configuration that uses the capabilities of Conditional Access.
+
+## Quick Access and Global Secure Access apps
+
+When you configure the Quick Access and Global Secure Access apps, you create a new enterprise application. The app serves as a container for the private resources that you want to secure. The application has its own [Microsoft Entra ID Application Proxy connector](how-to-configure-connectors.md) to broker the connection between the service and the internal resource. You can assign users and groups to the app, and then use Conditional Access policies to control access to the app.
+
+Quick Access and Per-app Access are similar, but there are a few key concepts to understand so you can decide how to configure each one.
+
+### Quick Access app
+
+Quick Access is the primary group of FQDNs and IP addresses that you want to secure. As you're planning your Global Secure Access deployment, review your list of private resources and determine which resources you *always* want to tunnel through the service. This primary group of FQDNs, IP addresses, and IP ranges is what you add to Quick Access.
+
+![Diagram of the Quick Access app process with traffic flowing through the service to the app, and granting access through App Proxy.](media/concept-private-access/quick-access-diagram.png)
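The matching idea behind a group of FQDNs and IP ranges can be sketched as follows. This is illustrative logic under assumed rule shapes (FQDN suffixes plus CIDR ranges), not the service's implementation; the resource names are hypothetical.

```python
import ipaddress

def matches_quick_access(destination: str, fqdn_rules: list[str], ip_ranges: list[str]) -> bool:
    """Return True if a destination (FQDN or IP) falls under the configured rules.

    FQDN rules match the exact name or any subdomain; IP rules use CIDR ranges.
    """
    try:
        addr = ipaddress.ip_address(destination)
    except ValueError:
        # Destination is an FQDN, not an IP address.
        return any(destination == rule or destination.endswith("." + rule) for rule in fqdn_rules)
    return any(addr in ipaddress.ip_network(cidr) for cidr in ip_ranges)

# Hypothetical private resources in the primary Quick Access group.
fqdns = ["intranet.contoso.local"]
ranges = ["10.0.0.0/8"]
print(matches_quick_access("wiki.intranet.contoso.local", fqdns, ranges))  # True
print(matches_quick_access("10.1.2.3", fqdns, ranges))                     # True
print(matches_quick_access("93.184.216.34", fqdns, ranges))                # False
```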
+
+### Global Secure Access app
+
+Consider configuring a Global Secure Access app if any of the following scenarios sound familiar:
+
+- I need to apply a different set of Conditional Access policies to a subset of users.
+- I have a few private resources that I want to secure, but they should have a different set of access policies.
+- I have a subset of private resources that I only want to secure for a specific time frame.
+
+![Diagram of the Global Secure Access app process with traffic flowing through the service to the app, and granting access through App Proxy.](media/concept-private-access/private-access-diagram.png)
+
+The Global Secure Access app takes a more detailed approach to securing your private resources. You can create multiple per-app access apps to secure different private resources. Paired with Conditional Access policies, you have a powerful yet fine-grained way to secure your private resources.
+
+## Next steps
+
+- [Configure Quick Access](how-to-configure-quick-access.md)
+
global-secure-access Concept Remote Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-remote-network-connectivity.md
+
+ Title: Global Secure Access (preview) remote network connectivity
+description: Learn about remote network connectivity in Global Secure Access (preview).
++++ Last updated : 06/01/2023++++
+# Understand remote network connectivity
+
+Global Secure Access (preview) supports two connectivity options: installing a client on end-user devices and configuring a remote network, for example, a branch location with a physical router. Remote network connectivity streamlines how your end-users and guests connect from a remote network without needing to install the Global Secure Access Client.
+
+This article describes the key concepts of remote network connectivity along with common scenarios where it may be useful.
+
+## What is a remote network?
+
+Remote networks are remote locations or networks that require internet connectivity. For example, many organizations have a central headquarters and branch office locations in different geographic areas. These branch offices need access to corporate data and services. They need a secure way to talk to the data center, headquarters, and remote workers. The security of remote networks is crucial for many types of organizations.
+
+Remote networks, such as a branch location, are typically connected to the corporate network through a dedicated Wide Area Network (WAN) or a Virtual Private Network (VPN) connection. Employees in the branch location connect to the network using customer premises equipment (CPE).
+
+## Current challenges of remote network security
+
+**Bandwidth requirements have grown** – The number of devices requiring Internet access has increased exponentially. Traditional networks are difficult to scale. With the advent of Software as a Service (SaaS) applications like Microsoft 365, there are ever-growing demands for low-latency, jitter-free communication that traditional technologies like Wide Area Network (WAN) and Multi-Protocol Label Switching (MPLS) struggle with.
+
+**IT teams are expensive** – Typically, firewalls are placed on physical devices on-premises, which requires an IT team for setup and maintenance. Maintaining an IT team at every branch location is expensive.
+
+**Evolving threats** – Malicious actors are finding new avenues to attack the devices at the edge of networks. Edge devices in branch offices or even home offices are often the most vulnerable point of attack.
+
+## How does Global Secure Access remote network connectivity work?
+
+To connect a remote network to Global Secure Access, you set up an Internet Protocol Security (IPSec) tunnel between your on-premises equipment and the Global Secure Access endpoint. Traffic that you specify is routed through the IPSec tunnel to the nearest Global Secure Access endpoint. You can apply security policies in the Microsoft Entra admin center.
+
+Global Secure Access remote network connectivity provides a secure solution between a remote network and the
+Global Secure Access service. It doesn't provide a secure connection between one remote network and another.
+To learn more about secure remote network-to-remote network connectivity, see the [Azure Virtual WAN documentation](../virtual-wan/index.yml).
+
+## Why remote network connectivity may be important for you
+Maintaining security of a corporate network is increasingly difficult in a world of remote work and distributed teams. Security Service Edge (SSE) promises a world of security where customers can access their corporate resources from anywhere in the world without needing to backhaul their traffic to headquarters.
+
+## Common remote network connectivity scenarios
+
+### I don't want to install clients on thousands of devices on-premises.
+Generally, SSE is enforced by installing a client on a device. The client creates a tunnel to the nearest SSE endpoint and routes all Internet traffic through it. SSE solutions inspect the traffic and enforce security policies. If your users aren't mobile and based in a physical branch location, then remote network connectivity for that branch location removes the pain of installing a client on every device. You can connect the entire branch location by creating an IPSec tunnel between the core router of the branch office and the Global Secure Access endpoint.
+
+### I can't install clients on all the devices my organization owns.
+Sometimes, clients can't be installed on all devices. Global Secure Access currently provides clients for Windows. But what about Linux, mainframes, cameras, printers and other types of devices that are on premises and sending traffic to the Internet? This traffic still needs to be monitored and secured. When you connect a remote network, you can set policies for all traffic from that location regardless of the device where it originated.
+
+### I have guests on my network who don't have the client installed.
+Guest devices on your network may not have the client installed. To ensure that those devices adhere to your network security policies, you need their traffic routed through the Global Secure Access endpoint. Remote network connectivity solves this problem. No clients need to be installed on guest devices. All outgoing traffic from the remote network goes through security evaluation by default.
++
+## Next steps
+- [List all remote networks](how-to-list-remote-networks.md)
+- [Manage remote networks](how-to-manage-remote-networks.md)
global-secure-access Concept Traffic Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-traffic-dashboard.md
+
+ Title: Global Secure Access (preview) network traffic dashboard
+description: Learn how to use the Global Secure Access (preview) network traffic dashboard.
++++ Last updated : 05/15/2023++++
+# Global Secure Access (preview) network traffic dashboard
+
+The Global Secure Access (preview) network traffic dashboard provides you with visualizations of the network traffic acquired by the Microsoft Entra Private Access and Microsoft Entra Internet Access services. The dashboard compiles the data from your network configurations, including devices, users, and tenants, into several widgets that provide you with answers to the following questions:
+
+- How many active devices are deployed on my network?
+- Was there a recent change to the number of active devices?
+- What are the most used applications?
+- How many unique users are accessing the network across all my tenants?
+
+This article describes each of the widgets and how you can use the data on the dashboard to monitor and improve your network configurations.
+
+## How to access the dashboard
+
+Viewing the Global Secure Access dashboard requires a Reports Reader role in Microsoft Entra ID.
+
+To access the dashboard:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
+1. Go to **Global Secure Access (preview)** > **Dashboard**.
+
+ :::image type="content" source="media/concept-traffic-dashboard/traffic-dashboard.png" alt-text="Screenshot of the Private access profile, with the view applications link highlighted." lightbox="media/concept-traffic-dashboard/traffic-dashboard-expanded.png":::
+
+## Relationship map
+
+This widget provides a summary of how many users and devices are using the service and how many applications were secured through the service.
+
+- **Users**: The number of distinct users seen in the last 24 hours. The data uses the *user principal name (UPN)*.
+- **Devices**: The number of distinct devices seen in the last 24 hours. The data uses the *device ID*.
+- **Workloads**: The number of distinct destinations seen in the last 24 hours. The data uses fully qualified domain names (FQDNs) and IP addresses.
+
+![Screenshot of the relationship map widget.](media/concept-traffic-dashboard/relationship-map.png)
+
+## Product deployment
+
+There are two product deployment widgets that look at the active and inactive devices that you have deployed.
+
+- **Active devices**: The number of distinct device IDs seen in the last 24 hours and the % change during that time.
+- **Inactive devices**: The number of distinct device IDs that were seen in the last seven days, but not during the last 24 hours. The % change during the last 24 hours is also displayed.
+
+![Screenshot of the product deployment widget.](media/concept-traffic-dashboard/product-deployment.png)
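The active/inactive split described above can be sketched as a pure function over assumed (device ID, timestamp) sightings; the widget's real data source differs, so treat this only as a statement of the windowing logic.

```python
from datetime import datetime, timedelta

def device_activity(sightings: list[tuple[str, datetime]], now: datetime) -> tuple[set[str], set[str]]:
    """Split device IDs into active (seen in the last 24 hours) and inactive
    (seen in the last seven days, but not in the last 24 hours)."""
    day_ago = now - timedelta(hours=24)
    week_ago = now - timedelta(days=7)
    active = {device for device, seen in sightings if seen > day_ago}
    seen_this_week = {device for device, seen in sightings if seen > week_ago}
    return active, seen_this_week - active

now = datetime(2023, 6, 11, 12, 0)
sightings = [
    ("device-a", now - timedelta(hours=2)),  # active: seen within 24 hours
    ("device-b", now - timedelta(days=3)),   # inactive: seen this week, not today
    ("device-c", now - timedelta(days=10)),  # neither: outside the seven-day window
]
active, inactive = device_activity(sightings, now)
print(active, inactive)
```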
+
+## Product insights
+
+There are two product insights widgets that look at your cross-tenant access and top used applications.
+
+### Cross-tenant access
+
+- **Sign-ins**: The number of sign-ins through Microsoft Entra ID to Microsoft 365 in the last 24 hours. This widget provides you with information about the activity in your tenant.
+- **Total distinct tenants**: The number of distinct tenant IDs seen in the last 24 hours.
+- **Unseen tenants**: The number of distinct tenant IDs that were seen in the last 24 hours, but not in the previous seven days.
+- **Users**: The number of distinct user sign-ins to other tenants in the last 24 hours.
+- **Devices**: The number of distinct devices that signed in to other tenants in the last 24 hours.
+
+![Screenshot of the product insights widget.](media/concept-traffic-dashboard/product-insights.png)
+
+### Top used destinations
+
+The top-visited destinations are displayed in the second product insight widget. You can change this view to look at the following options:
+
+- **Transactions**: Displayed by default and shows the total number of transactions in the last 24 hours.
+- **Users**: The number of distinct users (UPN) accessing the destination in the last 24 hours.
+- **Devices**: The number of distinct device IDs accessing the destination in the last 24 hours.
+
+![Screenshot of the top destinations widget with the number of transactions field highlighted.](media/concept-traffic-dashboard/product-insights-top-destinations.png)
++
+## Next steps
+
+- [Explore the traffic logs](how-to-view-traffic-logs.md)
+- [Access the audit logs](how-to-access-audit-logs.md)
global-secure-access Concept Traffic Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-traffic-forwarding.md
+
+ Title: Global Secure Access (preview) traffic forwarding profiles
+description: Learn about the traffic forwarding profiles for Global Secure Access (preview).
++++ Last updated : 06/09/2023+++++
+# Global Secure Access (preview) traffic forwarding profiles
+
+With the traffic forwarding profiles in Global Secure Access (preview), you can apply policies to the network traffic that your organization needs to secure and manage. Network traffic is evaluated against the traffic forwarding policies you configure. The profiles are applied and the traffic goes through the service to the appropriate apps and resources.
+
+This article describes the traffic forwarding profiles and how they work.
+
+## Traffic forwarding
+
+**Traffic forwarding** enables you to configure the type of network traffic to tunnel through the Microsoft Entra Private Access and Microsoft Entra Internet Access services. You set up profiles to control how specific types of traffic are handled.
+
+When traffic comes through Global Secure Access, the service evaluates the type of traffic first through the **Microsoft 365 profile** and then through the **Private access profile**. Any traffic that doesn't match the first two profiles isn't forwarded to Global Secure Access.
+
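As a sketch, the ordered evaluation just described might look like the following; profile contents are reduced to FQDN sets for illustration, whereas real profiles also carry IPs, ports, and policies.

```python
def route_traffic(destination: str, m365_fqdns: set[str], private_fqdns: set[str]) -> str:
    """Evaluate a destination against the forwarding profiles in order:
    Microsoft 365 first, then Private access; anything unmatched isn't forwarded.
    Illustrative logic only, not the service's implementation."""
    if destination in m365_fqdns:
        return "microsoft-365-profile"
    if destination in private_fqdns:
        return "private-access-profile"
    return "not-forwarded"

# Hypothetical profile contents.
m365 = {"outlook.office365.com", "contoso.sharepoint.com"}
private = {"intranet.contoso.local"}
print(route_traffic("contoso.sharepoint.com", m365, private))  # microsoft-365-profile
print(route_traffic("intranet.contoso.local", m365, private))  # private-access-profile
print(route_traffic("example.com", m365, private))             # not-forwarded
```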
+For each traffic forwarding profile, you can configure three main details:
+
+- What traffic to forward to the service
+- What Conditional Access policies to apply
+- How your end-users connect to the service
+
+## Microsoft 365
+
+The Microsoft 365 traffic forwarding profile includes SharePoint Online, Exchange Online, and Microsoft 365 apps. All of the destinations for these apps are automatically included in the profile. Within each of the three main groups of destinations, you can choose to forward that traffic to Global Secure Access or bypass the service.
+
+Microsoft 365 traffic is forwarded to the service by either connecting through a [remote network](concept-remote-network-connectivity.md), such as branch office location, or through the [Global Secure Access desktop client](how-to-install-windows-client.md).
+
+## Private access
+
+With the Private Access profile, you can route traffic to your private resources. This traffic forwarding profile requires configuring Quick Access, which includes the fully qualified domain names (FQDNs) and IP addresses of the private apps and resources you want to forward to the service.
+
+Private access traffic can be forwarded to the service by connecting through the [Global Secure Access desktop client](how-to-install-windows-client.md).
++
+## Next steps
+
+- [Manage the Microsoft 365 traffic profile](how-to-manage-microsoft-365-profile.md)
+- [Manage the Private access traffic profile](how-to-manage-private-access-profile.md)
+- [Configure Quick Access](how-to-configure-quick-access.md)
global-secure-access Concept Universal Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/concept-universal-conditional-access.md
+
+ Title: Learn about Universal Conditional Access through Global Secure Access
+description: Conditional Access concepts.
++++ Last updated : 06/21/2023++++++
+# Universal Conditional Access through Global Secure Access
+
+In addition to sending traffic to Global Secure Access (preview), administrators can use Conditional Access policies to secure traffic profiles. They can mix and match controls as needed like requiring multifactor authentication, requiring a compliant device, or defining an acceptable sign-in risk. Applying these controls to network traffic, not just cloud applications, allows for what we call universal Conditional Access.
+
+Conditional Access on traffic profiles provides administrators with enormous control over their security posture. Administrators can enforce [Zero Trust principles](/security/zero-trust/) using policy to manage access to the network. Using traffic profiles allows consistent application of policy. For example, applications that don't support modern authentication can now be protected behind a traffic profile.
+
+This functionality allows administrators to consistently enforce Conditional Access policy based on [traffic profiles](concept-traffic-forwarding.md), not just applications or actions. Administrators can target specific traffic profiles like Microsoft 365 or private, internal resources with these policies. Users can access these configured endpoints or traffic profiles only when they satisfy the configured Conditional Access policies.
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have one or more of the following role assignments depending on the tasks they're performing.
+ * [Global Secure Access Administrator role](../active-directory/roles/permissions-reference.md)
+ * [Conditional Access Administrator](../active-directory/roles/permissions-reference.md#conditional-access-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) to create and interact with Conditional Access policies.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+* To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- Continuous access evaluation is not currently supported for Universal Conditional Access for Microsoft 365 traffic.
+- Applying Conditional Access policies to Private Access traffic is not currently supported. To model this behavior, you can apply a Conditional Access policy at the application level for Quick Access and Global Secure Access apps. For more information, see [Apply Conditional Access to Private Access apps](how-to-target-resource-private-access-apps.md).
+- Applying Conditional Access policies to Internet traffic is not currently supported. Internet traffic is in private preview. To request access to the private preview, complete [the private preview interest form](https://aka.ms/entra-ia-preview).
+- Microsoft 365 traffic can be accessed through remote network connectivity without the Global Secure Access Client; however, the Conditional Access policy isn't enforced. In other words, Conditional Access policies for Global Secure Access Microsoft 365 traffic are only enforced when a user has the Global Secure Access Client.
++
+## Conditional Access policies
+
+With Conditional Access, you can enable access controls and security policies for the network traffic acquired by Microsoft Entra Internet Access and Microsoft Entra Private Access.
+
+- Create a policy that targets all [Microsoft 365 traffic](how-to-target-resource-microsoft-365-profile.md).
+- Apply Conditional Access policies to your [Private Access apps](how-to-target-resource-private-access-apps.md), such as Quick Access.
+- Enable [Global Secure Access signaling in Conditional Access](how-to-source-ip-restoration.md) so the source IP address is visible in the appropriate logs and reports.
++
+## User experience
+
+When users sign in to a machine with the Global Secure Access Client installed, configured, and running for the first time, they're prompted to sign in. When users attempt to access a resource protected by a policy, like the previous example, the policy is enforced and they're prompted to sign in if they haven't already. Looking at the system tray icon for the Global Secure Access Client, you see a red circle indicating it's signed out or not running.
++
+When a user signs in, the Global Secure Access Client shows a green circle, indicating that the user is signed in and the client is running.
++
+## Next steps
+
+- [Enable source IP restoration](how-to-source-ip-restoration.md)
+- [Create a Conditional Access policy for Microsoft 365 traffic](how-to-target-resource-microsoft-365-profile.md)
+- [Create a Conditional Access policy for Private Access apps](how-to-target-resource-private-access-apps.md)
global-secure-access How To Access Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-access-audit-logs.md
+
+ Title: How to access Global Secure Access (preview) audit logs
+description: Learn how to access Global Secure Access (preview) audit logs.
++++ Last updated : 06/01/2023++++
+# How to access the Global Secure Access (preview) audit logs
+
+The Microsoft Entra ID audit logs are a valuable source of information when investigating or troubleshooting changes to your Microsoft Entra ID environment. Changes related to Global Secure Access are captured in the audit logs in several categories, such as traffic forwarding profiles, remote network management, and more. This article describes how to use the audit log to track changes to your Global Secure Access environment.
+
+## Prerequisites
+
+To access the audit log for your tenant, you must have one of the following roles:
+
+- Reports Reader
+- Security Reader
+- Security Administrator
+- Global Reader
+- Global Administrator
+
+Audit logs are available in [all editions of Microsoft Entra](../active-directory/reports-monitoring/concept-audit-logs.md). Storage and integration with analysis and monitoring tools may require additional licenses and roles.
+
+## Access the audit logs
+
+There are several ways to view the audit logs. For more information on the options and recommendations for when to use each option, see [How to access activity logs](/azure/active-directory/reports-monitoring/howto-access-activity-logs).
+
+### Access audit logs from the Microsoft Entra admin center
+
+You can access the audit logs from **Global Secure Access** and from **Microsoft Entra ID Monitoring and health**.
+
+**From Global Secure Access:**
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home) using one of the required roles.
+1. Go to **Global Secure Access** > **Audit logs**. The filters are pre-populated with the categories and activities related to Global Secure Access.
+
+**From Microsoft Entra ID Monitoring and health:**
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home) using one of the required roles.
+1. Go to **Identity** > **Monitoring and health** > **Audit logs**.
+1. Select the **Date** range you want to query.
+1. Open the **Service** filter, select **Global Secure Access**, and select the **Apply** button.
+1. Open the **Category** filter, select at least one of the available options, and select the **Apply** button.
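The portal filters above boil down to a date-range, service, and category filter. As a sketch over exported audit records (the field names are assumptions based on a common audit-log shape):

```python
from datetime import datetime

def filter_audit_logs(logs, start, end, service, categories):
    """Keep audit records in [start, end] for one service and a set of categories."""
    return [
        rec for rec in logs
        if start <= rec["activityDateTime"] <= end
        and rec["loggedByService"] == service
        and rec["category"] in categories
    ]

# Illustrative records; field names and values are assumed for this sketch.
logs = [
    {"activityDateTime": datetime(2023, 6, 5), "loggedByService": "Global Secure Access",
     "category": "FilteringPolicy", "activityDisplayName": "Update forwarding profile"},
    {"activityDateTime": datetime(2023, 6, 5), "loggedByService": "Core Directory",
     "category": "UserManagement", "activityDisplayName": "Add user"},
]
hits = filter_audit_logs(logs, datetime(2023, 6, 1), datetime(2023, 6, 30),
                         "Global Secure Access", {"FilteringPolicy", "RemoteNetwork"})
print([rec["activityDisplayName"] for rec in hits])
```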
+
+## Save audit logs
+
+Audit log data is only kept for 30 days by default, which may not be long enough for every organization. You may also want to integrate your logs with other services for enhanced monitoring and analysis if you need to view or query logs after 30 days.
+
+- [Stream activity logs to an event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) to integrate with other tools, like Azure Monitor or Splunk.
+- [Export activity logs for storage](../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+- [Monitor activity in real-time with Microsoft Sentinel](../sentinel/quickstart-onboard.md).
++
+## Next steps
+
+- [View network traffic logs](how-to-view-traffic-logs.md)
+- [Access the enriched Microsoft 365 logs](how-to-view-enriched-logs.md)
global-secure-access How To Assign Traffic Profile To Remote Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-assign-traffic-profile-to-remote-network.md
+
+ Title: How to assign a remote network to a traffic forwarding profile for Global Secure Access (preview)
+description: Learn how to assign a remote network to a traffic forwarding profile for Global Secure Access (preview).
++++ Last updated : 06/09/2023+++
+# Assign a remote network to a traffic forwarding profile for Global Secure Access (preview)
+
+If you're tunneling your Microsoft 365 traffic through the Microsoft Entra Internet Access service, you can assign remote networks to the traffic forwarding profile. Your end users can access Microsoft 365 resources by connecting to the service from a remote network, such as a branch office location.
+
+There are multiple ways to assign a remote network to the traffic forwarding profile:
+
+- When you create or manage a remote network in the Microsoft Entra admin center
+- When you enable or manage the traffic forwarding profile in the Microsoft Entra admin center
+- Using the Microsoft Graph API
+
+## Prerequisites
+
+To assign a remote network to a traffic forwarding profile, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID.
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+- To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- At this time, remote networks can only be assigned to the Microsoft 365 traffic forwarding profile.
+
+## Assign the Microsoft 365 traffic profile to a remote network
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md).
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote network**.
+1. Select a remote network.
+1. Select **Traffic profiles**.
+1. Select (or unselect) the checkbox for **Microsoft 365 traffic forwarding profile**.
+1. Select **Save**.
+
+![Screenshot of the traffic profiles in Remote networks.](media/how-to-assign-traffic-profile-to-remote-network/remote-network-traffic-profile.png)
+
+## Assign a remote network to the Microsoft 365 traffic forwarding profile
+
+1. Go to **Global Secure Access** > **Connect** > **Traffic forwarding**.
+1. Select the **Add/edit assignments** button for **Microsoft 365 traffic profile**.
+
+![Screenshot of the add/edit assignment button on the Microsoft 365 traffic profile.](media/how-to-assign-traffic-profile-to-remote-network/microsoft-365-traffic-profile-remote-network-button.png)
+
+### Assign a traffic profile to a remote network using the Microsoft Graph API
+
+Associating a traffic profile with your remote network using the Microsoft Graph API is a two-step process. First, retrieve the traffic forwarding profile ID, which is unique to each tenant. Then use that ID to assign the traffic forwarding profile to your remote network.
+
+A traffic forwarding profile can be assigned using Microsoft Graph on the `/beta` endpoint.
+
+1. Open a web browser and navigate to the Graph Explorer at https://aka.ms/ge.
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Enter the query:
+ ```
+ GET https://graph.microsoft.com/beta/networkaccess/forwardingprofiles
+ ```
+1. Select **Run query**.
+1. Find the ID of the desired traffic forwarding profile.
+1. Select **PATCH** as the HTTP method from the dropdown.
+1. Enter the query:
+ ```
+ PATCH https://graph.microsoft.com/beta/networkaccess/branches/d2b05c5-1e2e-4f1d-ba5a-1a678382ef16/forwardingProfiles
+ {
+ "@odata.context": "#$delta",
+ "value":
+ [{
+ "ID": "1adaf535-1e31-4e14-983f-2270408162bf"
+ }]
+ }
+ ```
+1. Select **Run query** to update the branch.
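The two Graph calls above can also be scripted. The sketch below is a minimal, hypothetical illustration in Python of the same two-step flow: pick a profile ID out of the `GET /networkaccess/forwardingprofiles` response, then build the delta payload for the `PATCH` to the branch. The `trafficForwardingType` field and the sample response shape are assumptions about the beta API, and token acquisition and the actual HTTP calls are omitted.

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/beta/networkaccess"

def find_profile_id(profiles_response, traffic_type):
    # Step 1: scan the JSON returned by GET /forwardingprofiles for the
    # profile whose type matches (field name assumed for illustration).
    for profile in profiles_response.get("value", []):
        if profile.get("trafficForwardingType") == traffic_type:
            return profile["id"]
    return None

def build_assignment_payload(profile_id):
    # Step 2: body for PATCH /branches/{branchId}/forwardingProfiles,
    # mirroring the delta payload shown in the steps above.
    return {
        "@odata.context": "#$delta",
        "value": [{"ID": profile_id}],
    }

# Assumed sample response shape from the GET call:
sample = {"value": [{"id": "1adaf535-1e31-4e14-983f-2270408162bf",
                     "trafficForwardingType": "m365"}]}
profile_id = find_profile_id(sample, "m365")
print(json.dumps(build_assignment_payload(profile_id)))
```

You would send the resulting JSON as the `PATCH` body against `{GRAPH_BASE}/branches/{branchId}/forwardingProfiles` with a valid bearer token.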
++
+## Next steps
+- [List remote networks](how-to-list-remote-networks.md)
global-secure-access How To Compliant Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-compliant-network.md
+
+ Title: Enable compliant network check with Conditional Access
+description: Require known compliant network locations with Conditional Access.
++++ Last updated : 07/07/2023++++++
+# Enable compliant network check with Conditional Access
+
+Organizations that use Conditional Access along with the Global Secure Access preview can prevent malicious access to Microsoft apps, third-party SaaS apps, and private line-of-business (LoB) apps using multiple conditions to provide defense-in-depth. These conditions may include device compliance, location, and more to protect against user identity or token theft. Global Secure Access introduces the concept of a compliant network within Conditional Access and continuous access evaluation. This compliant network check ensures users connect from a verified network connectivity model for their specific tenant and are compliant with security policies enforced by administrators.
+
+The Global Secure Access Client installed on devices, or a configured remote network, allows administrators to secure resources behind a compliant network with advanced Conditional Access controls. The compliant network is easier for administrators to manage and maintain, because they don't have to keep a list of IP addresses for all of an organization's locations. Administrators also don't need to hairpin traffic through their organization's VPN egress points to ensure security.
+
+This compliant network check is specific to each tenant.
+
+- Using this check you can ensure that other organizations using Microsoft's Global Secure Access services can't access your resources.
+ - For example: Contoso can protect their services like Exchange Online and SharePoint Online behind their compliant network check to ensure only Contoso users can access these resources.
+ - If another organization like Fabrikam was using a compliant network check, they wouldn't pass Contoso's compliant network check.
+
+The compliant network is different than [IPv4, IPv6, or geographic locations](/azure/active-directory/conditional-access/location-condition) you may configure in Microsoft Entra ID. No administrator upkeep is required.
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have one or more of the following role assignments depending on the tasks they're performing.
+ * The **Global Secure Access Administrator** role to manage the Global Secure Access preview features
+ * [Conditional Access Administrator](/azure/active-directory/roles/permissions-reference#conditional-access-administrator) or [Security Administrator](/azure/active-directory/roles/permissions-reference#security-administrator) to create and interact with Conditional Access policies and named locations.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+* To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- Continuous access evaluation is not currently supported for compliant network check.
+
+## Enable Global Secure Access signaling for Conditional Access
+
+To enable the required setting to allow the compliant network check, an administrator must take the following steps.
+
+1. Sign in to the **Microsoft Entra admin center** as a Global Secure Access Administrator.
+1. Select the toggle to **Enable Global Secure Access signaling in Conditional Access**.
+1. Browse to **Microsoft Entra ID Conditional Access** > **Named locations**.
+ 1. Confirm you have a location called **All Network Access locations of my tenant** with location type **Network Access**. Organizations can optionally mark this location as trusted.
++
+> [!CAUTION]
+> If your organization has active Conditional Access policies based on compliant network check, and you disable Global Secure Access signaling in Conditional Access, you may unintentionally block targeted end-users from being able to access the resources. If you must disable this feature, first delete any corresponding Conditional Access policies.
+
+## Protect Exchange and SharePoint Online behind the compliant network
+
+The following example shows a Conditional Access policy that requires Exchange Online and SharePoint Online to be accessed from behind a compliant network as part of the preview.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Conditional Access Administrator or Security Administrator.
+1. Browse to **Microsoft Entra ID** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](#user-exclusions).
+1. Under **Target resources** > **Include**, and select **Select apps**.
+ 1. Choose **Office 365 Exchange Online** and **Office 365 SharePoint Online**.
+1. Under **Conditions** > **Location**:
+ 1. Set **Configure** to **Yes**.
+ 1. Under **Include**, select **Any location**.
+ 1. Under **Exclude**, select **Selected locations**.
+ 1. Select the **All Network Access locations of my tenant** location.
+ 1. Select **Select**.
+1. Under **Access controls** > **Grant**, select **Block access**, then select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After administrators confirm the policy settings using [report-only mode](../active-directory/conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+### User exclusions
++
+## Try your compliant network policy
+
+1. On an end-user device with the [NaaS client installed and running](how-to-install-windows-client.md), browse to [https://outlook.office.com/mail/](https://outlook.office.com/mail/) or `https://yourcompanyname.sharepoint.com/`. You should have access to resources.
+1. Pause the NaaS client by right-clicking the application in the Windows tray and selecting **Pause**.
+1. Browse to [https://outlook.office.com/mail/](https://outlook.office.com/mail/) or `https://yourcompanyname.sharepoint.com/` again. You're blocked from accessing resources with an error message that says **You cannot access this right now**.
++
+## Troubleshooting
+
+Verify the new named location was automatically created using [Microsoft Graph](https://developer.microsoft.com/graph/graph-explorer).
+
+```
+GET https://graph.microsoft.com/beta/identity/conditionalAccess/namedLocations
+```
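When troubleshooting, you typically want to confirm that the compliant network entry appears in the `namedLocations` response. The Python sketch below filters a response body for it; the `@odata.type` value and the sample response shape are assumptions about the beta API, shown for illustration only.

```python
def find_compliant_network_locations(response):
    # Keep only named locations whose OData type marks them as a
    # compliant network location (type name assumed for illustration).
    return [
        loc.get("displayName")
        for loc in response.get("value", [])
        if loc.get("@odata.type", "").endswith("compliantNetworkNamedLocation")
    ]

# Hypothetical response shape from the GET call above:
sample = {"value": [
    {"@odata.type": "#microsoft.graph.compliantNetworkNamedLocation",
     "displayName": "All Network Access locations of my tenant"},
    {"@odata.type": "#microsoft.graph.ipNamedLocation",
     "displayName": "Office IPs"},
]}
print(find_compliant_network_locations(sample))
```

If the list comes back empty, Global Secure Access signaling may not be enabled for Conditional Access.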
+++
+## Next steps
+
+[The Global Secure Access Client for Windows (preview)](how-to-install-windows-client.md)
global-secure-access How To Configure Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-connectors.md
+
+ Title: How to configure connectors for Microsoft Entra Private Access
+description: Learn how to configure App Proxy connectors for Microsoft Entra Private Access.
++++ Last updated : 06/27/2023++++
+# How to configure App Proxy connectors for Microsoft Entra Private Access
+
+Connectors are lightweight agents that sit on-premises and facilitate the outbound connection to the Global Secure Access service. Connectors must be installed on a Windows Server that has access to the backend application. You can organize connectors into connector groups, with each group handling traffic to specific applications. To learn more about connectors, see [Understand Azure AD Application Proxy connectors](../active-directory/app-proxy/application-proxy-connectors.md).
+
+## Prerequisites
+
+To add an on-premises application to Azure Active Directory (Azure AD), you need:
+
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+* An Application Administrator account.
+
+User identities must be synchronized from an on-premises directory or created directly within your Azure AD tenants. Identity synchronization allows Azure AD to pre-authenticate users before granting them access to App Proxy published applications and to have the necessary user identifier information to perform single sign-on (SSO).
+
+### Windows server
+
+To use Application Proxy, you need a Windows server running Windows Server 2012 R2 or later. You'll install the Application Proxy connector on the server. This connector server needs to connect to the Application Proxy services in Azure and to the on-premises applications that you plan to publish.
+
+For high availability in your environment, we recommend having more than one Windows server.
+
+### Prepare your on-premises environment
+
+Start by enabling communication to Azure data centers to prepare your environment for Azure AD Application Proxy. If there's a firewall in the path, make sure it's open. An open firewall allows the connector to make HTTPS (TCP) requests to the Application Proxy.
+
+> [!IMPORTANT]
+> If you are installing the connector for Azure Government cloud follow the [prerequisites](../active-directory/hybrid/connect/reference-connect-government-cloud.md#allow-access-to-urls) and [installation steps](../active-directory/hybrid/connect/reference-connect-government-cloud.md). This requires enabling access to a different set of URLs and an additional parameter to run the installation.
+
+#### Open ports
+
+Open the following ports to **outbound** traffic.
+
+| Port number | How it's used |
+| -- | |
+| 80 | Downloading certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+| 443 | All outbound communication with the Application Proxy service |
+
+If your firewall enforces traffic according to originating users, also open ports 80 and 443 for traffic from Windows services that run as a Network Service.
+
+#### Allow access to URLs
+
+Allow access to the following URLs:
+
+| URL | Port | How it's used |
+| | | |
+| `*.msappproxy.net` <br> `*.servicebus.windows.net` | 443/HTTPS | Communication between the connector and the Application Proxy cloud service |
+| `crl3.digicert.com` <br> `crl4.digicert.com` <br> `ocsp.digicert.com` <br> `crl.microsoft.com` <br> `oneocsp.microsoft.com` <br> `ocsp.msocsp.com`<br> | 80/HTTP | The connector uses these URLs to verify certificates. |
+| `login.windows.net` <br> `secure.aadcdn.microsoftonline-p.com` <br> `*.microsoftonline.com` <br> `*.microsoftonline-p.com` <br> `*.msauth.net` <br> `*.msauthimages.net` <br> `*.msecnd.net` <br> `*.msftauth.net` <br> `*.msftauthimages.net` <br> `*.phonefactor.net` <br> `enterpriseregistration.windows.net` <br> `management.azure.com` <br> `policykeyservice.dc.ad.msft.net` <br> `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 443/HTTPS | The connector uses these URLs during the registration process. |
+| `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 80/HTTP | The connector uses these URLs during the registration process. |
+
+You can allow connections to `*.msappproxy.net`, `*.servicebus.windows.net`, and the other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
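A domain-suffix rule of the kind described above admits any hostname matching the wildcard pattern. This small Python sketch illustrates the matching semantics with the standard `fnmatch` module (the host names are examples, not endpoints you must allow beyond the table above):

```python
from fnmatch import fnmatch

# Two of the suffix patterns from the table above.
ALLOWED_SUFFIXES = ["*.msappproxy.net", "*.servicebus.windows.net"]

def is_allowed(host):
    # A suffix-based firewall rule admits any host matching a pattern.
    return any(fnmatch(host, pattern) for pattern in ALLOWED_SUFFIXES)

print(is_allowed("contoso.msappproxy.net"))  # True
print(is_allowed("example.com"))             # False
```

Note that `*` also matches multi-label subdomains, so `a.b.msappproxy.net` passes too, which is the behavior you want for these service endpoints.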
+
+> [!IMPORTANT]
+> Avoid all forms of inline inspection and termination on outbound TLS communications between Azure AD Application Proxy connectors and Azure AD Application Proxy Cloud services.
+
+## Install and register a connector
+
+To use Private Access, install a connector on each Windows server you're using for Microsoft Entra Private Access. The connector is an agent that manages the outbound connection from the on-premises application servers to Global Secure Access. You can install a connector on servers that also have other authentication agents installed such as Azure AD Connect.
+
+> [!NOTE]
+> Setting up App Proxy connectors and connector groups require planning and testing to ensure you have the right configuration for your organization. If you don't already have connector groups set up, pause this process and return when you have a connector group ready.
+>
+>The minimum version of connector required for Private Access is **1.5.3417.0**.
+
+**To install the connector**:
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Administrator of the directory that uses Application Proxy.
+ - For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain.
+1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy.
+1. Go to **Global Secure Access (Preview)** > **Connect** > **Connectors**.
+1. Select **Download connector service**.
+
+ ![Screenshot of the Download connector service button in the App proxy page.](media/how-to-configure-connectors/app-proxy-download-connector-service.png)
+1. Read the Terms of Service. When you're ready, select **Accept terms & Download**.
+1. At the bottom of the window, select **Run** to install the connector. An install wizard opens.
+1. Follow the instructions in the wizard to install the service. When you're prompted to register the connector with the Application Proxy for your Microsoft Entra ID tenant, provide your Global Administrator credentials.
+ - For Internet Explorer (IE): If IE Enhanced Security Configuration is set to On, you may not see the registration screen. To get access, follow the instructions in the error message. Make sure that Internet Explorer Enhanced Security Configuration is set to Off.
+
+## Things to know
+
+If you've previously installed a connector, reinstall it to get the latest version. When upgrading, uninstall the existing connector and delete any related folders. To see information about previously released versions and what changes they include, see [Application Proxy: Version Release History](../active-directory/app-proxy/application-proxy-release-version-history.md).
+
+If you choose to have more than one Windows server for your on-premises applications, you need to install and register the connector on each server. You can organize the connectors into connector groups. For more information, see [Connector groups](../active-directory/app-proxy/application-proxy-connector-groups.md).
+
+If you have installed connectors in different regions, you can optimize traffic by selecting the closest Application Proxy cloud service region to use with each connector group, see [Optimize traffic flow with Azure Active Directory Application Proxy](../active-directory/app-proxy/application-proxy-network-topology.md).
+
+## Verify the installation and registration
+
+You can use the Global Secure Access portal or your Windows server to confirm that a new connector installed correctly.
+
+### Verify the installation through the Microsoft Entra admin center
+
+To confirm the connector installed and registered correctly:
+
+1. Sign in to your tenant directory in the Microsoft Entra admin center.
+1. Go to **Global Secure Access (Preview)** > **Connect** > **Connectors**
+ - All of your connectors and connector groups appear on this page.
+1. View a connector to verify its details.
+ - Expand the connector to view the details if it's not already expanded.
+ - An active green label indicates that your connector can connect to the service. However, even though the label is green, a network issue could still block the connector from receiving messages.
+
+ ![Screenshot of the connector groups and connector group details.](media/how-to-configure-connectors/app-proxy-connectors-status.png)
+
+For more help with installing a connector, see [Problem installing the Application Proxy Connector](../active-directory/app-proxy/application-proxy-connector-installation-problem.md).
+
+### Verify the installation through your Windows server
+
+To confirm the connector installed and registered correctly:
+1. Select the **Windows** key and enter `services.msc` to open the Windows Services Manager.
+1. Check that the status for the following services is **Running**.
+ - *Microsoft Azure AD Application Proxy Connector* enables connectivity.
+ - *Microsoft Azure AD Application Proxy Connector Updater* is an automated update service.
+ - The updater checks for new versions of the connector and updates the connector as needed.
+
+ ![Screenshot of the App proxy connector and connector updater services in Windows Services Manager.](media/how-to-configure-connectors/app-proxy-services.png)
+
+1. If the status for the services isn't **Running**, right-click to select each service and choose **Start**.
+
+## Create connector groups
+
+You can create as many connector groups as you want. To create a connector group:
+
+1. Go to **Global Secure Access (Preview)** > **Connect** > **Connectors**.
+1. Select **New connector group**.
+1. Give your new connector group a name, then use the dropdown menu to select which connectors belong in this group.
+1. Select **Save**.
+
+To learn more about connector groups, see [Publish applications on separate networks and locations using connector groups](../active-directory/app-proxy/application-proxy-connector-groups.md).
++
+## Next steps
+
+The next step for getting started with Microsoft Entra Private Access is to configure the Quick Access or Global Secure Access application:
+- [Configure Quick Access to your private resources](how-to-configure-quick-access.md)
+- [Configure per-app access for Microsoft Entra Private Access](how-to-configure-per-app-access.md)
global-secure-access How To Configure Customer Premises Equipment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-customer-premises-equipment.md
+
+ Title: How to configure customer premises equipment for Global Secure Access (preview)
+description: Learn how to configure customer premises equipment for Global Secure Access (preview).
++++ Last updated : 06/08/2023++++
+# Configure customer premises equipment for Global Secure Access (preview)
+
+IPSec tunnel is a bidirectional communication. One side of the communication is established when [adding a device link to a remote network](how-to-manage-remote-network-device-links.md) in Global Secure Access (preview). During that process, you enter your public IP address and BGP addresses in the Microsoft Entra admin center to tell us about your network configurations.
+
+The other side of the communication channel is configured on your customer premises equipment (CPE). This article provides the steps to set up your CPE using the network configurations provided by Microsoft.
+
+## Prerequisites
+
+To configure your customer premises equipment (CPE), you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID.
+- Sent an email to Global Secure Access onboarding according to the onboarding process in the **Remote network** area of Global Secure Access.
+- Received the connectivity information from Global Secure Access onboarding.
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+## How to configure your customer premises equipment
+
+To onboard to Global Secure Access remote network connectivity, you must have completed the [onboarding process](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks). In order to configure your CPE, you need the connectivity information provided by the Global Secure Access onboarding team.
+
+Once you have the details you need, go to the preferred interface of your CPE (UX or API), and enter the information you received to set up the IPSec tunnel. Follow the instructions provided by the CPE provider.
+
+> [!IMPORTANT]
+>The crypto profile you specified for the device link should match with what you specify on your CPE. If you chose the "default" IKE policy when configuring the device link, use the configurations described in the [Remote network configurations](reference-remote-network-configurations.md) article.
++
+## Next steps
+
+- [How to manage remote networks](how-to-manage-remote-networks.md)
+- [How to manage remote network device links](how-to-manage-remote-network-device-links.md)
global-secure-access How To Configure Per App Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-per-app-access.md
+
+ Title: How to configure Per-app Access using Global Secure Access applications
+description: Learn how to configure Per-app Access using Global Secure Access applications for Microsoft Entra Private Access
++++ Last updated : 07/07/2023++++
+# How to configure Per-app Access using Global Secure Access applications
+
+Microsoft Entra Private Access provides secure access to your organization's internal resources. You create a Global Secure Access application and specify the internal, private resources that you want to secure. By configuring a Global Secure Access application, you're creating per-app access to your internal resources. A Global Secure Access application gives you granular control over how resources are accessed on a per-app basis.
+
+This article describes how to configure Per-app Access using Global Secure Access applications.
+
+## Prerequisites
+
+To configure a Global Secure Access app, you must have:
+
+- The **Global Secure Access Administrator** and **Application Administrator** roles in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+To manage App Proxy connector groups, which is required for Global Secure Access apps, you must have:
+
+- An **Application Administrator** role in Microsoft Entra ID
+- An Azure AD Premium P1/P2 license
+
+### Known limitations
+
+- Avoid overlapping app segments between Quick Access and Global Secure Access apps.
+- Tunneling traffic to Private Access destinations by IP address is supported only for IP ranges outside of the end-user device local subnet.
+- At this time, Private Access traffic can only be acquired with the Global Secure Access Client. Remote networks can't be assigned to the Private access traffic forwarding profile.
+
+## Setup overview
+
+Per-App Access is configured by creating a new Global Secure Access app. You create the app, select a connector group, and add network access segments. These settings make up the individual app that you can assign users and groups to.
+
+To configure Per-App Access, you need to have a connector group with at least one active [Microsoft Entra ID Application Proxy](../active-directory/app-proxy/application-proxy.md) connector. This connector group handles the traffic to this new application. With Connectors, you can isolate apps per network and connector.
+
+To summarize, the overall process is as follows:
+
+1. Create an App Proxy connector group, if you don't already have one.
+1. Create a Global Secure Access app.
+1. Assign users and groups to the app.
+1. Configure Conditional Access policies.
+1. Enable Microsoft Entra Private Access.
+
+Let's look at each of these steps in more detail.
+
+## Create an App Proxy connector group
+
+To configure a Global Secure Access app, you must have a connector group with at least one active App Proxy connector.
+
+If you don't already have a connector set up, see [Configure connectors](how-to-configure-connectors.md).
+
+## Create a Global Secure Access application
+
+To create a new app, you provide a name, select a connector group, and then add application segments. App segments include the fully qualified domain names (FQDNs) and IP addresses you want to tunnel through the service. You can complete all three steps at the same time, or you can add them after the initial setup is complete.
+
+### Choose name and connector group
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** with the appropriate roles.
+1. Go to **Global Secure Access (preview)** > **Applications** > **Enterprise applications**.
+1. Select **New application**.
+
+ ![Screenshot of the Enterprise apps and Add new application button.](media/how-to-configure-per-app-access/new-enterprise-app.png)
+
+1. Enter a name for the app.
+1. Select a Connector group from the dropdown menu.
+ - Existing connector groups appear in the dropdown menu.
+1. Select the **Save** button at the bottom of the page to create your app without adding private resources.
+
+### Add application segment
+
+The **Add application segment** process is where you define the FQDNs and IP addresses that you want to include in the traffic for the Global Secure Access app. You can add sites when you create the app and return to add more or edit them later.
+
+You can add fully qualified domain names (FQDN), IP addresses, and IP address ranges.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Applications** > **Enterprise applications**.
+1. Select **New application**.
+1. Select **Add application segment**.
+
+ ![Screenshot of the Add application segment button.](media/how-to-configure-per-app-access/enterprise-app-add-application-segment.png)
+
+ - **IP address**: Internet Protocol version 4 (IPv4) address, such as 192.0.2.1, that identifies a device on the network.
+ - **Fully qualified domain name** (including wildcard FQDNs): Domain name that specifies the exact location of a computer or a host in the Domain Name System (DNS).
+ - **IP address range (CIDR)**: Classless Inter-Domain Routing is a way of representing a range of IP addresses in which an IP address is followed by a suffix that indicates the number of network bits in the subnet mask. For example, 192.0.2.0/24 indicates that the first 24 bits of the IP address represent the network address, while the remaining 8 bits represent the host address.
+ - **IP address range (IP to IP)**: Range of IP addresses from start IP (such as 192.0.2.1) to end IP (such as 192.0.2.10).
+1. Enter the appropriate detail for what you selected.
+1. Enter the port. The following table provides the most commonly used ports and their associated networking protocols:
+
+ | Port | Protocol |
+ | | |
+ | 22 | Secure Shell (SSH) |
+ | 80 | Hypertext Transfer Protocol (HTTP) |
+ | 443 | Hypertext Transfer Protocol Secure (HTTPS) |
+ | 445 | Server Message Block (SMB) file sharing |
+ | 3389 | Remote Desktop Protocol (RDP) |
+1. Select the **Save** button when you're finished.
+
+> [!NOTE]
+> You can add up to 500 application segments to your app.
+>
+> Do not overlap FQDNs, IP addresses, and IP ranges between your Quick Access app and any Private Access apps.
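If you're unsure how a CIDR value in the steps above maps to an address range, Python's standard `ipaddress` module makes the split between network and host bits concrete:

```python
import ipaddress

# The example range from the app segment options: 24 network bits,
# leaving 32 - 24 = 8 host bits.
net = ipaddress.ip_network("192.0.2.0/24")

print(net.prefixlen)          # 24
print(net.num_addresses)      # 256 (2**8 addresses from 8 host bits)
print(net.network_address)    # 192.0.2.0
print(net.broadcast_address)  # 192.0.2.255
```

Checking a candidate segment this way before entering it also helps you spot overlaps with ranges already used by Quick Access or other Private Access apps.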
+
+### Assign users and groups
+
+You need to grant access to the app you created by assigning users and/or groups to the app. For more information, see [Assign users and groups to an application.](../active-directory/manage-apps/assign-user-or-group-access-portal.md)
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Applications** > **Enterprise applications**.
+1. Search for and select your application.
+1. Select **Users and groups** from the side menu.
+1. Add users and groups as needed.
+
+> [!NOTE]
+> Users must be directly assigned to the app or to the group assigned to the app. Nested groups are not supported.
+
+## Update application segments
+
+You can add or update the FQDNs and IP addresses included in your app at any time.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Applications** > **Enterprise applications**.
+1. Search for and select your application.
+1. Select **Network access properties** from the side menu.
+ - To add a new FQDN or IP address, select **Add application segment**.
+ - To edit an existing app, select it from the **Destination type** column.
+
+## Enable or disable access with the Global Secure Access Client
+
+You can enable or disable access to the Global Secure Access app using the Global Secure Access Client. This option is selected by default. If you disable it, the FQDNs and IP addresses included in the app segments aren't tunneled through the service.
+
+![Screenshot of the enable access checkbox.](media/how-to-configure-per-app-access/per-app-access-enable-checkbox.png)
+
+## Assign Conditional Access policies
+
+Conditional Access policies for Per-app Access are configured at the application level for each app. Conditional Access policies can be created and applied to the application from two places:
+
+- Go to **Global Secure Access (preview)** > **Applications** > **Enterprise applications**. Select an application and then select **Conditional Access** from the side menu.
+- Go to **Microsoft Entra ID** > **Protection** > **Conditional Access** > **Policies**. Select **+ Create new policy**.
+
+For more information, see [Apply Conditional Access policies to Private Access apps](how-to-target-resource-private-access-apps.md).
+
+## Enable Microsoft Entra Private Access
+
+Once you have your app configured, your private resources added, and users assigned to the app, you can enable the Private access traffic forwarding profile. You can enable the profile before configuring a Global Secure Access app, but without the app and profile configured, there's no traffic to forward.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Connect** > **Traffic forwarding**.
+1. Select the checkbox for **Private access profile**.
+
+![Screenshot of the traffic forwarding page with the Private access profile enabled.](media/how-to-configure-per-app-access/private-access-traffic-profile.png)
+
+## Next steps
+
+The next step for getting started with Microsoft Entra Private Access is to [enable the Private Access traffic forwarding profile](how-to-manage-private-access-profile.md).
+
+For more information about Private Access, see the following articles:
+- [Learn about traffic management profiles](concept-traffic-forwarding.md)
+- [Manage the Private Access traffic profile](how-to-manage-private-access-profile.md)
+- [Apply Conditional Access policies to the Global Secure Access application](how-to-target-resource-private-access-apps.md)
global-secure-access How To Configure Quick Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-configure-quick-access.md
+
+ Title: How to configure Quick Access for Global Secure Access
+description: Learn how to configure Quick Access for Microsoft Entra Private Access.
+ Last updated: 06/27/2023
+# How to configure Quick Access for Global Secure Access
+
+With Global Secure Access, you can define specific fully qualified domain names (FQDNs) or IP addresses of private resources to include in the traffic for Microsoft Entra Private Access. Your organization's employees can then access the apps and sites that you specify. This article describes how to configure Quick Access for Microsoft Entra Private Access.
+
+## Prerequisites
+
+To configure Quick Access, you must have:
+
+- The **Global Secure Access Administrator** and **Application Administrator** roles in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+To manage App Proxy connector groups, which is required for Quick Access, you must have:
+
+- An **Application Administrator** role in Microsoft Entra ID
+- An Azure AD Premium P1/P2 license
+
+### Known limitations
+
+- Avoid overlapping app segments between Quick Access and per-app access.
+- Tunneling traffic to Private Access destinations by IP address is supported only for IP ranges outside of the end-user device local subnet.
+- At this time, Private access traffic can only be acquired with the Global Secure Access Client. Remote networks can't be assigned to the Private access traffic forwarding profile.
+
+## Setup overview
+
+Configuring your Quick Access settings is a major component of using Microsoft Entra Private Access. When you configure Quick Access for the first time, Private Access creates a new enterprise application. The properties of this new app are automatically configured to work with Private Access.
+
+To configure Quick Access, you need to have a connector group with at least one active [Microsoft Entra ID Application Proxy](../active-directory/app-proxy/application-proxy.md) connector. The connector group handles the traffic to this new application. Once you have Quick Access and an App proxy connector group configured, you need to grant access to the app.
+
+To summarize, the overall process is as follows:
+
+1. Create a connector group with at least one active App Proxy connector, if you don't already have one.
+1. Configure Quick Access, which creates a new enterprise app.
+1. Assign users and groups to the app.
+1. Configure Conditional Access policies.
+1. Enable the Private access traffic forwarding profile.
+
+Let's look at each of these steps in more detail.
+
+## Create an App Proxy connector group
+
+To configure Quick Access, you must have a connector group with at least one active App Proxy connector.
+
+If you don't already have a connector group set up, see [Configure connectors for Quick Access](how-to-configure-connectors.md).
+
+## Configure Quick Access
+
+On the Quick Access page, you provide a name for the Quick Access app, select a connector group, and add application segments, which include FQDNs and IP addresses. You can complete all three steps at the same time, or you can add the application segments after the initial setup is complete.
+
+### Name and connector group
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** with the appropriate roles.
+1. Go to **Global Secure Access (preview)** > **Applications** > **Quick access**.
+1. Enter a name. *We recommend using the name Quick Access*.
+1. Select a Connector group from the dropdown menu.
+
+ ![Screenshot of the Quick Access app name.](media/how-to-configure-quick-access/new-quick-access-name.png)
+
+ - Existing connector groups appear in the dropdown menu.
+1. Select the **Save** button at the bottom of the page to create your "Quick Access" app without FQDNs and IP addresses.
+
+### Add Quick Access application segment
+
+The **Add Quick Access application segment** portion of this process is where you define the FQDNs and IP addresses that you want to include in the traffic for Microsoft Entra Private Access. You can add these resources when you create the Quick Access app and return to add more or edit them later.
+
+You can add fully qualified domain names (FQDN), IP addresses, and IP address ranges.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Applications** > **Quick Access**.
+1. Select **Add Quick Access application segment**.
+
+ ![Screenshot of the Add Quick Access application segment button.](media/how-to-configure-quick-access/add-quick-access-application-segment.png)
+
+1. In the **Create application segment** panel that opens, select a **Destination type**. Choose from one of the following options. Depending on what you select, the subsequent fields change accordingly.
+ - **IP address**: Internet Protocol version 4 (IPv4) address, such as 192.0.2.1, that identifies a device on the network.
+ - **Fully qualified domain name** (including wildcard FQDNs): Domain name that specifies the exact location of a computer or a host in the Domain Name System (DNS).
+ - **IP address range (CIDR)**: Classless Inter-Domain Routing is a way of representing a range of IP addresses in which an IP address is followed by a suffix indicating the number of network bits in the subnet mask. For example, 192.0.2.0/24 indicates that the first 24 bits of the IP address represent the network address, while the remaining 8 bits represent the host address.
+ - **IP address range (IP to IP)**: Range of IP addresses from start IP (such as 192.0.2.1) to end IP (such as 192.0.2.10).
+1. Enter the appropriate detail for what you selected.
+1. Enter the port. The following table provides the most commonly used ports and their associated networking protocols:
+
+ | Port | Protocol |
+ | -- | -- |
+ | 22 | Secure Shell (SSH) |
+ | 80 | Hypertext Transfer Protocol (HTTP) |
+ | 443 | Hypertext Transfer Protocol Secure (HTTPS) |
+ | 445 | Server Message Block (SMB) file sharing |
+ | 3389 | Remote Desktop Protocol (RDP) |
+
+1. Select the **Save** button when you're finished.
+
+> [!NOTE]
+> You can add up to 500 application segments to your Quick Access app.
+>
+> Do not overlap FQDNs, IP addresses, and IP ranges between your Quick Access app and any Private Access apps.
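An IP-to-IP range such as the 192.0.2.1 to 192.0.2.10 example above corresponds to a small set of CIDR blocks. A quick sketch using Python's standard `ipaddress` module shows the equivalence:

```python
import ipaddress

# Express the IP-to-IP range 192.0.2.1 - 192.0.2.10 as equivalent CIDR blocks.
start = ipaddress.ip_address("192.0.2.1")
end = ipaddress.ip_address("192.0.2.10")

cidrs = list(ipaddress.summarize_address_range(start, end))
for net in cidrs:
    print(net)  # the 10-address range splits into several small CIDR blocks
```

This can help you decide whether to enter a segment as a CIDR or as an IP-to-IP range.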
+
+## Assign users and groups
+
+When you configure Quick Access, a new enterprise app is created on your behalf. You need to grant access to the Quick Access app you created by assigning users and/or groups to the app.
+
+You can view the properties from **Quick Access** or navigate to **Enterprise applications** and search for your Quick Access app.
+
+1. Select the **Edit application settings** button from Quick Access.
+
+ ![Screenshot of the edit application settings button.](media/how-to-configure-quick-access/edit-application-settings.png)
+
+1. Select **Users and groups** from the side menu.
+
+1. Add users and groups as needed.
+ - For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+
+> [!NOTE]
+> Users must be directly assigned to the app or to the group assigned to the app. Nested groups are not supported.
+
+## Update Quick Access application segments
+
+You can add or update the FQDNs and IP addresses included in your Quick Access app at any time.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Applications** > **Quick Access**.
+ - To add an FQDN or IP address, select **Add Quick Access application segment**.
+ - To edit an FQDN or IP address, select it from the **Destination type** column.
+
+## Link Conditional Access policies
+
+Conditional Access policies can be applied to your Quick Access app. Applying Conditional Access policies provides more options for managing access to applications, sites, and services.
+
+Creating a Conditional Access policy is covered in detail in [How to create a Conditional Access policy for Private Access apps](how-to-target-resource-private-access-apps.md).
+
+## Enable Microsoft Entra Private Access
+
+Once you have your Quick Access app configured, your private resources added, and users assigned to the app, you can enable the Private access profile from the **Traffic forwarding** area of Global Secure Access. You can enable the profile before configuring Quick Access, but without the app and profile configured, there's no traffic to forward.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Connect** > **Traffic forwarding**.
+1. Select the checkbox for **Private access profile**.
+
+![Screenshot of the traffic forwarding page with the Private access profile enabled.](media/how-to-configure-quick-access/private-access-traffic-profile.png)
+
+## Next steps
+
+The next step for getting started with Microsoft Entra Private Access is to [enable the Private Access traffic forwarding profile](how-to-manage-private-access-profile.md).
+
+For more information about Private Access, see the following articles:
+- [Learn about traffic profiles](concept-traffic-forwarding.md)
+- [Configure per-app access](how-to-configure-per-app-access.md)
+
global-secure-access How To Create Remote Network Custom Ike Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-create-remote-network-custom-ike-policy.md
+
+ Title: How to create a remote network with a custom IKE policy for Global Secure Access (preview)
+description: Learn how to create a remote network with a custom IKE policy for Global Secure Access (preview).
+ Last updated: 06/08/2023
+# Create a remote network with a custom IKE policy for Global Secure Access (preview)
+
+An IPSec tunnel is a bidirectional communication channel. This article provides the steps to set up the policy side of the communication channel using the Microsoft Graph API. The other side of the communication is configured on your customer premises equipment.
+
+For more information about creating a remote network and the custom IKE policy, see [Create a remote network](how-to-create-remote-networks.md#create-a-remote-network-with-the-microsoft-entra-admin-center) and [Remote network configurations](reference-remote-network-configurations.md).
+
+## Prerequisites
+
+To create a remote network with a custom IKE policy, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID.
+- Sent an email to the Global Secure Access onboarding team according to the [onboarding process](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks).
+- Received the connectivity information from the Global Secure Access onboarding team.
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+## How to use Microsoft Graph to create a remote network with a custom IKE policy
+
+Remote networks with a custom IKE policy can be created using Microsoft Graph on the `/beta` endpoint.
+
+To get started, follow these instructions to work with remote networks using the Microsoft Graph API in Graph Explorer.
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **POST** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Add the following query, then select the **Run query** button.
+
+```http
+ POST https://graph.microsoft.com/beta/networkaccess/connectivity/branches
+{
+ "name": "BranchOffice_CustomIKE",
+ "country": "United States",
+ "region": "eastUS",
+ "version": "1.0.0",
+ "bandwidthCapacity": 500,
+ "deviceLinks": [
+ {
+ "name": "CPE Link 1",
+ "ipAddress": "20.125.118.219",
+ "version": "1.0.0",
+ "deviceVendor": "Other",
+ "bgpConfiguration": {
+ "ipAddress": "172.16.11.5",
+ "asn": 8888
+ },
+
+ "tunnelConfiguration": {
+ "@odata.type": "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Custom",
+ "preSharedKey": "Detective5OutgrowDiligence",
+ "saLifeTimeSeconds": 300,
+ "ipSecEncryption": "gcmAes128",
+ "ipSecIntegrity": "gcmAes128",
+ "ikeEncryption": "gcmAes128",
+ "ikeIntegrity": "gcmAes128",
+ "dhGroup": "dhGroup14",
+ "pfsGroup": "pfs14"
+ }
+ },
+
+ {
+ "name": "CPE Link 2",
+ "ipAddress": "20.125.118.220",
+ "version": "1.0.0",
+ "deviceVendor": "Other",
+ "bgpConfiguration": {
+ "ipAddress": "172.16.11.6",
+ "asn": 8888
+ },
+
+ "tunnelConfiguration": {
+ "@odata.type": "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Custom",
+ "preSharedKey": "Detective5OutgrowDiligence",
+ "saLifeTimeSeconds": 300,
+ "ipSecEncryption": "gcmAes128",
+ "ipSecIntegrity": "gcmAes128",
+ "ikeEncryption": "gcmAes128",
+ "ikeIntegrity": "gcmAes128",
+ "dhGroup": "dhGroup14",
+ "pfsGroup": "pfs14"
+ }
+ }
+
+ ]
+}
+```
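If you prefer scripting the call instead of Graph Explorer, the same POST can be sketched with Python's standard library. This is a minimal sketch, not an official sample: it assumes a Graph access token acquired elsewhere, and the helper names (`custom_ike_link`, `create_branch_request`) and the pre-shared key are hypothetical:

```python
import json
import urllib.request

GRAPH_BRANCHES_URL = "https://graph.microsoft.com/beta/networkaccess/connectivity/branches"

def custom_ike_link(name, ip, bgp_ip, asn, psk):
    """Build one device link with the custom IKEv2 tunnel settings from the example."""
    return {
        "name": name,
        "ipAddress": ip,
        "version": "1.0.0",
        "deviceVendor": "Other",
        "bgpConfiguration": {"ipAddress": bgp_ip, "asn": asn},
        "tunnelConfiguration": {
            "@odata.type": "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Custom",
            "preSharedKey": psk,
            "saLifeTimeSeconds": 300,
            "ipSecEncryption": "gcmAes128",
            "ipSecIntegrity": "gcmAes128",
            "ikeEncryption": "gcmAes128",
            "ikeIntegrity": "gcmAes128",
            "dhGroup": "dhGroup14",
            "pfsGroup": "pfs14",
        },
    }

def create_branch_request(token, name, region, links):
    """Prepare the POST request; the caller sends it with urllib.request.urlopen."""
    body = {"name": name, "region": region, "version": "1.0.0", "deviceLinks": links}
    return urllib.request.Request(
        GRAPH_BRANCHES_URL,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )

links = [custom_ike_link("CPE Link 1", "20.125.118.219", "172.16.11.5", 8888, "<pre-shared-key>")]
req = create_branch_request("<access-token>", "BranchOffice_CustomIKE", "eastUS", links)
# urllib.request.urlopen(req)  # uncomment to send with a real access token
```

The sketch only builds the request object, so you can inspect the body before sending it with real credentials.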
+
+## Next steps
+
+- [How to manage remote networks](how-to-manage-remote-networks.md)
+- [How to manage remote network device links](how-to-manage-remote-network-device-links.md)
global-secure-access How To Create Remote Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-create-remote-networks.md
+
+ Title: How to create remote networks for Global Secure Access (preview)
+description: Learn how to create remote networks for Global Secure Access (preview).
+ Last updated: 06/29/2023
+# How to create a remote network
+
+Remote networks are remote locations or networks, such as a branch office, that require internet connectivity. Setting up remote networks connects your users in remote locations to Global Secure Access (preview). Once a remote network is configured, you can assign a traffic forwarding profile to manage your corporate network traffic. Global Secure Access provides remote network connectivity so you can apply network security policies to your outbound traffic.
+
+There are multiple ways to connect remote networks to Global Secure Access. In a nutshell, you're creating an Internet Protocol Security (IPSec) tunnel between a core router at your remote network and the nearest Global Secure Access endpoint. All internet-bound traffic is routed through the core router of the remote network for security policy evaluation in the cloud. Installation of a client isn't required on individual devices.
+
+This article explains how to create a remote network for Global Secure Access (preview).
+
+## Prerequisites
+
+To configure remote networks, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID
+- Completed the [onboarding process](#onboard-your-tenant-for-remote-networks) for remote networks
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+- To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- At this time, the number of remote networks per tenant is limited to 10, and the number of device links per remote network is limited to four.
+- Customer premises equipment (CPE) devices must support the following protocols:
+ - Internet Protocol Security (IPSec)
+ - Internet Key Exchange Version 2 (IKEv2)
+ - Border Gateway Protocol (BGP)
+- The remote network connectivity solution uses *RouteBased* and *Responder* modes.
+- Microsoft 365 traffic can be accessed through remote network connectivity without the Global Secure Access Client; however, Conditional Access policies aren't enforced. In other words, Conditional Access policies for the Global Secure Access Microsoft 365 traffic are only enforced when a user has the Global Secure Access Client.
+
+## Onboard your tenant for remote networks
+
+Before you can set up remote networks, you need to onboard your tenant information with Microsoft. This one-time process enables your tenant to use remote network connectivity.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Secure Access Administrator.
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote network**.
+1. Select the link to the **Onboarding form** in the message at the top of the page.
+
+ ![Screenshot of the onboarding form link.](media/how-to-create-remote-networks/create-remote-network-onboarding-form-link.png)
+
+1. In the window that opens, review the Tenant ID and remote network region details.
+1. Select the **Next** button.
+
+ ![Screenshot of the first tab of the onboarding form.](media/how-to-create-remote-networks/onboard-tenant-info.png)
+
+1. Select the email address link. It opens a predrafted email in your default mail client on your device. Send that email to the Global Secure Access team. Once your tenant is processed - which may take up to seven business days - we'll send IPsec tunnel and BGP connectivity details to the email address you used.
+
+ ![Screenshot of the send email steps for the onboard tenant process.](media/how-to-create-remote-networks/onboard-tenant-send-email.png)
+
+1. Once the email step is complete, return to this form, select the acknowledgment checkbox, and select the **Submit** button.
+
+You MUST complete the email step before selecting the checkbox.
+
+## Create a remote network with the Microsoft Entra admin center
+
+Remote networks are configured on three tabs. You must complete each tab in order. After completing a tab, either select the next tab at the top of the page or select the **Next** button at the bottom of the page.
+
+### Basics
+The first step is to provide the name and location of your remote network. Completing this tab is required.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Secure Access Administrator.
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote network**.
+1. Select the **Create remote network** button and provide the following details:
+ - **Name**
+ - **Region**
+1. Select the **Next** button.
+
+ ![Screenshot of the General tab of the create device link process.](media/how-to-create-remote-networks/create-basics-tab.png)
+
+### Connectivity
+
+The connectivity tab is where you add the device links for the remote network. You need to provide the device type, IP address, border gateway protocol (BGP) address, and autonomous system number (ASN) for each device link. You can also add device links after creating the remote network.
+
+This process is covered in detail in the [How to manage remote network device links](how-to-manage-remote-network-device-links.md).
+
+![Screenshot of the general tab of the create device link process.](media/how-to-create-remote-networks/device-link-general-tab.png)
+
+### Traffic forwarding profiles
+
+You can assign the remote network to a traffic forwarding profile when you create the remote network. You can also assign the remote network at a later time. For more information, see [Traffic forwarding profiles](concept-traffic-forwarding.md).
+
+1. Either select the **Next** button or select the **Traffic profiles** tab.
+1. Select the appropriate traffic forwarding profile.
+1. Select the **Review + Create** button.
+
+### Review and create
+
+On the final tab, review all of the settings that you provided, and then select the **Create remote network** button.
+
+## Create remote networks using the Microsoft Graph API
+
+Global Secure Access remote networks can be viewed and managed using Microsoft Graph on the `/beta` endpoint. Creating a remote network and assigning a traffic forwarding profile are separate API calls.
+
+### Create a remote network
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select POST as the HTTP method.
+1. Select BETA as the API version.
+1. Add the following query to use the Create Branches API:
+ ```
+ POST https://graph.microsoft.com/beta/networkaccess/connectivity/branches
+ {
+ "name": "ContosoBranch",
+ "country": "United States ", //must be removed
+ "region": "East US",
+ "bandwidthCapacity": 1000, //must be removed. This goes under deviceLink.
+ "deviceLinks": [
+ {
+ "name": "CPE Link 1",
+ "ipAddress": "20.125.118.219",
+ "version": "1.0.0",
+ "deviceVendor": "Other",
+ "bgpConfiguration": {
+ "ipAddress": "172.16.11.5",
+ "asn": 8888
+ },
+ "tunnelConfiguration": {
+ "@odata.type": "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Default",
+ "preSharedKey": "Detective5OutgrowDiligence"
+ }
+ }]
+ }
+ ```
+
+1. Select **Run query** to create a remote network.
+
+### Assign a traffic forwarding profile
+
+Associating a traffic forwarding profile with your remote network using the Microsoft Graph API is a two-step process. First, locate the ID of the traffic forwarding profile. The ID is different for every tenant. Second, associate the traffic forwarding profile with your desired remote network.
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Enter the query:
+ ```
+ GET https://graph.microsoft.com/beta/networkaccess/forwardingprofiles
+ ```
+1. Select **Run query**.
+1. Find the ID of the desired traffic forwarding profile.
+1. Select PATCH as the HTTP method from the dropdown.
+1. Enter the query:
+ ```
+ PATCH https://graph.microsoft.com/beta/networkaccess/connectivity/branches/d2b05c5-1e2e-4f1d-ba5a-1a678382ef16/forwardingProfiles
+ {
+ "@odata.context": "#$delta",
+ "value":
+ [{
+ "ID": "1adaf535-1e31-4e14-983f-2270408162bf"
+ }]
+ }
+ ```
+
+1. Select **Run query** to update the remote network.
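The two-step process above can also be scripted. This is a minimal sketch, not an official sample: the helper names are hypothetical, the IDs are the placeholders from the example query, and a valid Graph access token is assumed:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/beta/networkaccess"

def list_profiles_request(token):
    """Step 1: GET the tenant's traffic forwarding profiles to find the profile ID."""
    return urllib.request.Request(
        f"{GRAPH}/forwardingprofiles",
        headers={"Authorization": f"Bearer {token}"},
    )

def assign_profile_request(token, branch_id, profile_id):
    """Step 2: PATCH the remote network with a delta payload referencing the profile ID."""
    body = {"@odata.context": "#$delta", "value": [{"ID": profile_id}]}
    return urllib.request.Request(
        f"{GRAPH}/connectivity/branches/{branch_id}/forwardingProfiles",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="PATCH",
    )

# Placeholder IDs from the example above; substitute your own values.
req = assign_profile_request("<access-token>",
                             "d2b05c5-1e2e-4f1d-ba5a-1a678382ef16",
                             "1adaf535-1e31-4e14-983f-2270408162bf")
# urllib.request.urlopen(req)  # uncomment to send with a real access token
```

Building the request objects first lets you verify the delta payload before issuing the PATCH.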
+
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [target the Microsoft 365 traffic profile with Conditional Access policy](how-to-target-resource-microsoft-365-profile.md).
+
+For more information about remote networks, see the following articles:
+- [List remote networks](how-to-list-remote-networks.md)
+- [Manage remote networks](how-to-manage-remote-networks.md)
+- [Learn how to add remote network device links](how-to-manage-remote-network-device-links.md)
global-secure-access How To Get Started With Global Secure Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-get-started-with-global-secure-access.md
+
+ Title: Get started with Global Secure Access (preview)
+description: Get started with Global Secure Access (preview) for Microsoft Entra.
+ Last updated: 07/03/2023
+# Get started with Global Secure Access
+
+Global Secure Access (preview) is the centralized location in the Microsoft Entra admin center where you can configure and manage Microsoft Entra Private Access and Microsoft Entra Internet Access. Many features and settings apply to both services, but some are specific to one or the other.
+
+This guide helps you get started configuring both services for the first time.
+
+## Prerequisites
+
+Administrators who interact with **Global Secure Access preview** features must have the [Global Secure Access Administrator role](../active-directory/roles/permissions-reference.md). Some features may also require additional roles.
+
+To follow the [Zero Trust principle of least privilege](/security/zero-trust/), consider using [Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-configure.md) to activate just-in-time privileged role assignments.
+
+The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense). To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+There may be limitations with some features of the Global Secure Access preview, which are defined in the associated articles.
+
+## Access the Microsoft Entra admin center
+
+Global Secure Access (preview) is the area in the Microsoft Entra admin center where you configure and manage Microsoft Entra Internet Access and Microsoft Entra Private Access.
+
+- Go to [**https://entra.microsoft.com**](https://entra.microsoft.com/).
+
+If you encounter access issues, refer to this [FAQ regarding tenant restrictions](resource-faq.yml).
+
+## Microsoft Entra Internet Access
+
+Microsoft Entra Internet Access isolates the traffic for Microsoft 365 applications and resources, such as Exchange Online and SharePoint Online. Users can access these resources by connecting to the Global Secure Access Client or through a remote network, such as in a branch office location.
+
+### Install the client to access Microsoft 365 traffic
+
+![Diagram of the basic Microsoft Entra Internet Access traffic flow.](media/how-to-get-started-with-global-secure-access/internet-access-basic-option.png)
+
+1. [Enable the Microsoft 365 traffic forwarding profile](how-to-manage-microsoft-365-profile.md).
+1. [Install and configure the Global Secure Access Client on end-user devices](how-to-install-windows-client.md).
+1. [Enable universal tenant restrictions](how-to-universal-tenant-restrictions.md).
+1. [Enable enhanced Global Secure Access signaling](how-to-source-ip-restoration.md#enable-global-secure-access-signaling-for-conditional-access).
+
+After you complete these four steps, users with the Global Secure Access Client installed on their Windows device can securely access Microsoft 365 resources from anywhere. The user's source IP address appears in the traffic logs for Microsoft Entra Internet Access.
+
+### Create a remote network, apply Conditional Access, and review the logs
+
+![Diagram of the Microsoft Entra Internet Access traffic flow with remote networks and Conditional Access.](media/how-to-get-started-with-global-secure-access/internet-access-remote-networks-option.png)
+
+1. [Create a remote network](how-to-manage-remote-networks.md).
+1. [Target the Microsoft 365 traffic profile with Conditional Access policy](how-to-target-resource-microsoft-365-profile.md).
+1. [Review the Global Secure Access logs](concept-global-secure-access-logs-monitoring.md).
+
+After you complete these optional steps, users can connect to Microsoft 365 services without the Global Secure Access client if they're connecting through the remote network you created *and* if they meet the conditions you added to the Conditional Access policy.
+
+## Microsoft Entra Private Access
+
+Microsoft Entra Private Access provides a secure, zero-trust access solution for accessing internal resources without requiring a VPN. Configure Quick Access and enable the Private access traffic forwarding profile to specify the sites and apps you want routed through Microsoft Entra Private Access. At this time, the Global Secure Access Client must be installed on end-user devices to use Microsoft Entra Private Access, so that step is included in this section.
+
+### Configure Quick Access to your primary private resources
+
+Set up Quick Access for broader access to your network using Microsoft Entra Private Access.
+
+![Diagram of the Quick Access traffic flow for private resources.](media/how-to-get-started-with-global-secure-access/private-access-diagram-quick-access.png)
+
+1. [Configure an App Proxy connector and connector group](how-to-configure-connectors.md).
+1. [Configure Quick Access to your private resources](how-to-configure-quick-access.md).
+1. [Enable the Private Access traffic forwarding profile](how-to-manage-private-access-profile.md).
+1. [Install and configure the Global Secure Access Client on end-user devices](how-to-install-windows-client.md).
+
+After you complete these four steps, users with the Global Secure Access client installed on a Windows device can connect to your primary resources through the Quick Access app and an App Proxy connector.
+
+### Configure Global Secure Access apps for per-app access to private resources
+
+Create specific private apps for granular segmented access to private access resources using Microsoft Entra Private Access.
+
+![Diagram of the Global Secure Access app traffic flow for private resources.](media/how-to-get-started-with-global-secure-access/private-access-diagram-global-secure-access.png)
+
+1. [Configure an App Proxy connector and connector group](how-to-configure-connectors.md).
+1. [Create a private Global Secure Access application](how-to-configure-per-app-access.md).
+1. [Enable the Private Access traffic forwarding profile](how-to-manage-private-access-profile.md).
+1. [Install and configure the Global Secure Access Client on end-user devices](how-to-install-windows-client.md).
+
+After you complete these steps, users with the Global Secure Access client installed on a Windows device can connect to your private resources through a Global Secure Access app and App Proxy connector.
+
+Optionally:
+
+- [Secure Quick Access applications with Conditional Access policies](how-to-target-resource-private-access-apps.md).
+- [Review the Global Secure Access logs](concept-global-secure-access-logs-monitoring.md).
+
+## Next steps
+
+To get started with Microsoft Entra Internet Access, start by [enabling the Microsoft 365 traffic forwarding profile](how-to-manage-microsoft-365-profile.md).
+
+To get started with Microsoft Entra Private Access, start by [configuring an App Proxy connector group for the Quick Access app](how-to-configure-connectors.md).
+
global-secure-access How To Install Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-install-windows-client.md
+
+ Title: The Global Secure Access Client for Windows (preview)
+description: Install the Global Secure Access Client for Windows to enable client connectivity.
++ Last updated : 06/23/2023+++++
+# The Global Secure Access Client for Windows (preview)
+
+The Global Secure Access Client gives organizations control over network traffic at the end-user computing device, allowing them to route specific traffic profiles through Microsoft Entra Internet Access and Microsoft Entra Private Access. Routing traffic this way enables more controls, like continuous access evaluation (CAE), device compliance, or multifactor authentication, to be required for resource access.
+
+## Prerequisites
+
+- The Global Secure Access Client is supported on 64-bit versions of Windows 11 or Windows 10.
+- Devices must be either Azure AD joined or hybrid Azure AD joined.
+ - Azure AD registered devices aren't supported.
+- Local administrator credentials are required for installation.
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+### Known limitations
+
+- Multiple user sessions on the same device, like those from a Remote Desktop Server (RDP), aren't supported.
+- Connecting to networks that use a captive portal, like some guest wireless network solutions, might fail. As a workaround, you can [pause the Global Secure Access Client](#troubleshooting).
+- Virtual machines where both the host and guest Operating Systems have the Global Secure Access Client installed aren't supported. Individual virtual machines with the client installed are supported.
+- If the Global Secure Access Client isn't able to connect to the service (for example due to an authorization or Conditional Access failure), the service *bypasses* the traffic. Traffic is sent direct-and-local instead of being blocked. In this scenario, you can create a Conditional Access policy for the [compliant network check](how-to-compliant-network.md), to block traffic if the client isn't able to connect to the service.
++
+There are several other limitations based on the traffic forwarding profile in use:
+
+| Traffic forwarding profile | Limitation |
+| --- | --- |
+| [Microsoft 365](how-to-manage-microsoft-365-profile.md) | Tunneling [IPv6 traffic isn't currently supported](#disable-ipv6-and-secure-dns). |
+| [Microsoft 365](how-to-manage-microsoft-365-profile.md) and [Private access](how-to-manage-private-access-profile.md) | To tunnel network traffic based on rules of FQDNs (in the forwarding profile), [DNS over HTTPS (Secure DNS) needs to be disabled](#disable-ipv6-and-secure-dns). |
+| [Microsoft 365](how-to-manage-microsoft-365-profile.md) | The Global Secure Access Client currently only supports TCP traffic. Exchange Online uses the QUIC protocol for some traffic over UDP port 443. Force this traffic to use HTTPS (443 TCP) by [blocking the QUIC traffic with a local firewall rule](#block-quic-when-tunneling-exchange-online-traffic). Non-HTTP protocols, such as POP3, IMAP, and SMTP, aren't acquired by the client and are sent direct-and-local. |
+| [Microsoft 365](how-to-manage-microsoft-365-profile.md) and [Private access](how-to-manage-private-access-profile.md) | If the end-user device is configured to use a proxy server, locations that you wish to tunnel using the Global Secure Access Client must be excluded from that configuration. For examples, see [Proxy configuration example](#proxy-configuration-example). |
+| [Private access](how-to-manage-private-access-profile.md) | Single-label domains, like `https://contosohome`, aren't supported for private apps; instead, use a fully qualified domain name (FQDN), like `https://contosohome.contoso.com`. Administrators can also choose to append DNS suffixes via Windows. |
+
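+As a sketch of that last workaround, administrators can append DNS suffixes on Windows so single-label names resolve as FQDNs. The `contoso.com` suffix below is an illustrative placeholder for your own domain:
+
+```powershell
+# Append a DNS suffix search list so a single-label name like "contosohome"
+# resolves as "contosohome.contoso.com". Run from an elevated PowerShell session.
+Set-DnsClientGlobalSetting -SuffixSearchList @("contoso.com")
+
+# Verify the configured suffix list.
+Get-DnsClientGlobalSetting | Select-Object -ExpandProperty SuffixSearchList
+```
+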
+## Download the client
+
+The most current version of the Global Secure Access Client can be downloaded from the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md).
+1. Browse to **Global Secure Access (Preview)** > **Devices** > **Clients**.
+1. Select **Download**.
+
+ ![Screenshot of the download Windows client button.](media/how-to-install-windows-client/client-download-screen.png)
+
+## Install the client
+
+Organizations can install the client interactively, silently with the `/quiet` switch, or use mobile device management platforms like [Microsoft Intune](/mem/intune/apps/apps-win32-app-management) to deploy it to their devices.
+
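+For example, a silent installation for broad deployment might look like the following sketch; the installer file name, extension, and version are assumptions and will vary with the build you download:
+
+```powershell
+# Run the downloaded setup file silently (no UI). Run from an elevated session;
+# the exact file name depends on the version you downloaded.
+.\GlobalSecureAccessInstaller.exe /quiet
+```
+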
+1. Copy the Global Secure Access Client setup file to your client machine.
+1. Run the setup file, like *GlobalSecureAccessInstaller 1.5.527*. Accept the software license terms.
+1. After the client is installed, users are prompted to sign in with their Microsoft Entra ID credentials.
+
+ :::image type="content" source="media/how-to-install-windows-client/client-install-first-sign-in.png" alt-text="Screenshot showing the sign-in box appears after client installation completes." lightbox="media/how-to-install-windows-client/client-install-first-sign-in.png":::
+
+1. After users sign in, the connection icon turns green, and double-clicking on it opens a notification with client information showing a connected state.
+
+ :::image type="content" source="media/how-to-install-windows-client/client-install-connected.png" alt-text="Screenshot showing the client is connected.":::
+
+## Troubleshooting
+
+To troubleshoot the Global Secure Access Client, right-click the client icon in the taskbar.
++
+- **Switch user**
+ - Forces sign-in screen to change user or reauthenticate the existing user.
+- **Pause**
+ - This option can be used to temporarily disable traffic tunneling. Because the client is part of your organization's security posture, we recommend always leaving it running.
+ - This option stops the Windows services related to the client. When these services are stopped, traffic is no longer tunneled from the client machine to the cloud service. Network traffic behaves as if the client isn't installed while the client is paused. If the client machine is restarted, the services automatically restart with it.
+- **Resume**
+ - This option starts the underlying services related to the Global Secure Access Client. This option would be used to resume after temporarily pausing the client for troubleshooting. Traffic resumes tunneling from the client to the cloud service.
+- **Restart**
+ - This option stops and starts the Windows services related to the client.
+- **Collect logs**
+ - Collect logs for support and further troubleshooting. These logs are collected and stored in `C:\Program Files\Global Secure Access Client\Logs` by default.
+ - These logs include information about the client machine, the related event logs for the services, and registry values including the traffic forwarding profiles applied.
+- **Client Checker**
+ - Runs a script to test client components, ensuring the client is configured and working as expected.
+- **Connection Diagnostics** provides a live display of client status and connections tunneled by the client to the Global Secure Access service. 
+ - **Summary** tab shows general information about the client configuration including: policy version in use, last policy update date and time, and the ID of the tenant the client is configured to work with.
+ - Hostname acquisition state changes to green when new traffic acquired by FQDN is tunneled successfully based on a match of the destination FQDN in a traffic forwarding profile.
+ - **Flows** show a live list of connections initiated by the end-user device and tunneled by the client to the Global Secure Access edge. Each connection is a new row.
+ - **Timestamp** is the time when the connection was first established.
+ - **Fully Qualified Domain Name (FQDN)** of the destination of the connection. If the decision to tunnel the connection was based on an IP rule in the forwarding policy rather than an FQDN rule, the FQDN column shows N/A.
+ - **Source** port of the end-user device for this connection.
+ - **Destination IP** is the destination of the connection.
+ - **Protocol**: only TCP is currently supported.
+ - **Process** name that initiated the connection.
+ - **Flow active** provides a status of whether the connection is still open.
+ - **Sent data** provides the number of bytes sent by the end-user device over the connection.
+ - **Received data** provides the number of bytes received by the end-user device over the connection.
+ - **Correlation ID** is provided to each connection tunneled by the client. This ID allows tracing of the connection in the client logs (event viewer and ETL file) and the [Global Secure Access traffic logs](how-to-view-traffic-logs.md).
+ - **Flow ID** is the internal ID of the connection used by the client shown in the ETL file.
+ - **Channel name** identifies the traffic forwarding profile to which the connection is tunneled. This decision is taken according to the rules in the forwarding profile.
+ - **HostNameAcquisition** provides a list of hostnames that the client acquired based on the FQDN rules in the forwarding profile. Each hostname is shown in a new row. Future acquisition of the same hostname creates another row if DNS resolves the hostname (FQDN) to a different IP address.
+ - **Timestamp** is the time when the connection was first established.
+ - **FQDN** that is resolved.
+ - **Generated IP address** is an IP address generated by the client for internal purposes. This IP is shown in the **Flows** tab for connections established to the corresponding FQDN.
+ - **Original IP address** is the first IPv4 address in the DNS response when querying the FQDN. If the DNS server that the end-user device points to doesn't return an IPv4 address for the query, the original IP address shows `0.0.0.0`.
+ - **Services** shows the status of the Windows services related to the Global Secure Access Client. Services that are started have a green status icon, services that are stopped show a red status icon. All three Windows services must be started for the client to function.
+ - **Channels** list the traffic forwarding profiles assigned to the client and the state of the connection to the Global Secure Access edge.
+
+### Event logs
+
+Event logs related to the Global Secure Access Client can be found in the Event Viewer under `Applications and Services/Microsoft/Windows/Global Secure Access Client/Operational`. These events provide useful detail regarding the state, policies, and connections made by the client.
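+
+You can also read these events from PowerShell. The channel name below is an assumption inferred from the Event Viewer path above; adjust it to match the channel name shown on your device:
+
+```powershell
+# Show the 20 most recent Global Secure Access Client events.
+# Adjust -LogName if the channel name differs on your system.
+Get-WinEvent -LogName "Microsoft-Windows-Global Secure Access Client/Operational" -MaxEvents 20 |
+    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
+```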
+
+### Disable IPv6 and secure DNS
+
+If you need to disable IPv6 or Secure DNS on the Windows devices you're using for the preview, the following script applies the required changes.
+
+```powershell
+function CreateIfNotExists
+{
+ param($Path)
+ if (-NOT (Test-Path $Path))
+ {
+ New-Item -Path $Path -Force | Out-Null
+ }
+}
+
+$disableBuiltInDNS = 0x00
+
+# Prefer IPv4 over IPv6 with 0x20, disable IPv6 with 0xff, revert to default with 0x00.
+# This change takes effect after reboot.
+$setIpv6Value = 0x20
+Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" -Name "DisabledComponents" -Type DWord -Value $setIpv6Value
+
+# This section disables browser based secure DNS lookup.
+# For the Microsoft Edge browser.
+CreateIfNotExists "HKLM:\SOFTWARE\Policies\Microsoft"
+CreateIfNotExists "HKLM:\SOFTWARE\Policies\Microsoft\Edge"
+
+Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Edge" -Name "DnsOverHttpsMode" -Value "off"
+
+Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Edge" -Name "BuiltInDnsClientEnabled" -Type DWord -Value $disableBuiltInDNS
+
+# For the Google Chrome browser.
+
+CreateIfNotExists "HKLM:\SOFTWARE\Policies\Google"
+CreateIfNotExists "HKLM:\SOFTWARE\Policies\Google\Chrome"
+
+Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Google\Chrome" -Name "DnsOverHttpsMode" -Value "off"
+
+Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Google\Chrome" -Name "BuiltInDnsClientEnabled" -Type DWord -Value $disableBuiltInDNS
+```
+
+### Proxy configuration example
+
+Example proxy PAC file containing exclusions:
+
+```javascript
+function FindProxyForURL(url, host) {       // basic function; do not change
+    if (isPlainHostName(host) ||
+        dnsDomainIs(host, ".contoso.com") ||    // tunneled
+        dnsDomainIs(host, ".fabrikam.com"))     // tunneled
+        return "DIRECT";                        // if true, use a "DIRECT" connection
+    else                                        // for all other destinations
+        return "PROXY 10.1.0.10:8080";          // send the traffic to the proxy
+}
+```
+
+To allow the Global Secure Access Client services to use the proxy, organizations must then create a system environment variable named `grpc_proxy` on end-user machines, with a value that matches the proxy server's configuration, like `http://10.1.0.10:8080`.
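+
+For example, the `grpc_proxy` variable can be set machine-wide from an elevated PowerShell session; the proxy address shown is the illustrative value from the PAC file above:
+
+```powershell
+# Create the grpc_proxy system environment variable so the client services use the proxy.
+# Replace the value with your own proxy server address and port.
+[Environment]::SetEnvironmentVariable("grpc_proxy", "http://10.1.0.10:8080", "Machine")
+
+# Confirm the value (the client services pick it up after a restart).
+[Environment]::GetEnvironmentVariable("grpc_proxy", "Machine")
+```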
+
+### Block QUIC when tunneling Exchange Online traffic
+
+Since UDP traffic isn't supported in the current preview, organizations that plan to tunnel their Exchange Online traffic should disable the QUIC protocol (443 UDP). Administrators can disable this protocol, triggering clients to fall back to HTTPS (443 TCP), with the following Windows Firewall rule:
+
+```powershell
+New-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" -Direction Outbound -Action Block -Protocol UDP -RemoteAddress 13.107.6.152/31,13.107.18.10/31,13.107.128.0/22,23.103.160.0/20,40.96.0.0/13,40.104.0.0/15,52.96.0.0/14,131.253.33.215/32,132.245.0.0/16,150.171.32.0/22,204.79.197.215/32,6.6.0.0/16 -RemotePort 443
+```
+
+This list of IPv4 addresses is based on the [Office 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges#exchange-online) and the IPv4 block used by the Global Secure Access Client.
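+
+To confirm the rule is in place, or to remove it if you later stop tunneling Exchange Online traffic, you can use the matching firewall cmdlets:
+
+```powershell
+# Verify the QUIC-blocking rule exists and is enabled.
+Get-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" |
+    Select-Object DisplayName, Enabled, Direction, Action
+
+# Remove the rule when it's no longer needed.
+Remove-NetFirewallRule -DisplayName "Block QUIC for Exchange Online"
+```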
++
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [enable universal tenant restrictions](how-to-universal-tenant-restrictions.md).
global-secure-access How To List Remote Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-list-remote-networks.md
+
+ Title: How to list remote networks for Global Secure Access (preview)
+description: Learn how to list remote networks for Global Secure Access (preview).
++++ Last updated : 06/01/2023++++
+# How to list remote networks for Global Secure Access (preview)
+
+Reviewing your remote networks is an important part of managing your Global Secure Access (preview) deployment. As your organization grows, you may need to add new remote networks. You can list your existing remote networks using the Microsoft Entra admin center or the Microsoft Graph API.
+
+## Prerequisites
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+## List all remote networks using the Microsoft Entra admin center
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Devices** > **Remote network**.
+
+All remote networks are listed. Select a remote network to view its details.
+
+## List all remote networks using the Microsoft Graph API
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+1. Select `GET` as the HTTP method from the dropdown.
+1. Set the API version to beta.
+1. Enter the following query:
+ ```http
+ GET https://graph.microsoft.com/beta/networkaccess/connectivity/branches
+ ```
+1. Select the **Run query** button to list the remote networks.
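+
+The same query can be scripted with the Microsoft Graph PowerShell SDK. This sketch assumes the `Microsoft.Graph` module is installed, and the permission scope name shown is an assumption; use whichever scope your tenant requires for reading network access settings:
+
+```powershell
+# Connect to Microsoft Graph, then call the beta endpoint that lists remote networks (branches).
+# The scope name below is an assumption; adjust to your tenant's requirements.
+Connect-MgGraph -Scopes "NetworkAccessPolicy.Read.All"
+$branches = Invoke-MgGraphRequest -Method GET `
+    -Uri "https://graph.microsoft.com/beta/networkaccess/connectivity/branches"
+$branches.value | Select-Object id, name
+```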
++
+## Next steps
+- [Create remote networks](how-to-manage-remote-networks.md)
global-secure-access How To Manage Microsoft 365 Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-manage-microsoft-365-profile.md
+
+ Title: How to enable and manage the Microsoft 365 profile
+description: Learn how to enable and manage the Microsoft 365 traffic forwarding profile for Global Secure Access (preview).
++++ Last updated : 07/03/2023+++
+# How to enable and manage the Microsoft 365 traffic forwarding profile
+
+With the Microsoft 365 profile enabled, Microsoft Entra Internet Access acquires the traffic going to all Microsoft 365 services. The **Microsoft 365** profile manages the following policy groups:
+
+- Exchange Online
+- SharePoint Online and OneDrive for Business
+- Microsoft 365 Common and Office Online (only Microsoft Entra ID and Microsoft Graph)
+
+## Prerequisites
+
+To enable the Microsoft 365 traffic forwarding profile for your tenant, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+- To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- Teams is currently not supported as part of the Microsoft 365 Common endpoints. Only Microsoft Entra ID and Microsoft Graph are supported.
+- For details on limitations for the Microsoft 365 traffic profile, see [Windows Client known limitations](how-to-install-windows-client.md#known-limitations).
+
+## Enable the Microsoft 365 traffic profile
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Connect** > **Traffic forwarding**.
+1. Select the checkbox for **Microsoft 365 access profile**.
+
+![Screenshot of the traffic forwarding page with the Microsoft 365 access profile enabled.](media/how-to-manage-microsoft-365-profile/microsoft-365-traffic-profile.png)
+
+## Microsoft 365 traffic policies
+
+To manage the details included in the Microsoft 365 traffic forwarding policy, select the **View** link for **Microsoft 365 traffic policies**.
++
+The policy groups are listed, with a checkbox to indicate if the policy group is enabled. Expand a policy group to view all of the IPs and FQDNs included in the group.
+
+![Screenshot of the Microsoft 365 profile details.](media/how-to-manage-microsoft-365-profile/microsoft-365-profile-details.png)
+
+The policy groups include the following details:
+
+- **Destination type**: FQDN or IP subnet
+- **Destination**: The details of the FQDN or IP subnet
+- **Ports**: TCP or UDP ports that are combined with the IP addresses to form the network endpoint
+- **Protocol**: TCP (Transmission Control Protocol) or UDP (User Datagram Protocol)
+- **Action**: Forward or Bypass
+
+You can choose to bypass certain traffic. Users can still access the site; however, the service doesn't process the traffic. You can bypass traffic to a specific FQDN or IP address, an entire policy group within the profile, or the entire Microsoft 365 profile itself. If you only need to forward some of the Microsoft 365 resources within a policy group, enable the group then change the **Action** in the details accordingly.
+
+The following example shows setting the `*.sharepoint.com` FQDN to **Bypass** so the traffic won't be forwarded to the service.
+
+![Screenshot of the Action dropdown menu.](media/how-to-manage-microsoft-365-profile/microsoft-365-policies-forward-bypass.png)
+
+If the Global Secure Access client isn't able to connect to the service (for example due to an authorization or Conditional Access failure), the service *bypasses* the traffic. Traffic is sent direct-and-local instead of being blocked. In this scenario, you can create a Conditional Access policy for the [compliant network check](how-to-compliant-network.md), to block traffic if the client isn't able to connect to the service.
+
+## Linked Conditional Access policies
+
+[Conditional Access policies](../active-directory/conditional-access/overview.md) are created and applied to the traffic forwarding profile in the Conditional Access area of Microsoft Entra ID. For example, you can create a policy that requires using compliant devices when accessing Microsoft 365 services.
+
+If you see "None" in the **Linked Conditional Access policies** section, there isn't a Conditional Access policy linked to the traffic forwarding profile. To create a Conditional Access policy, see [Universal Conditional Access through Global Secure Access](how-to-target-resource-microsoft-365-profile.md).
+
+### Edit an existing Conditional Access policy
+
+If the traffic forwarding profile has a linked Conditional Access policy, you can view and edit that policy.
+
+1. Select the **View** link for **Linked Conditional Access policies**.
+
+ ![Screenshot of traffic forwarding profiles with Conditional Access link highlighted.](media/how-to-manage-microsoft-365-profile/microsoft-365-conditional-access-policy-link.png)
+
+1. Select a policy from the list. The details of the policy open in Conditional Access.
+
+ ![Screenshot of the applied Conditional Access policies.](media/how-to-manage-microsoft-365-profile/conditional-access-applied-policies.png)
+
+## Microsoft 365 remote network assignments
+
+Traffic profiles can be assigned to remote networks, so that the network traffic is forwarded to Global Secure Access without having to install the client on end user devices. As long as the device is behind the customer premises equipment (CPE), the client isn't required. You must create a remote network before you can add it to the profile. For more information, see [How to create remote networks](how-to-create-remote-networks.md).
+
+**To assign a remote network to the Microsoft 365 profile**:
+
+1. Go to **Microsoft Entra ID** > **Global Secure Access** > **Traffic forwarding**.
+1. Select the **Add assignments** button for the profile.
+ - If you're editing the remote network assignments, select the **Add/edit assignments** button.
+1. Select a remote network from the list and select **Add**.
++
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [install and configure the Global Secure Access Client on end-user devices](how-to-install-windows-client.md).
+
+For more information about traffic forwarding, see the following article:
+
+- [Learn about traffic forwarding profiles](concept-traffic-forwarding.md)
global-secure-access How To Manage Private Access Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-manage-private-access-profile.md
+
+ Title: How to manage the Private access profile
+description: Learn how to manage the Private access traffic forwarding profile for Microsoft Entra Private Access.
++++ Last updated : 06/01/2023+++++
+# How to manage the Private access traffic forwarding profile
+
+The Private Access traffic forwarding profile routes traffic to your private network through the Global Secure Access Client. Enabling this traffic forwarding profile allows remote workers to connect to internal resources without a VPN. With the features of Microsoft Entra Private Access, you can control which private resources to tunnel through the service and apply Conditional Access policies to secure access to those services. Once your configurations are in place, you can view and manage all of those configurations from one place.
+
+## Prerequisites
+
+To enable the Private access traffic forwarding profile for your tenant, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+### Known limitations
+
+- At this time, Private Access traffic can only be acquired with the Global Secure Access Client. Private Access traffic can't be acquired from remote networks.
+- Tunneling traffic to Private Access destinations by IP address is supported only for IP ranges outside of the end-user device local subnet.
+- You must disable DNS over HTTPS (Secure DNS) to tunnel network traffic based on the rules of the fully qualified domain names (FQDNs) in the traffic forwarding profile.
+
+## Enable the Private access traffic forwarding profile
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.
+1. Go to **Global Secure Access** > **Connect** > **Traffic forwarding**.
+1. Select the checkbox for **Private Access profile**.
+
+![Screenshot of the traffic forwarding page with the Private access profile enabled.](media/how-to-manage-private-access-profile/private-access-traffic-profile.png)
+
+## Private Access policies
+
+To enable the Private Access traffic forwarding profile, we recommend you first configure Quick Access. Quick Access includes the IP addresses, IP ranges, and fully qualified domain names (FQDNs) for the private resources you want to include in the policy. For more information, see [Configure Quick Access](how-to-configure-quick-access.md).
+
+You can also configure per-app access to your private resources by creating a Private Access app. Similar to Quick Access, you create a new Enterprise app, which can then be assigned to the Private Access traffic forwarding profile. Quick Access contains the main group of private resources you always want to route through the service. Private Access apps can be enabled and disabled as needed without impacting the FQDNs and IP addresses included in Quick Access.
+
+To manage the details included in the Private access traffic forwarding policy, select the **View** link for **Private access policies**.
++
+Details of your Quick Access and enterprise apps for Private Access are displayed. Select the link for the application to view the details from the Enterprise applications area of Microsoft Entra ID.
+
+![Screenshot of the private access application details.](media/how-to-manage-private-access-profile/private-access-app-details.png)
+
+## Linked Conditional Access policies
+
+[Conditional Access policies](../active-directory/conditional-access/overview.md) are created and applied to the traffic forwarding profile in the Conditional Access area of Microsoft Entra ID. For example, you can create a policy that requires multifactor authentication to access private resources.
+
+If you see "None" in the **Linked Conditional Access policies** section, there isn't a Conditional Access policy linked to the traffic forwarding profile. To create a Conditional Access policy, see [Universal Conditional Access through Global Secure Access](how-to-target-resource-microsoft-365-profile.md).
+
+![Screenshot of the linked Conditional Access policies area of Private Access.](media/how-to-manage-private-access-profile/private-access-conditional-access-policies.png)
+
+### Edit an existing Conditional Access policy
+
+1. Select the **View** link for **Linked Conditional Access policies**.
+1. Select a policy from the list. The details of the policy open in Conditional Access.
+++
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [install and configure the Global Secure Access Client on end-user devices](how-to-install-windows-client.md).
+
+For more information about Private Access, see the following articles:
+- [Learn about traffic forwarding](concept-traffic-forwarding.md)
+- [Configure Quick Access](how-to-configure-quick-access.md)
global-secure-access How To Manage Remote Network Device Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-manage-remote-network-device-links.md
+
+ Title: How to add device links to remote networks for Global Secure Access (preview)
+description: Learn how to add device links to remote networks for Global Secure Access (preview).
++++ Last updated : 06/29/2023++++
+# Add and delete remote networks device links
+
+You can create device links when you create a new remote network or add them after the remote network is created.
+
+This article explains how to add and delete device links for remote networks for Global Secure Access.
+
+## Prerequisites
+
+To configure remote networks, you must have:
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID.
+- Completed the [onboarding process](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks) for remote networks.
+- Created a remote network.
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+## Add a device link using the Microsoft Entra admin center
+
+You can add a device link to a remote network at any time.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Secure Access Administrator.
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote network**.
+1. Select a remote network from the list.
+1. Select **Links** from the menu.
+1. Select the **+ Add a link** button.
+
+**General**
+
+1. Enter the following details:
+ - **Link name**: Name of your CPE.
+ - **Device type**: Choose one of the options from the dropdown list.
+ - **IP address**: Public IP address of your device.
+ - **Peer BGP address**: The border gateway protocol address of the CPE.
+ - **Link ASN**: Provide the autonomous system number of the CPE. For more information, see the **Valid ASNs** section of the [Remote network configurations](reference-remote-network-configurations.md) article.
+ - **Redundancy**: Select either *No redundancy* or *Zone redundancy* for your IPSec tunnel.
+ - **Bandwidth capacity (Mbps)**: Choose the bandwidth for your IPSec tunnel.
+1. Select the **Next** button.
+
+![Screenshot of the general tab of the create device link process.](media/how-to-manage-remote-network-device-links/device-link-general-tab.png)
+
+**Details**
+
+1. **IKEv2** is selected by default. Currently only IKEv2 is supported.
+1. The IPSec/IKE policy is set to **Default** but you can change to **Custom**.
+ - If you select **Custom**, you must use a combination of settings that are supported by Global Secure Access.
+ - The valid configurations you can use are mapped out in the [Remote network valid configurations](reference-remote-network-configurations.md) reference article.
+ - Whether you choose Default or Custom, the IPSec/IKE policy you specify must match the policy on your CPE.
+ - View the [remote network valid configurations](reference-remote-network-configurations.md).
+
+1. Select the **Next** button.
+
+![Screenshot of the custom details for the device link.](media/how-to-manage-remote-network-device-links/device-link-details.png)
+
+**Security**
+
+1. Enter the Preshared key (PSK): `<Enter the secret key. The same secret key must be used on your CPE.>`
+1. Select **Add link**.
+
+## Add a device link using Microsoft Graph API
+
+1. Sign in to the [Graph Explorer](https://aka.ms/ge).
+1. Select `POST` as the HTTP method from the dropdown.
+1. Set the API version to beta.
+1. Enter the following query:
+
+```http
+POST https://graph.microsoft.com/beta/networkaccess/connectivity/branches/BRANCH_ID/deviceLinks
+ {
+ "name": "CPE Link 2",
+ "ipAddress": "20.125.118.220",
+ "version": "1.0.0",
+ "deviceVendor": "Other",
+ "bgpConfiguration": {
+ "ipAddress": "172.16.11.6",
+ "asn": 8888
+ },
+ "tunnelConfiguration": {
+ "@odata.type": "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Default",
+ "preSharedKey": "Detective5OutgrowDiligence"
+ }
+ }
+
+```
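+
+If you prefer scripting, the same request can be sent with the Microsoft Graph PowerShell SDK; `BRANCH_ID` remains a placeholder for your remote network's ID, and the values mirror the placeholders in the request above:
+
+```powershell
+# Build the device link payload and send it to the beta endpoint.
+# BRANCH_ID is a placeholder for your remote network's ID.
+$body = @{
+    name         = "CPE Link 2"
+    ipAddress    = "20.125.118.220"
+    version      = "1.0.0"
+    deviceVendor = "Other"
+    bgpConfiguration = @{ ipAddress = "172.16.11.6"; asn = 8888 }
+    tunnelConfiguration = @{
+        "@odata.type" = "#microsoft.graph.networkaccess.tunnelConfigurationIKEv2Default"
+        preSharedKey  = "Detective5OutgrowDiligence"
+    }
+}
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/beta/networkaccess/connectivity/branches/BRANCH_ID/deviceLinks" `
+    -Body ($body | ConvertTo-Json -Depth 5)
+```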
+
+## Delete device links
+
+If your remote network has device links added, they appear in the **Links** column on the list of remote networks. Select the link from the column to navigate directly to the device link details page.
+
+To delete a device link, navigate to the device link details page and select the **Delete** icon. A confirmation dialog appears. Select **Delete** to confirm the deletion.
+
+![Screenshot of the delete icon for remote network device links.](media/how-to-manage-remote-network-device-links/delete-device-link.png)
+
+## Next steps
+- [List remote networks](how-to-list-remote-networks.md)
global-secure-access How To Manage Remote Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-manage-remote-networks.md
+
+ Title: How to update and delete remote networks for Global Secure Access (preview)
+description: Learn how to update and delete remote networks for Global Secure Access (preview).
++++ Last updated : 06/01/2023+++
+# Manage remote networks
+
+Remote networks connect your users in remote locations to Global Secure Access (preview). Adding, updating, and removing remote networks from your environment are likely common tasks for many organizations.
+
+This article explains how to manage your existing remote networks for Global Secure Access.
+
+## Prerequisites
+
+- A **Global Secure Access Administrator** role in Microsoft Entra ID
+- The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+- To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+### Known limitations
+
+- At this time, remote networks can only be assigned to the Microsoft 365 traffic forwarding profile.
+
+## Update remote networks
+
+To update the details of your remote networks:
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md).
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote networks**.
+1. Select the remote network you need to update.
+ - **Basics**: Select the pencil icon to edit the name of the remote network.
+ - **Links**: Select the trash can icon to delete a remote network device link.
+ - **Traffic profiles**: Enable or disable the available traffic forwarding profile.
+
+### Update remote network details with the Microsoft Graph API
+
+To edit the details of a remote network:
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **PATCH** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Enter the query:
+ ```http
+ PATCH https://graph.microsoft.com/beta/networkaccess/connectivity/branches/8d2b05c5-1e2e-4f1d-ba5a-1a678382ef16
+ {
+ "@odata.context": "#$delta",
+ "name": "ContosoRemoteNetwork2"
+ }
+ ```
+1. Select **Run query** to update the remote network.
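Outside of Graph Explorer, the same update can be issued from any HTTP client. A hedged Python sketch using only the standard library (the branch ID matches the example above; the access token is a placeholder you would obtain from your own auth flow):

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/beta/networkaccess/connectivity/branches"
branch_id = "8d2b05c5-1e2e-4f1d-ba5a-1a678382ef16"  # branch ID from the example above
token = "<access-token>"  # placeholder; acquire via your preferred auth library

# Delta payload renaming the remote network, as in the example query.
payload = {"@odata.context": "#$delta", "name": "ContosoRemoteNetwork2"}

req = urllib.request.Request(
    f"{GRAPH}/{branch_id}",
    data=json.dumps(payload).encode(),
    method="PATCH",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; it isn't executed here.
```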
+
+## Delete a remote network
+
+1. Sign in to the Microsoft Entra admin center at [https://entra.microsoft.com](https://entra.microsoft.com).
+1. Go to **Global Secure Access (preview)** > **Devices** > **Remote networks**.
+1. Select the remote network you need to delete.
+1. Select the **Delete** button.
+1. Select **Delete** from the confirmation message.
+
+![Screenshot of the delete remote network button.](media/how-to-manage-remote-networks/delete-remote-network.png)
+
+### Delete a remote network using the API
+
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **DELETE** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Enter the query:
+ ```http
+ DELETE https://graph.microsoft.com/beta/networkaccess/connectivity/branches/97e2a6ea-c6c4-4bbe-83ca-add9b18b1c6b
+ ```
+1. Select **Run query** to delete the remote network.
+
+## Next steps
+
+- [List remote networks](how-to-list-remote-networks.md)
+
global-secure-access How To Source Ip Restoration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-source-ip-restoration.md
+
+ Title: Enable source IP restoration with the Global Secure Access preview
+description: Learn how to enable source IP restoration to ensure source IP match in downstream resources.
++++ Last updated : 06/09/2023++++++
+# Source IP restoration
+
+With a cloud-based network proxy between users and their resources, the IP address that the resources see doesn't match the actual source IP address. In place of the end-users' source IP, the resource endpoints see the cloud proxy as the source IP address. Customers with these cloud proxy solutions can't use this source IP information.
+
+Source IP restoration in Global Secure Access (preview) provides backward compatibility, allowing Microsoft Entra ID customers to continue using the original user source IP. Administrators can benefit from the following capabilities:
+
+- Continue to enforce Source IP-based location policies across both [Conditional Access](../active-directory/conditional-access/overview.md) and [continuous access evaluation](../active-directory/conditional-access/concept-continuous-access-evaluation.md)
+- [Identity Protection risk detections](../active-directory/identity-protection/concept-identity-protection-risks.md) get a consistent view of original user Source IP address for assessing various risk scores.
+- Original user Source IP is also made available in [Microsoft Entra ID sign-in logs](../active-directory/reports-monitoring/concept-all-sign-ins.md).
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have the following role assignments, depending on the tasks they're performing.
+ * A **Global Secure Access Administrator** role to manage the Global Secure Access preview features
+ * [Conditional Access Administrator](/azure/active-directory/roles/permissions-reference#conditional-access-administrator) or [Security Administrator](/azure/active-directory/roles/permissions-reference#security-administrator) to create and interact with Conditional Access policies and named locations.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+### Known limitations
+
+- When source IP restoration is enabled, you can only see the source IP. The IP address of the Global Secure Access service isn't visible. If you want to see the Global Secure Access service IP address, disable source IP restoration.
+
+## Enable Global Secure Access signaling for Conditional Access
+
+To enable the required setting to allow source IP restoration, an administrator must take the following steps.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Secure Access Administrator.
+1. Browse to **Global Secure Access** > **Session management** > **Adaptive Access**.
+1. Select the toggle to **Enable Global Secure Access signaling in Conditional Access**.
+
+This functionality allows services like Microsoft Graph, Microsoft Entra ID, SharePoint Online, and Exchange Online to see the actual source IP address.
+
+> [!CAUTION]
+> If your organization has active Conditional Access policies based on IP location checks, and you disable Global Secure Access signaling in Conditional Access, you may unintentionally block end users from accessing the resources. If you must disable this feature, first delete any corresponding Conditional Access policies.
+
+## Sign-in log behavior
+
+To see source IP restoration in action, administrators can take the following steps.
+
+1. Sign in to the **Microsoft Entra admin center** as a [Security Reader](/azure/active-directory/roles/permissions-reference#security-reader).
+1. Browse to **Identity** > **Users** > **All users** > select one of your test users > **Sign-in logs**.
+1. With source IP restoration enabled, you see IP addresses that include their actual IP address.
+ - If source IP restoration is disabled, you can't see their actual IP address.
+
+Sign-in log data may take some time to appear. This delay is normal, as some processing must take place.
+
+## Next steps
+
+- [Set up tenant restrictions V2 (Preview)](../active-directory/external-identities/tenant-restrictions-v2.md)
+- [Enable compliant network check with Conditional Access](how-to-compliant-network.md)
global-secure-access How To Target Resource Microsoft 365 Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-target-resource-microsoft-365-profile.md
+
+ Title: How to apply Conditional Access policies to the Microsoft 365 traffic profile
+description: Learn how to apply Conditional Access policies to the Microsoft 365 traffic profile.
++++ Last updated : 07/07/2023++++++
+# Apply Conditional Access policies to the Microsoft 365 traffic profile
+
+With a dedicated traffic forwarding profile for your Microsoft 365 traffic, you can apply Conditional Access policies to all of your Microsoft 365 traffic. With Conditional Access, you can require multifactor authentication and device compliance for accessing Microsoft 365 resources.
+
+This article describes how to apply Conditional Access policies to your Microsoft 365 traffic forwarding profile.
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have one or more of the following role assignments depending on the tasks they're performing.
+ * [Global Secure Access Administrator role](../active-directory/roles/permissions-reference.md)
+ * [Conditional Access Administrator](../active-directory/roles/permissions-reference.md#conditional-access-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) to create and interact with Conditional Access policies.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+* To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+## Create a Conditional Access policy targeting the Microsoft 365 traffic profile
+
+The following example policy targets all users except for your break-glass accounts and guest/external users, requiring multifactor authentication, device compliance, or a hybrid Azure AD joined device when accessing Microsoft 365 traffic.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Conditional Access Administrator or Security Administrator.
+1. Browse to **Identity** > **Protection** > **Conditional Access**.
+1. Select **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**:
+ 1. Select **Users and groups** and choose your organization's [emergency access or break-glass accounts](#user-exclusions).
+ 1. Select **Guest or external users** and select all checkboxes.
+1. Under **Target resources**, select **Network Access (Preview)**.
+ 1. Choose **Microsoft 365 traffic**.
+1. Under **Access controls** > **Grant**.
+ 1. Select **Require multifactor authentication**, **Require device to be marked as compliant**, and **Require hybrid Azure AD joined device**.
+ 1. Under **For multiple controls**, select **Require one of the selected controls**.
+ 1. Select **Select**.
+
+After administrators confirm the policy settings using [report-only mode](../active-directory/conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+### User exclusions
+
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [review the Global Secure Access logs](concept-global-secure-access-logs-monitoring.md).
+
+For more information about traffic forwarding, see the following articles:
+
+- [Learn about traffic forwarding profiles](concept-traffic-forwarding.md)
+- [Manage the Microsoft 365 traffic profile](how-to-manage-microsoft-365-profile.md)
global-secure-access How To Target Resource Private Access Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-target-resource-private-access-apps.md
+
+ Title: How to apply Conditional Access policies to Microsoft Entra Private Access apps
+description: How to apply Conditional Access policies to Microsoft Entra Private Access apps.
++++ Last updated : 07/07/2023++++++
+# Apply Conditional Access policies to Private Access apps
+
+Applying Conditional Access policies to your Microsoft Entra Private Access apps is a powerful way to enforce security policies for your internal, private resources. You can apply Conditional Access policies to your Quick Access and Private Access apps from Global Secure Access (preview).
+
+This article describes how to apply Conditional Access policies to your Quick Access and Private Access apps.
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have one or more of the following role assignments depending on the tasks they're performing.
+ * [Global Secure Access Administrator role](../active-directory/roles/permissions-reference.md)
+ * [Conditional Access Administrator](../active-directory/roles/permissions-reference.md#conditional-access-administrator) or [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) to create and interact with Conditional Access policies.
+* You need to have configured Quick Access or Private Access.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+### Known limitations
+
+- At this time, connecting through the Global Secure Access Client is required to acquire Private Access traffic.
+
+## Conditional Access and Global Secure Access
+
+You can create a Conditional Access policy for your Quick Access or Private Access apps from Global Secure Access. Starting the process from Global Secure Access automatically adds the selected app as the **Target resource** for the policy. All you need to do is configure the policy settings.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Conditional Access Administrator or Security Administrator.
+1. Go to **Global Secure Access (preview)** > **Applications** > **Enterprise applications.**
+1. Select an application from the list.
+
+ ![Screenshot of the Enterprise applications details.](media/how-to-target-resource-private-access-apps/enterprise-apps.png)
+
+1. Select **Conditional Access** from the side menu. Any existing Conditional Access policies appear in a list.
+
+ ![Screenshot of the Conditional Access menu option.](media/how-to-target-resource-private-access-apps/conditional-access-policies.png)
+
+1. Select **Create new policy**. The selected app appears in the **Target resources** details.
+
+ ![Screenshot of the Conditional Access policy with the Quick Access app selected.](media/how-to-target-resource-private-access-apps/quick-access-target-resource.png)
+
+1. Configure the conditions, access controls, and assign users and groups as needed.
+
+You can also apply Conditional Access policies to a group of applications based on custom attributes. To learn more, go to [Filter for applications in Conditional Access policy (Preview)](../active-directory/conditional-access/concept-filter-for-applications.md).
+
+### Assignments and Access controls example
+
+Adjust the following policy details to create a Conditional Access policy requiring multifactor authentication, device compliance, or a hybrid Azure AD joined device for your Quick Access application. The user assignments ensure that your organization's emergency access or break-glass accounts are excluded from the policy.
+
+1. Under **Assignments**, select **Users**:
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](#user-exclusions).
+1. Under **Access controls** > **Grant**:
+ 1. Select **Require multifactor authentication**, **Require device to be marked as compliant**, and **Require hybrid Azure AD joined device**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+
+After administrators confirm the policy settings using [report-only mode](../active-directory/conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+### User exclusions
+
+## Next steps
+
+- [Enable the Private Access traffic forwarding profile](how-to-manage-private-access-profile.md)
+- [Enable source IP restoration](how-to-source-ip-restoration.md)
global-secure-access How To Universal Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-universal-tenant-restrictions.md
+
+ Title: Global Secure Access (preview) and universal tenant restrictions
+description: What are universal tenant restrictions
++++ Last updated : 06/09/2023++++++
+# Universal tenant restrictions
+
+Universal tenant restrictions enhance the functionality of [tenant restrictions v2](https://aka.ms/tenant-restrictions-enforcement) by using Global Secure Access (preview) to tag all traffic, no matter the operating system, browser, or device form factor. They support both client and remote network connectivity. Administrators no longer have to manage proxy server configurations or complex network configurations.
+
+Universal tenant restrictions do this enforcement using Global Secure Access-based policy signaling for both the authentication plane and the data plane. Tenant restrictions v2 enables enterprises to prevent data exfiltration by users who use external tenant identities for Microsoft Entra ID integrated applications like Microsoft Graph, SharePoint Online, and Exchange Online. These technologies work together to prevent data exfiltration universally across all devices and networks.
+
+The following table explains the steps taken at each point in the previous diagram.
+
+| Step | Description |
+| | |
+| **1** | Contoso configures a **tenant restrictions v2** policy in their cross-tenant access settings to block all external accounts and external apps. Contoso enforces the policy using Global Secure Access universal tenant restrictions. |
+| **2** | A user with a Contoso-managed device tries to access a Microsoft Entra ID integrated app with an unsanctioned external identity. |
+| **3** | When the traffic reaches Microsoft's Security Service Edge, an HTTP header is added to the request. The header contains Contoso's tenant ID and the tenant restrictions policy ID. |
+| **4** | *Authentication plane protection:* Microsoft Entra ID uses the header in the authentication request to look up the tenant restrictions policy. Contoso's policy blocks unsanctioned external accounts from accessing external tenants. |
+| **5** | *Data plane protection:* If the user again tries to access an external unsanctioned application by copying an authentication response token they obtained outside of Contoso's network and pasting it into the device, they're blocked. The resource provider checks that the claim in the token and the header in the packet match. Any mismatch in the token and header triggers reauthentication and blocks access. |
+
+Universal tenant restrictions help to prevent data exfiltration across browsers, devices, and networks in the following ways:
+
+- It injects the following attributes into the header of outbound HTTP traffic at the client level in both the authentication control and data path to Microsoft 365 endpoints:
+ - Cloud ID of the device tenant
+ - Tenant ID of the device tenant
+ - Tenant restrictions v2 policy ID of the device tenant
+- It enables Microsoft Entra ID, Microsoft Accounts, and Microsoft 365 applications to interpret this special HTTP header enabling lookup and enforcement of the associated tenant restrictions v2 policy. This lookup enables consistent policy application.
+- Works with all Microsoft Entra ID integrated third-party apps at the auth plane during sign in.
+- Works with Exchange, SharePoint, and Microsoft Graph for data plane protection.
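To illustrate the tagging described above, the injected header pairs the device tenant's ID with its tenant restrictions v2 policy ID. The header name and the `<tenantId>:<policyId>` value format in this sketch follow the tenant restrictions v2 documentation, but treat both as assumptions, and the GUIDs as placeholders:

```python
def tenant_restrictions_header(tenant_id: str, policy_id: str) -> tuple[str, str]:
    """Compose a tenant restrictions v2 header as a (name, value) pair.

    Assumed format: "<tenantId>:<policyId>". Both GUIDs passed in below
    are placeholders, not real tenant or policy IDs.
    """
    return ("sec-Restrict-Tenant-Access-Policy", f"{tenant_id}:{policy_id}")

name, value = tenant_restrictions_header(
    "aaaabbbb-0000-cccc-1111-dddd2222eeee",  # device tenant ID (placeholder)
    "ffff3333-4444-5555-6666-777788889999",  # TRv2 policy ID (placeholder)
)
print(f"{name}: {value}")
```

In practice this header is injected by the client and Security Service Edge, not by your own code; the sketch only shows the shape of the data being signaled.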
+
+## Prerequisites
+
+* Administrators who interact with **Global Secure Access preview** features must have one or more of the following role assignments depending on the tasks they're performing.
+ * The **Global Secure Access Administrator** role to manage the Global Secure Access preview features
+ * [Conditional Access Administrator](/azure/active-directory/roles/permissions-reference#conditional-access-administrator) or [Security Administrator](/azure/active-directory/roles/permissions-reference#security-administrator) to create and interact with Conditional Access policies and named locations.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+
+### Known limitations
+
+- If you have enabled universal tenant restrictions and you are accessing the Microsoft Entra admin center for one of the allow listed tenants, you may see an "Access denied" error. Add the following feature flag to the Microsoft Entra admin center:
+ - `?feature.msaljs=true&exp.msaljsexp=true`
+ - For example, you work for Contoso and you have allow listed Fabrikam as a partner tenant. You may see the error message for the Fabrikam tenant's Microsoft Entra admin center.
+ - If you received the "access denied" error message for this URL: `https://entra.microsoft.com/#home` then add the feature flag as follows: `https://entra.microsoft.com/?feature.msaljs%253Dtrue%2526exp.msaljsexp%253Dtrue#home`
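The percent-encoded flag in the example URL above is the plain flag string URL-encoded twice (`=` becomes `%3D`, then `%253D`). A small sketch showing the encoding, for illustration only:

```python
from urllib.parse import quote

flag = "feature.msaljs=true&exp.msaljsexp=true"
once = quote(flag, safe="")   # '=' -> %3D, '&' -> %26
twice = quote(once, safe="")  # '%' -> %25, yielding %253D / %2526
print(twice)
```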
+
+Outlook uses the QUIC protocol for some communications. The QUIC protocol isn't currently supported. Organizations can use a firewall policy to block QUIC so that clients fall back to a non-QUIC protocol. The following PowerShell command creates a firewall rule that blocks QUIC.
+
+```PowerShell
+New-NetFirewallRule -DisplayName "Block QUIC for Exchange Online" -Direction Outbound -Action Block -Protocol UDP -RemoteAddress 13.107.6.152/31,13.107.18.10/31,13.107.128.0/22,23.103.160.0/20,40.96.0.0/13,40.104.0.0/15,52.96.0.0/14,131.253.33.215/32,132.245.0.0/16,150.171.32.0/22,204.79.197.215/32,6.6.0.0/16 -RemotePort 443
+```
+
+## Configure tenant restrictions v2 policy
+
+Before an organization can use universal tenant restrictions, they must configure both the default tenant restrictions and tenant restrictions for any specific partners.
+
+For more information to configure these policies, see the article [Set up tenant restrictions V2 (Preview)](../active-directory/external-identities/tenant-restrictions-v2.md).
+
+## Enable tagging for tenant restrictions v2
+
+Once you have created the tenant restriction v2 policies, you can utilize Global Secure Access to apply tagging for tenant restrictions v2. An administrator with both the [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md) and [Security Administrator](../active-directory/roles/permissions-reference.md#security-administrator) roles must take the following steps to enable enforcement with Global Secure Access.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Secure Access Administrator.
+1. Browse to **Global Secure Access** > **Global Settings** > **Session Management**.
+1. Select the **Tenant Restrictions** tab.
+1. Select the toggle to **Enable tagging to enforce tenant restrictions on your network**.
+1. Select **Save**.
+
+## Try universal tenant restrictions with SharePoint Online
+
+This capability works the same for Exchange Online and Microsoft Graph. In the following examples, we explain how to see it in action in your own environment.
+
+### Try the authentication path
+
+1. Start with universal tenant restrictions turned off in the Global Secure Access global settings.
+1. Go to SharePoint Online, `https://yourcompanyname.sharepoint.com/`, with an external identity that isn't allow-listed in a tenant restrictions v2 policy.
+ 1. For example, a Fabrikam user in the Fabrikam tenant.
+ 1. The Fabrikam user should be able to access SharePoint Online.
+1. Turn on universal tenant restrictions.
+1. As an end-user, with the Global Secure Access Client running, go to SharePoint Online with an external identity that hasn't been explicitly allow-listed.
+ 1. For example, a Fabrikam user in the Fabrikam tenant.
+ 1. The Fabrikam user should be blocked from accessing SharePoint Online with an error message saying:
+ 1. **Access is blocked, The Contoso IT department has restricted which organizations can be accessed. Contact the Contoso IT department to gain access.**
+
+### Try the data path
+
+1. Start with universal tenant restrictions turned off in the Global Secure Access global settings.
+1. Go to SharePoint Online, `https://yourcompanyname.sharepoint.com/`, with an external identity that isn't allow-listed in a tenant restrictions v2 policy.
+ 1. For example, a Fabrikam user in the Fabrikam tenant.
+ 1. The Fabrikam user should be able to access SharePoint Online.
+1. In the same browser with SharePoint Online open, go to Developer Tools, or press F12 on the keyboard. Start capturing the network logs. You should see Status 200 when everything is working as expected.
+1. Ensure the **Preserve log** option is checked before continuing.
+1. Keep the browser window open with the logs.
+1. Turn on universal tenant restrictions.
+1. As the Fabrikam user, watch the browser with SharePoint Online open; within a few minutes, new logs appear. The browser may also refresh itself based on the requests and responses happening in the back end. If the browser doesn't automatically refresh after a couple of minutes, refresh the browser window with SharePoint Online open.
+ 1. The Fabrikam user sees that their access is now blocked saying:
+ 1. **Access is blocked, The Contoso IT department has restricted which organizations can be accessed. Contact the Contoso IT department to gain access.**
+1. In the logs, look for a **Status** of 302. This row shows universal tenant restrictions being applied to the traffic.
+ 1. In the same response, check the headers for the following information identifying that universal tenant restrictions were applied:
+ 1. `Restrict-Access-Confirm: 1`
+ 1. `x-ms-diagnostics: 2000020;reason="xms_trpid claim was not present but sec-tenant-restriction-access-policy header was in requres";error_category="insufficiant_claims"`
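When reviewing the captured logs, the `x-ms-diagnostics` value can be split into its numeric code and quoted fields. A minimal parser sketch, with the value format inferred from the sample above:

```python
def parse_diagnostics(value: str) -> dict:
    """Split an x-ms-diagnostics value like '2000020;reason="...";error_category="..."'
    into its numeric code and quoted fields. Format inferred from the sample value."""
    code, _, rest = value.partition(";")
    fields = {"code": code}
    for part in rest.split(";"):
        key, _, val = part.partition("=")
        fields[key] = val.strip('"')
    return fields

# Sample value quoted verbatim from the response headers shown above.
sample = ('2000020;reason="xms_trpid claim was not present but '
          'sec-tenant-restriction-access-policy header was in requres";'
          'error_category="insufficiant_claims"')
parsed = parse_diagnostics(sample)
print(parsed["code"], parsed["error_category"])
```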
+
+## Next steps
+
+The next step for getting started with Microsoft Entra Internet Access is to [Enable enhanced Global Secure Access signaling](how-to-source-ip-restoration.md#enable-global-secure-access-signaling-for-conditional-access).
+
+For more information on Conditional Access policies for Global Secure Access (preview), see the following articles:
+
+- [Set up tenant restrictions V2 (Preview)](../active-directory/external-identities/tenant-restrictions-v2.md)
+- [Source IP restoration](how-to-source-ip-restoration.md)
+- [Enable compliant network check with Conditional Access](how-to-compliant-network.md)
global-secure-access How To View Enriched Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-view-enriched-logs.md
+
+ Title: How to use enriched Microsoft 365 logs
+description: Learn how to use enriched Microsoft 365 logs for Global Secure Access (preview).
++++ Last updated : 06/27/2023++++
+# How to use the Global Secure Access (preview) enriched Microsoft 365 logs
+
+With your Microsoft 365 traffic flowing through the Global Secure Access service, you want to gain insights into the performance, experience, and availability of the Microsoft 365 apps your organization uses. The enriched Microsoft 365 logs provide you with the information you need to gain these insights. You can integrate the logs with a third-party security information and event management (SIEM) tool for further analysis.
+
+This article describes the information in the logs and how to export them.
+
+## Prerequisites
+
+To use the enriched logs, you need the following roles and subscriptions:
+
+* A **Global Administrator** role is required to enable the enriched Microsoft 365 logs.
+* The preview requires a Microsoft Entra ID Premium P1 license. If needed, you can [purchase licenses or get trial licenses](https://aka.ms/azureadlicense).
+* To use the Microsoft 365 traffic forwarding profile, a Microsoft 365 E3 license is recommended.
+
+You must configure the endpoint for where you want to route the logs prior to configuring Diagnostic settings. The requirements for each endpoint vary and are described in the [Configure Diagnostic settings](#configure-diagnostic-settings) section.
+
+## What the logs provide
+
+The enriched Microsoft 365 logs provide information about Microsoft 365 workloads, so you can review network diagnostic data, performance data, and security events relevant to Microsoft 365 apps. For example, if access to Microsoft 365 is blocked for a user in your organization, you need visibility into how the user's device is connecting to your network.
+
+These logs provide:
+- Improved latency and predictability
+- Additional information added to original logs
+- Accurate IP address
+
+These logs are a subset of the logs available in the [Microsoft 365 audit logs](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance?view=o365-worldwide&preserve-view=true). The logs are enriched with additional information, such as the user's IP address, device name, and device type. The enriched logs also contain information about the Microsoft 365 app, such as the app name, app ID, and app version.
+
+## How to view the logs
+
+Viewing the enriched Microsoft 365 logs is a two-step process. First, you need to enable the log enrichment from Global Secure Access. Second, you need to configure Microsoft Entra ID Diagnostic settings to route the logs to an endpoint, such as a Log Analytics workspace.
+
+> [!NOTE]
+> At this time, only SharePoint Online logs are available for log enrichment.
+
+### Enable the log enrichment
+
+To enable the Enriched Microsoft 365 logs:
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Administrator.
+1. Go to **Global Secure Access** > **Global settings** > **Logging**.
+1. Select the type of Microsoft 365 logs you want to enable.
+1. Select **Save**.
+
+ :::image type="content" source="media/how-to-view-enriched-logs/enriched-logs-sharepoint.png" alt-text="Screenshot of the Logging area of Global Secure Access." lightbox="media/how-to-view-enriched-logs/enriched-logs-sharepoint-expanded.png":::
+
+The enriched logs may take up to 72 hours to fully integrate with the service.
+
+### Configure Diagnostic settings
+
+To view the enriched Microsoft 365 logs, you must export or stream the logs to an endpoint, such as a Log Analytics workspace or a SIEM tool. The endpoint must be configured before you can configure Diagnostic settings.
+
+* To integrate logs with Log Analytics, you need a **Log Analytics workspace**.
+ - [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+ - [Integrate logs with Log Analytics](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
+* To stream logs to a SIEM tool, you need to create an Azure event hub and an event hub namespace.
+ - [Set up an Event Hubs namespace and an event hub](../event-hubs/event-hubs-create.md).
+ - [Stream logs to an event hub](../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* To archive logs to a storage account, you need an Azure storage account that you have `ListKeys` permissions for.
+ - [Create an Azure storage account](../storage/common/storage-account-create.md).
+ - [Archive logs to a storage account](../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
+
+With your endpoint created, you can configure Diagnostic settings.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Administrator or Security Administrator.
+1. Go to **Microsoft Entra ID** > **Monitoring and health** > **Diagnostic settings**.
+1. Select **Add Diagnostic setting**.
+1. Give your diagnostic setting a name.
+1. Select `EnrichedOffice365AuditLogs`.
+1. Select the **Destination details** for where you'd like to send the logs. Choose any or all of the following destinations. Additional fields appear, depending on your selection.
+
+ * **Send to Log Analytics workspace:** Select the appropriate details from the menus that appear.
+ * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear.
+ * **Stream to an event hub:** Select the appropriate details from the menus that appear.
+ * **Send to partner solution:** Select the appropriate details from the menus that appear.
+
+The following example sends the enriched logs to a Log Analytics workspace, which requires selecting the subscription and Log Analytics workspace from the menus that appear.
+
+## Next steps
+
+- [Explore the Global Secure Access logs and monitoring options](concept-global-secure-access-logs-monitoring.md)
+- [Learn about Global Secure Access audit logs](how-to-access-audit-logs.md)
global-secure-access How To View Traffic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-view-traffic-logs.md
+
+ Title: How to use Global Secure Access (preview) traffic logs
+description: Learn how to use traffic logs for Global Secure Access (preview).
++++ Last updated : 06/27/2023++++
+# How to use the Global Secure Access (preview) traffic logs
+
+Monitoring the traffic for Global Secure Access (preview) is an important activity for ensuring your tenant is configured correctly and that your users are getting the best experience possible. The Global Secure Access traffic logs provide insight into who is accessing what resources, where they're accessing them from, and what action took place.
+
+This article describes how to use the traffic logs for Global Secure Access.
+
+## How the traffic logs work
+
+Viewing traffic logs requires a Reports Reader role in Microsoft Entra ID.
+
+The Global Secure Access logs provide details of your network traffic. To better understand those details and how to analyze them to monitor your environment, it's helpful to look at the three levels of the logs and how they relate to each other.
+
+A user accessing a website represents one *session*. Within that session, there may be multiple *connections*, and within each connection, there may be multiple *transactions*.
+
+- **Session**: A session is identified by the first URL a user accesses. That session could then open many connections, for example a news site that contains multiple ads from several different sites.
+- **Connection**: A connection includes the source and destination IP addresses, source and destination ports, and fully qualified domain name (FQDN). Together, these components make up the connection's five-tuple.
+- **Transaction**: A transaction is a unique request and response pair.
+
+Within each log instance, you can see the connection ID and transaction ID in the details. By using the filters, you can look at all connections and transactions for a single session.
+
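The session, connection, transaction hierarchy above maps naturally onto a grouping operation once the logs are exported. A minimal Python sketch follows; the record shape and field names are illustrative assumptions, not the exact log schema:

```python
from collections import defaultdict

# Hypothetical exported traffic log records; field names are illustrative.
records = [
    {"connectionId": "c1", "transactionId": "t1"},
    {"connectionId": "c1", "transactionId": "t2"},
    {"connectionId": "c2", "transactionId": "t3"},
]

def transactions_per_connection(records):
    """Group transaction IDs under the connection that produced them."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["connectionId"]].append(record["transactionId"])
    return dict(grouped)

grouped = transactions_per_connection(records)
```

Here connection `c1` carries two transactions and `c2` carries one, mirroring how a single session can fan out into many connections and transactions.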
+## How to view the traffic logs
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** using a Reports Reader role.
+1. Go to **Global Secure Access (Preview)** > **Monitor** > **Traffic logs**.
+
+The top of the page displays a summary of all transactions, along with a breakdown for each type of traffic. Select the **Microsoft 365** or **Private access** button to filter the logs by traffic type.
+
+> [!NOTE]
+> At this time, Session ID information is not available in the log details.
+
+### View the log details
+
+Select any log from the list to view the details. These details provide valuable information for filtering the logs and for troubleshooting a scenario. Each detail can be added as a column and used as a filter.
+
+![Screenshot of the traffic log activity details.](media/how-to-view-traffic-logs/traffic-activity-details.png)
+
+### Filter and column options
+
+The traffic logs can provide many details, so only some columns are visible by default. Enable and disable columns based on the analysis or troubleshooting task you're performing, because the logs can be difficult to read with too many columns selected. The column and filter options align with each item in the Activity details.
+
+Select **Columns** from the top of the page to change the columns that are displayed.
+
+![Screenshot of the traffic logs with the columns button highlighted.](media/how-to-view-traffic-logs/traffic-logs-columns-button.png)
+
+To filter the traffic logs to a specific detail, select the **Add filter** button and then enter the detail you want to filter by.
+
+For example, if you want to look at all the logs from a specific connection:
+
+1. Select the log detail and copy the `connectionId` from the Activity details.
+1. Select **Add filter** and choose **Connection ID**.
+1. In the field that appears, paste the `connectionId` and select **Apply**.
+
+ ![Screenshot of the traffic log filter.](media/how-to-view-traffic-logs/traffic-log-filter.png)
+
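The same connection filter can be applied programmatically once the logs are exported to an endpoint. A hedged Python sketch, using illustrative field names rather than the exact export schema:

```python
# Hypothetical exported traffic log rows; field names are illustrative.
logs = [
    {"connectionId": "b3f2", "userPrincipalName": "alice@contoso.com"},
    {"connectionId": "9a1c", "userPrincipalName": "bob@contoso.com"},
    {"connectionId": "b3f2", "userPrincipalName": "alice@contoso.com"},
]

def filter_by_connection(logs, connection_id):
    """Return only the rows belonging to one connection,
    mirroring the Connection ID filter in the portal."""
    return [row for row in logs if row["connectionId"] == connection_id]

matches = filter_by_connection(logs, "b3f2")
```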
+### Troubleshooting scenarios
+
+The following details may be helpful for troubleshooting and analysis:
+
+- If you're interested in the size of the traffic being sent and received, enable the **Sent Bytes** and **Received Bytes** columns. Select a column header to sort the logs by traffic size.
+- If you're reviewing the network activity for a risky user, you can filter the results by user principal name and then review the sites they're accessing.
+- To look at the traffic associated with a specific connection, filter the results by **Connection ID**.
+
+The log details provide valuable information about your network traffic. Not every field is defined here, but the following fields are useful for troubleshooting and analysis:
+
+- **Transaction ID**: Unique identifier representing the request/response pair.
+- **Connection ID**: Unique identifier representing the connection that initiated the log.
+- **Device category**: The type of device the transaction initiated from, either **client** or **remote network**.
+- **Action**: The action taken on the network session. Either **Allowed** or **Denied**.
+
+## Configure Diagnostic settings to export logs
+
+You can export the Global Secure Access traffic logs to an endpoint for further analysis and alerting. This integration is configured in Microsoft Entra ID Diagnostic settings.
+
+1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a Global Administrator or Security Administrator.
+1. Go to **Microsoft Entra ID** > **Monitoring and health** > **Diagnostic settings**.
+1. Select **Add Diagnostic setting**.
+1. Give your diagnostic setting a name.
+1. Select `NetworkAccessTrafficLogs`.
+1. Select the **Destination details** for where you'd like to send the logs. Choose any or all of the following destinations. Additional fields appear, depending on your selection.
+
+ * **Send to Log Analytics workspace:** Select the appropriate details from the menus that appear.
+ * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear.
+ * **Stream to an event hub:** Select the appropriate details from the menus that appear.
+ * **Send to partner solution:** Select the appropriate details from the menus that appear.
++
+## Next steps
+
+- [Learn about the traffic dashboard](concept-traffic-dashboard.md)
+- [View the audit logs for Global Secure Access](how-to-access-audit-logs.md)
+- [View the enriched Microsoft 365 logs](how-to-view-enriched-logs.md)
+
global-secure-access Overview What Is Global Secure Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/overview-what-is-global-secure-access.md
+
+ Title: What is Global Secure Access (preview)?
+description: Learn how Global Secure Access (preview) provides control and visibility to users and devices both inside and outside of a traditional office.
++++ Last updated : 06/23/2023++++
+# What is Global Secure Access (preview)?
+
+The way people work has changed. Instead of working in traditional offices, people now work from nearly anywhere. With applications and data moving to the cloud, an identity-aware, cloud-delivered network perimeter for the modern workforce is needed. This new network security category is called Security Service Edge (SSE).
+
+Microsoft Entra Internet Access and Microsoft Entra Private Access comprise Microsoft's Security Service Edge solution. Global Secure Access (preview) is the unifying term used for both Microsoft Entra Internet Access and Microsoft Entra Private Access. Global Secure Access is the unified location in the Microsoft Entra admin center and is built upon the core principles of Zero Trust to use least privilege, verify explicitly, and assume breach.
+
+![Diagram of the Global Secure Access solution, illustrating how identities and remote networks can connect to Microsoft 365, private, and public resources through the service.](media/overview-what-is-global-secure-access/global-secure-access-diagram.png)
+
+Microsoft Entra Internet Access and Microsoft Entra Private Access - coupled with Microsoft Defender for Cloud Apps, our SaaS-security focused Cloud Access Security Broker (CASB) - are uniquely built as a solution that converges network, identity, and endpoint access controls so you can secure access to any app or resource, from anywhere. With the addition of these Global Secure Access products, Microsoft Entra simplifies access policy management and enables access orchestration for employees, business partners, and digital workloads. You can continuously monitor and adjust user access in real time if permissions or risk level changes.
+
+The Global Secure Access features streamline the roll-out and management of the access control capabilities with a unified portal. These features are delivered from Microsoft's Wide Area Network, spanning 140+ countries and 190+ network edge locations. This private network, which is one of the largest in the world, enables organizations to optimally connect users and devices to public and private resources seamlessly and securely. For a list of the current points of presence, see the [Global Secure Access points of presence](reference-points-of-presence.md) article.
+
+## Microsoft Entra Internet Access
+
+Microsoft Entra Internet Access secures access to Microsoft 365, SaaS, and public internet apps while protecting users, devices, and data against internet threats. Best-in-class security and visibility, along with fast and seamless access to Microsoft 365 apps, are currently available in public preview. Secure access to public internet apps through the identity-centric, device-aware, cloud-delivered Secure Web Gateway (SWG) of Microsoft Entra Internet Access is in private preview.
+
+### Key features
+
+- Prevent stolen tokens from being replayed with the compliant network check in Conditional Access.
+- Apply universal tenant restrictions to prevent data exfiltration to other tenants or personal accounts, including anonymous access.
+- Enrich logs with network and device signals, currently supported for SharePoint Online traffic.
+- Improve the precision of risk assessments on users, locations, and devices.
+- Deploy side-by-side with third-party SSE solutions.
+- Acquire network traffic from the desktop client or from a remote network, such as a branch location.
+
+#### Private preview features
+
+The following new capabilities are available in the private preview of Microsoft Entra Internet Access. To request access to the private preview, complete [the private preview interest form](https://aka.ms/entra-ia-preview).
+
+- Dedicated public internet traffic forwarding profile.
+- Protect user access to the public internet while leveraging Microsoft's cloud-delivered, identity-aware SWG solution.
+- Enable web content filtering to regulate access to websites based on their content categories through secure web gateway.
+- Apply universal Conditional Access policies for all internet destinations, even if not federated with Microsoft Entra ID.
+
+## Microsoft Entra Private Access
+
+Microsoft Entra Private Access provides your users - whether in an office or working remotely - secured access to your private, corporate resources. Microsoft Entra Private Access builds on the capabilities of Microsoft Entra ID App Proxy and extends access to any private resource, port, and protocol.
+
+Remote users can connect to private apps across hybrid and multicloud environments, private networks, and data centers from any device and network without requiring a VPN. The service offers per-app adaptive access based on Conditional Access policies, for more granular security than a VPN.
+
+### Key features
+
+- Quick Access: Zero Trust based access to a range of IP addresses and/or FQDNs without requiring a legacy VPN.
+- Per-app access for TCP apps (UDP support in development).
+- Modernize legacy app authentication with deep Conditional Access integration.
+- Provide a seamless end-user experience by acquiring network traffic from the desktop client and deploying side-by-side with your existing third-party SSE solutions.
++
+## Next steps
+
+- [Get started with Global Secure Access](how-to-get-started-with-global-secure-access.md)
+- [Stay in the loop with the latest Microsoft Entra updates](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/bg-p/Identity)
global-secure-access Reference Points Of Presence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/reference-points-of-presence.md
+
+ Title: Global Secure Access points of presence
+description: Global Secure Access points of presence.
++++ Last updated : 07/03/2023+++
+# Global Secure Access (preview) points of presence
+
+During the preview, Global Secure Access (preview) is available in limited points of presence, with new locations added periodically. The service routes traffic through one of the following nearby locations, so even if you're not in a listed location, you can still access the service.
+
+## Microsoft Entra Internet Access
+
+Tunneling Microsoft 365 traffic, which is part of Microsoft Entra Internet Access, is currently supported in the following locations:
+
+| Europe | North America | South America | Africa | Asia |
+||||||
+| Amsterdam, Netherlands | Columbia, Washington, USA | Rio de Janeiro, Brazil | Johannesburg, South Africa | Dubai, UAE|
+| Berlin, Germany | Des Moines, Iowa, USA | Sao Paulo, Brazil | | |
+| Dublin, Ireland | Manassas, Virginia, USA | | | |
+| Gavle, Sweden | Montreal, Quebec, Canada | | | |
+| London, UK | Phoenix, Arizona, USA | | | |
+| Paris, France | San Antonio, Texas, USA | | | |
+| | San Jose, California, USA | | | |
+| | Toronto, Ontario, Canada | | | |
+
+## Microsoft Entra Private Access
+
+Microsoft Entra Private Access is currently supported in the following locations:
+
+| Europe | North America | South America | Africa |
+|||||
+| Amsterdam, Netherlands |Manassas, Virginia, USA | Rio de Janeiro, Brazil | Johannesburg, South Africa |
+| Berlin, Germany | Montreal, Quebec, Canada | Sao Paulo, Brazil | |
+| Dublin, Ireland |Phoenix, Arizona, USA| | |
+| Gavle, Sweden | San Antonio, Texas, USA | | |
+| London, UK | San Jose, California, USA | | |
+| Paris, France | Toronto, Ontario, Canada | | |
global-secure-access Reference Remote Network Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/reference-remote-network-configurations.md
+
+ Title: Global Secure Access remote network configurations
+description: Global Secure Access configurations for remote network device links.
++++ Last updated : 06/01/2023++++
+# Global Secure Access remote network configurations
+
+Device links are the physical routers that connect your remote networks, such as branch locations, to Global Secure Access (preview). There's a specific set of combinations you must use if you choose the **Custom** option when adding device links. If you choose the **Default** option, you must enter a specific combination of properties on the customer premises equipment (CPE) device.
+
+## Default IPSec/IKE configurations
+
+When you select **Default** as your IPsec/IKE policy when configuring remote network device links in the Microsoft Entra admin center, we expect the following combinations in the tunnel handshake.
+
+*You must specify one of these combinations on your customer premises equipment (CPE).*
+
+### IKE Phase 1 combinations
+
+| Properties | Combination 1 | Combination 2 | Combination 3 | Combination 4 | Combination 5 |
+| | | | | | |
+| IKE encryption | GCMAES256 | GCMAES128 | AES256 | AES128 | AES256 |
+| IKEv2 integrity | SHA384 | SHA256 | SHA384 | SHA256 | SHA256 |
+| DH group | DHGroup24 | DHGroup24 | DHGroup24 | DHGroup24 | DHGroup2 |
+
+### IKE Phase 2 combinations
+
+| Properties | Combination 1 | Combination 2 | Combination 3 |
+| | | | |
+| IPSec encryption | GCMAES256 | GCMAES256 | GCMAES128 |
+| IPSec integrity | GCMAES192 | GCMAES192 | GCMAES128 |
+| PFS Group | None | None | None |
+
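Before configuring a CPE, it can help to sanity-check a proposed handshake against the default combinations above. A small Python sketch; the tuples are transcribed from the tables, and the service performs the authoritative negotiation:

```python
# Default IKE phase 1 combinations from the tables above, as
# (IKE encryption, IKEv2 integrity, DH group) tuples.
PHASE1_DEFAULTS = {
    ("GCMAES256", "SHA384", "DHGroup24"),
    ("GCMAES128", "SHA256", "DHGroup24"),
    ("AES256", "SHA384", "DHGroup24"),
    ("AES128", "SHA256", "DHGroup24"),
    ("AES256", "SHA256", "DHGroup2"),
}

# Default IKE phase 2 combinations as (IPSec encryption, IPSec integrity);
# the PFS group is None for all of them.
PHASE2_DEFAULTS = {
    ("GCMAES256", "GCMAES192"),
    ("GCMAES128", "GCMAES128"),
}

def is_default_handshake(phase1, phase2):
    """Check a proposed (phase 1, phase 2) pair against the default sets."""
    return phase1 in PHASE1_DEFAULTS and phase2 in PHASE2_DEFAULTS

ok = is_default_handshake(("AES256", "SHA384", "DHGroup24"),
                          ("GCMAES128", "GCMAES128"))
```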
+## Custom IPSec/IKE combinations
+
+When you select **Custom** as the IPSec/IKE configuration when configuring remote network device links in the Microsoft Entra admin center, you must use one of the following combinations.
+
+### IKE Phase 1 combinations
+
+There are no limitations for the IKE phase 1 combinations. Any mix of encryption, integrity, and DH group values is valid.
+
+### IKE Phase 2 combinations
+
+The IPSec encryption and integrity configurations are provided in the following table:
+
+| IPSec integrity | IPSec encryption |
+| | |
+| GCMAES128 | GCMAES128 |
+| GCMAES192 | GCMAES192 |
+| GCMAES256 | GCMAES256 |
+| SHA256 | None |
+
+- PFS group: no limitation.
+- SA lifetime: must be greater than 300 seconds.
+
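The custom phase 2 rules above can be expressed as a preflight check. A hedged Python sketch, assuming the GCM algorithms pair with themselves and SHA256 integrity pairs with no encryption:

```python
# Assumed valid (IPSec integrity, IPSec encryption) pairings.
VALID_PHASE2_PAIRS = {
    ("GCMAES128", "GCMAES128"),
    ("GCMAES192", "GCMAES192"),
    ("GCMAES256", "GCMAES256"),
    ("SHA256", "None"),
}

def validate_custom_phase2(integrity, encryption, sa_lifetime_seconds):
    """Check the pairing constraint and the SA lifetime rule (> 300 s)."""
    if (integrity, encryption) not in VALID_PHASE2_PAIRS:
        return False
    return sa_lifetime_seconds > 300
```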
+### Valid autonomous system number (ASN)
+
+You can use any values *except* for the following reserved ASNs:
+
+- Azure reserved ASNs: 12076, 65517, 65518, 65519, 65520, 8076, 8075
+- IANA reserved ASNs: 23456, 64496-64511, 65535-65551, 4294967295
+
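A quick way to screen candidate ASNs against the reserved values above (a sketch only; the service is the final authority):

```python
# Reserved ASNs listed above; a candidate ASN must avoid all of them.
AZURE_RESERVED = {12076, 65517, 65518, 65519, 65520, 8076, 8075}
IANA_RESERVED_SINGLES = {23456, 4294967295}
IANA_RESERVED_RANGES = [(64496, 64511), (65535, 65551)]

def is_valid_asn(asn):
    """Return True when an ASN isn't in any reserved set or range."""
    if asn in AZURE_RESERVED or asn in IANA_RESERVED_SINGLES:
        return False
    return not any(low <= asn <= high for low, high in IANA_RESERVED_RANGES)
```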
+### Valid enums
+
+#### IKE encryption
+
+| Value | Enum |
+| | |
+| AES128 | 0 |
+| AES192 | 1 |
+| AES256 | 2 |
+| GCMAES128 | 3 |
+| GCMAES256 | 4 |
+
+#### IKE integrity
+
+| Value | Enum |
+| | |
+| SHA256 | 0 |
+| SHA384 | 1 |
+| GCMAES128 | 2 |
+| GCMAES256 | 3 |
+
+#### DH group
+
+| Value | Enum |
+| | |
+| DHGroup14 | 0 |
+| DHGroup2048 | 1 |
+| ECP256 | 2 |
+| ECP384 | 3 |
+| DHGroup24 | 4 |
+
+#### IPSec encryption
+
+| Value | Enum |
+| | |
+| GCMAES128 | 0 |
+| GCMAES192 | 1 |
+| GCMAES256 | 2 |
+| None | 3 |
+
+#### IPSec integrity
+
+| Value | Enum |
+| | |
+| GCMAES128 | 0 |
+| GCMAES192 | 1 |
+| GCMAES256 | 2 |
+| SHA256 | 3 |
+
+#### PFS group
+
+| Value | Enum |
+| | |
+| PFS1 | 0 |
+| None | 1 |
+| PFS2 | 2 |
+| PFS2048 | 3 |
+| ECP256 | 4 |
+| ECP384 | 5 |
+| PFSMM | 6 |
+| PFS24 | 7 |
+| PFS14 | 8 |
+
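If you configure these settings through an API or script rather than the admin center, the enum values are what get sent on the wire. A sketch of the name-to-enum mapping for two of the tables above (transcribed here; verify against the current API reference before relying on them):

```python
# Name-to-enum mappings transcribed from the IPSec encryption and
# integrity tables above; the other tables follow the same pattern.
IPSEC_ENCRYPTION = {"GCMAES128": 0, "GCMAES192": 1, "GCMAES256": 2, "None": 3}
IPSEC_INTEGRITY = {"GCMAES128": 0, "GCMAES192": 1, "GCMAES256": 2, "SHA256": 3}

def to_enum(table, name):
    """Translate a friendly algorithm name to its enum value, or raise."""
    try:
        return table[name]
    except KeyError:
        raise ValueError(f"{name!r} is not a valid value") from None
```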
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/overview.md
Title: Overview of Azure Blueprints description: Understand how the Azure Blueprints service enables you to create, define, and deploy artifacts in your Azure environment. Previously updated : 03/08/2022 Last updated : 05/31/2022 # What is Azure Blueprints?
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)] + ## Azure Service Health [!INCLUDE [azure-resource-graph-samples-cat-azure-service-health](../../../../includes/resource-graph/samples/bycat/azure-service-health.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)] + ## ExtendedLocationResources [!INCLUDE [azure-resource-graph-samples-table-extendedlocationresources](../../../../includes/resource-graph/samples/bytable/extendedlocationresources.md)]
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
Title: Azure IoT Edge development environment | Microsoft Docs
+ Title: Azure IoT Edge development environment
description: Learn about the supported systems and first-party development tools that will help you create IoT Edge modules Previously updated : 11/28/2022 Last updated : 07/10/2023
The only supported container engine for IoT Edge devices in production is Moby.
## Development tools
-Both Visual Studio and Visual Studio Code have add-on extensions to help develop IoT Edge solutions. These extensions provide language-specific templates to help create and deploy new IoT Edge scenarios. The Azure IoT Edge extensions for Visual Studio and Visual Studio Code help you code, build, deploy, and debug your IoT Edge solutions. You can create an entire IoT Edge solution that contains multiple modules, and the extensions automatically update a deployment manifest template with each new module addition. The extensions also enable management of IoT devices from within Visual Studio or Visual Studio Code. You can deploy modules to a device, monitor the status, and view messages as they arrive at IoT Hub. Finally, both extensions use the [IoT EdgeHub dev tool](#iot-edgehub-dev-tool) to enable local running and debugging of modules on your development machine.
+The [Azure IoT Edge development tool](#iot-edge-dev-tool) is a command line tool to develop and test IoT Edge modules. You can create new IoT Edge scenarios, build module images, run modules in a simulator, and monitor messages sent to IoT Hub. The *iotedgedev* tool is the recommended tool for developing IoT Edge modules.
-If you prefer to develop with other editors or from the CLI, the Azure IoT Edge dev tool provides commands so that you can develop and test from the command line. You can create new IoT Edge scenarios, build module images, run modules in a simulator, and monitor messages sent to IoT Hub.
+Both Visual Studio and Visual Studio Code have add-on extensions to help develop IoT Edge solutions. These extensions provide language-specific templates to help create and deploy new IoT Edge scenarios. The Azure IoT Edge extensions for Visual Studio and Visual Studio Code help you code, build, deploy, and debug your IoT Edge solutions. You can create an entire IoT Edge solution that contains multiple modules, and the extensions automatically update a deployment manifest template with each new module addition. The extensions also enable management of IoT devices from within Visual Studio or Visual Studio Code. You can deploy modules to a device, monitor the status, and view messages as they arrive at IoT Hub. Finally, both extensions use the IoT EdgeHub dev tool to enable local running and debugging of modules on your development machine.
+
+### IoT Edge dev tool
+
+The Azure IoT Edge dev tool simplifies IoT Edge development with command-line abilities. This tool provides CLI commands to develop, debug, and test modules. The IoT Edge dev tool works with your development system, whether you've manually installed the dependencies on your machine or are using the prebuilt [IoT Edge Dev Container](#iot-edge-dev-container) to run the *iotedgedev* tool in a container.
+
+For more information and to get started, see [IoT Edge dev tool wiki](https://github.com/Azure/iotedgedev/wiki).
### Visual Studio Code extension The Azure IoT Edge extension for Visual Studio Code provides IoT Edge module templates built on programming languages including C, C#, Java, Node.js, and Python. Templates for Azure functions in C# are also included.
+> [!IMPORTANT]
+> The Azure IoT Edge Visual Studio Code extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639). The *iotedgedev* tool is the recommended tool for developing IoT Edge modules.
+ For more information and to download, see [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge). In addition to the IoT Edge extensions, you may find it helpful to install additional extensions for developing. For example, you can use [Docker Support for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker) to manage your images, containers, and registries. Additionally, all the major supported languages have extensions for Visual Studio Code that can help when you're developing modules. The [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension is useful as a companion for the Azure IoT Edge extension.
-#### Prerequisites
-
-The module templates for some languages and services have prerequisites that are necessary to build the project folders on your development machine with Visual Studio Code.
-
-| Module template | Prerequisite |
-| | |
-| Azure Functions | [.NET Core SDK](https://dotnet.microsoft.com/download) |
-| C | [Git](https://git-scm.com/) |
-| C# | [.NET Core SDK](https://dotnet.microsoft.com/download) |
-| Java | <ul><li>[Java SE Development Kit 10](/azure/developer/java/fundamentals/java-support-on-azure) <li> [Set the JAVA_HOME environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) <li> [Maven](https://maven.apache.org/)</ul> |
-| Node.js | <ul><li>[Node.js](https://nodejs.org/) <li> [Yeoman](https://www.npmjs.com/package/yo) <li> [Azure IoT Edge Node.js module generator](https://www.npmjs.com/package/generator-azure-iot-edge-module)</ul> |
-| Python |<ul><li> [Python](https://www.python.org/downloads/) <li> [Pip](https://pip.pypa.io/en/stable/installing/#installation) <li> [Git](https://git-scm.com/) </ul> |
- ### Visual Studio 2017/2019 extension The Azure IoT Edge tools for Visual Studio provide an IoT Edge module template built on C# and C.
-For more information and to download, see [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) or [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools).
-
-### IoT Edge dev tool
-
-The Azure IoT Edge dev tool simplifies IoT Edge development with command-line abilities. This tool provides CLI commands to develop, debug, and test modules. The IoT Edge dev tool works with your development system, whether you've manually installed the dependencies on your machine or are using the IoT Edge dev container.
+> [!IMPORTANT]
+> The Azure IoT Edge Visual Studio extensions are in maintenance mode. The *iotedgedev* tool is the recommended tool for developing IoT Edge modules.
-For more information and to get started, see [IoT Edge dev tool wiki](https://github.com/Azure/iotedgedev/wiki).
+For more information and to download, see [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) or [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools).
## Testing tools
The Azure IoT EdgeHub dev tool provides a local development and debug experience
The IoT EdgeHub dev tool was designed to work in tandem with the Visual Studio and Visual Studio Code extensions, as well as with the IoT Edge dev tool. The dev tool supports inner loop development as well as outer loop testing, so it integrates with other DevOps tools too.
+> [!IMPORTANT]
+> The IoT EdgeHub dev tool is in [maintenance mode](https://github.com/Azure/iotedgehubdev/issues/396). Consider using a [Linux virtual machine with IoT Edge runtime installed](quickstart-linux.md), physical device, or [EFLOW](https://github.com/Azure/iotedge-eflow).
+ For more information and to install, see [Azure IoT EdgeHub dev tool](https://pypi.org/project/iotedgehubdev/). ### IoT Edge dev container
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Once you move into a production scenario, or you want to create a gateway device
One option is to provide your own certificates and manage them manually. However, to avoid the risky and error-prone manual certificate management process, use an EST server whenever possible.
+> [!CAUTION]
+> The common name (CN) of the Edge CA certificate can't match the device hostname parameter defined in the device's configuration file *config.toml* or the device ID registered in IoT Hub.
+ ### Plan for Edge CA renewal When the Edge CA certificate renews, all the certificates it issued like module server certificates are regenerated. To give the modules new server certificates, IoT Edge restarts all modules when Edge CA certificate renews.
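The CN constraint in the caution above can be checked before certificates are deployed. A hedged Python sketch; the case-insensitive comparison and the sample values are assumptions for illustration:

```python
def edge_ca_cn_is_valid(cn, hostname, device_id):
    """The Edge CA certificate CN must differ from both the configured
    hostname and the IoT Hub device ID (compared case-insensitively
    here as a cautious assumption)."""
    cn_lower = cn.lower()
    return cn_lower != hostname.lower() and cn_lower != device_id.lower()
```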
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
description: How Azure IoT Edge uses certificate to validate devices, modules, a
Previously updated : 11/03/2022 Last updated : 07/05/2023
flowchart TB
### Hostname specificity
-The certificate common name **CN = edgegateway.local** is listed at the top of the chain. **edgegateway.local** is the hostname for *EdgeGateway* on the local network (LAN or VNet) where *TempSensor* and *EdgeGateway* are connected. It could be a private IP address such as *192.168.1.23* or a fully-qualified domain name (FQDN) similar to the diagram. The important parts are:
+The certificate common name **CN = edgegateway.local** is listed at the top of the chain. **edgegateway.local** is *edgeHub*'s server certificate common name. **edgegateway.local** is also the hostname for *EdgeGateway* on the local network (LAN or VNet) where *TempSensor* and *EdgeGateway* are connected. It could be a private IP address such as *192.168.1.23* or a fully qualified domain name (FQDN) like the diagram. The *edgeHub server certificate* is generated using the **hostname** parameter defined in the [IoT Edge config.toml file](configure-device.md#hostname). Don't confuse the *edgeHub server certificate* with *Edge CA certificate*. For more information about managing the Edge CA certificate, see [Manage IoT Edge certificates](how-to-manage-device-certificates.md#manage-edge-ca).
-* *TempSensor's* OS could resolve the hostname to reach *EdgeGateway*
-* The hostname is explicitly configured in *EdgeGateway's* `config.toml` as follows:
-
- ```toml
- hostname = 'edgegateway.local'
- ```
-
-The two values must *match exactly*. As in the example, **CN = 'edgegateway.local'** and **hostname = 'edgegateway.local'**.
+When *TempSensor* connects to *EdgeGateway*, *TempSensor* uses the hostname **edgegateway.local** to connect to *EdgeGateway*. *TempSensor* checks the certificate presented by *EdgeGateway* and verifies that the certificate common name is **edgegateway.local**. If the certificate common name is different, *TempSensor* rejects the connection.
> [!NOTE] > For simplicity, the example shows the subject certificate common name (CN) as the property that is validated. In practice, if a certificate has a subject alternative name (SAN), the SAN is validated instead of the CN. Because a SAN can contain multiple values, it generally has both the main domain/hostname for the certificate holder and any alternate domains.
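The matching behavior described above can be sketched as follows. This is an illustrative assumption, not IoT Edge source code: `certificate_matches` and the sample hostnames are hypothetical, and real TLS clients apply full RFC 6125 rules (wildcards, SAN types) rather than a plain string comparison.

```python
# Illustrative sketch only -- not IoT Edge code. It mirrors the check a
# downstream device performs: the hostname used to connect must match the
# certificate's common name (or a SAN entry, when SANs are present).

def certificate_matches(hostname, cert_cn, sans=None):
    """Return True if the presented certificate is valid for `hostname`.

    When the certificate carries subject alternative names (SANs), those
    are checked instead of the common name, mirroring standard TLS behavior.
    """
    names = sans if sans else [cert_cn]
    return hostname.lower() in (n.lower() for n in names)

# TempSensor connects using the hostname configured on EdgeGateway:
print(certificate_matches("edgegateway.local", "edgegateway.local"))   # True
# A mismatched common name causes TempSensor to reject the connection:
print(certificate_matches("edgegateway.local", "another-host.local"))  # False
```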
logic-apps Sap Generate Schemas For Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sap-generate-schemas-for-artifacts.md
The following example creates a bank object using the `CREATE` method. This exam
<STREET xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">ExampleStreetAddress</STREET> <CITY xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc">Redmond</CITY> </BANK_ADDRESS>
- <BANK_COUNTRY>US</BANK_COUNTRY>
+ <BANK_CTRY>US</BANK_CTRY>
<BANK_KEY>123456789</BANK_KEY> </CREATE> ```
The following example gets details for a bank using the bank routing number, whi
```xml <GETDETAIL xmlns="http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011">
- <BANK_COUNTRY>US</BANK_COUNTRY>
- <BANK_KEY>123456789</BANK_KEY>
+ <BANKCOUNTRY>US</BANKCOUNTRY>
+ <BANKKEY>123456789</BANKKEY>
</GETDETAIL> ```
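The corrected `GETDETAIL` request body can be built programmatically; this is a minimal sketch using only the namespace URI and element names shown above (the function name and variables are illustrative, and this is not part of the SAP connector itself):

```python
# Sketch: build the GETDETAIL request body with the standard library.
# Namespace URI and element names come from the example above; everything
# else (function name, variable names) is illustrative.
import xml.etree.ElementTree as ET

NS = "http://Microsoft.LobServices.Sap/2007/03/Bapi/BUS1011"

def build_getdetail(country, bank_key):
    # Children inherit the default namespace declared on GETDETAIL.
    root = ET.Element(f"{{{NS}}}GETDETAIL")
    ET.SubElement(root, f"{{{NS}}}BANKCOUNTRY").text = country
    ET.SubElement(root, f"{{{NS}}}BANKKEY").text = bank_key
    return ET.tostring(root, encoding="unicode")

print(build_getdetail("US", "123456789"))
```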
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
To access data and other resources, a Spark job can use either a user identity p
> [!NOTE] > - To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account used for data input and output) to the identity that will be used for the Spark job submission. > - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network, [configure a managed private endpoint to a storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md). This configuration will help ensure data access.
-> - Both serverless Spark compute and attached Synapse Spark pool do not work in a notebook created in a private link enabled workspace.
## Next steps
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md
ms. Previously updated : 06/15/2023 Last updated : 07/11/2023
If you want to set up an appliance in the public cloud, follow [this article](d
You can use the script to deploy the Azure Migrate appliance on an existing physical or a virtualized server. -- The server that will act as the appliance must be running Windows Server 2016 and meet other requirements for [VMware](migrate-appliance.md#appliancevmware), [Hyper-V](migrate-appliance.md#appliancehyper-v), and [physical servers](migrate-appliance.md#appliancephysical).
+- The server that will act as the appliance must be running Windows Server 2022 and meet other requirements for [VMware](migrate-appliance.md#appliancevmware), [Hyper-V](migrate-appliance.md#appliancehyper-v), and [physical servers](migrate-appliance.md#appliancephysical).
- If you run the script on a server with Azure Migrate appliance already set up, you can choose to clean up the existing configuration and set up a fresh appliance of the desired configuration. When you execute the script, you will get a notification as shown below: :::image type="content" source="./media/deploy-appliance-script/script-reconfigure-appliance.png" alt-text="Screenshot that shows how to reconfigure an appliance.":::
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
### Run the script
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
### Run the script
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up Physical appliance for Azure Government cloud with either public or private endpoint connectivity.
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script.md
ms. Previously updated : 06/15/2023 Last updated : 07/10/2023
You can use the script to deploy the Azure Migrate appliance on an existing serv
Scenario | Requirements |
-VMware | Windows Server 2016, with 32 GB of memory, eight vCPUs, around 80 GB of disk storage.
-Hyper-V | Windows Server 2016, with 16 GB of memory, eight vCPUs, around 80 GB of disk storage.
+VMware | Windows Server 2022, with 32 GB of memory, eight vCPUs, around 80 GB of disk storage.
+Hyper-V | Windows Server 2022, with 16 GB of memory, eight vCPUs, around 80 GB of disk storage.
- The server also needs an external virtual switch. It requires a static or dynamic IP address. - Before you deploy the appliance, review detailed appliance requirements for [VMware](migrate-appliance.md#appliancevmware) and [Hyper-V](migrate-appliance.md#appliancehyper-v).
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud.
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up Hyper-V appliance for either Azure public or Azure Government cloud.
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
Previously updated : 06/15/2023 Last updated : 07/10/2023
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other to deploy an appliance with the desired configuration.
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md
ms. Previously updated : 06/15/2023 Last updated : 07/10/2023
To add a scale-out appliance, follow the steps mentioned below:
### 2. Download the installer for the scale-out appliance
-In **Download Azure Migrate appliance**, click **Download**. You need to download the PowerShell installer script to deploy the scale-out appliance on an existing server running Windows Server 2016 and with the required hardware configuration (32-GB RAM, 8 vCPUs, around 80 GB of disk storage and internet access, either directly or through a proxy).
+In **Download Azure Migrate appliance**, click **Download**. You need to download the PowerShell installer script to deploy the scale-out appliance on an existing server running Windows Server 2022 and with the required hardware configuration (32-GB RAM, 8 vCPUs, around 80 GB of disk storage and internet access, either directly or through a proxy).
:::image type="content" source="./media/how-to-scale-out-for-migration/download-scale-out.png" alt-text="Download script for scale-out appliance":::
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]``` - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ``` > 3. Download the [latest version](https://go.microsoft.com/fwlink/?linkid=2191847) of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
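The CertUtil check above can also be reproduced cross-platform with Python's standard library; this is a sketch in which the file path and the commented-out expected value are placeholders:

```python
# Cross-platform equivalent of the CertUtil -HashFile check: compute the
# SHA-256 digest of the downloaded zip and compare it with the published
# hash value. The file path below is a placeholder.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large installers don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# expected = "967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA"
# if sha256_of("AzureMigrateInstaller.zip") != expected:
#     raise SystemExit("Hash mismatch -- download the installer again.")
```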
### 3. Run the Azure Migrate installer script
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
ms. Previously updated : 06/15/2023 Last updated : 07/10/2023
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud.
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Appliance services** | The appliance has the following **Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances. **Discovery limits** | An appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br>A single appliance can connect to up to 10 vCenter Servers.
-**Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br><br> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
+**Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br><br> Deploy on an existing server running Windows Server 2022 using PowerShell installer script.
+**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
**OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values. **PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2022, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using the OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2022, and meets hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2022.)_
**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br/><br/> **VDDK (agentless migration)** | To use the appliance for agentless migration of servers, the VMware vSphere VDDK must be installed on the appliance server.
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Appliance services** | The appliance has the following **Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances. **Discovery limits** | An appliance can discover up to 5000 servers running in Hyper-V environment.<br> An appliance can connect to up to 300 Hyper-V hosts.
-**Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br><br> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**VHD template** | Zip file that includes a VHD. Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br><br> Download size is 8.91 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days. If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server.
+**Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br><br> Deploy on an existing server running Windows Server 2022 using PowerShell installer script.
+**VHD template** | Zip file that includes a VHD. Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br><br> Download size is 8.91 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days. If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server.
**VHD verification** | [Verify](tutorial-discover-hyper-v.md#verify-security) the VHD template downloaded from project by checking the hash values. **PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2022, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2022, and meets hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2022.)_
**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br/><br/> The Hyper-V host must be running Windows Server 2012 R2 or later.
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Appliance services** | The appliance has the following **Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances.<br> **Discovery limits** | An appliance can discover up to 1000 physical servers.
-**Supported deployment** | Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
+**Supported deployment** | Deploy on an existing server running Windows Server 2022 using PowerShell installer script.
**PowerShell script** | Download the script (AzureMigrateInstaller.ps1) in a zip file from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140334). [Learn more](tutorial-discover-physical.md).<br><br> Download size is 85.8 MB. **Script verification** | [Verify](tutorial-discover-physical.md#verify-security) the PowerShell installer script downloaded from project by checking the hash values.
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2022, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2022, and meets hardware requirements.<br/>_(Currently the deployment of the appliance is only supported on Windows Server 2022.)_
## URL access
migrate Migrate Replication Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md
The replication appliance is deployed when you set up agent-based migration of V
## Appliance requirements
-When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
+When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2022 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
**Component** | **Requirement** |
RAM | 16 GB
Number of disks | Two: The OS disk and the process server cache disk. Free disk space (cache) | 600 GB **Software settings** |
-Operating system | Windows Server 2016 or Windows Server 2012 R2
-License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
+Operating system | Windows Server 2022 or Windows Server 2012 R2
+License | The appliance comes with a Windows Server 2022 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
Operating system locale | English (en-us) TLS | TLS 1.2 should be enabled. .NET Framework | .NET Framework 4.6 or later should be installed on the machine (with strong cryptography enabled).
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| :- | :- | | **Deployment** | The Hyper-V host can be standalone or deployed in a cluster. <br>Azure Migrate replication software (Hyper-V Replication provider) is installed on the Hyper-V hosts.| | **Permissions** | You need administrator permissions on the Hyper-V host. |
-| **Host operating system** | Windows Server 2022, Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2 with latest updates. Note that Server core installation of these operating systems is also supported. |
+| **Host operating system** | Windows Server 2022, Windows Server 2019, or Windows Server 2012 R2 with latest updates. Note that Server core installation of these operating systems is also supported. |
| **Other Software requirements** | .NET Framework 4.7 or later | | **Port access** | Outbound connections on HTTPS port 443 to send VM replication data.
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
To set up discovery and assessment of servers running on Hyper-V, you create a p
| **Support** | **Details** | :- | :- |
-| **Hyper-V host** | The Hyper-V host can be standalone or deployed in a cluster.<br/><br/> The Hyper-V host can run Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2. Server core installations of these operating systems are also supported. <br/>You can't assess servers located on Hyper-V hosts running Windows Server 2012.
+| **Hyper-V host** | The Hyper-V host can be standalone or deployed in a cluster.<br/><br/> The Hyper-V host can run Windows Server 2022, Windows Server 2019, or Windows Server 2012 R2. Server core installations of these operating systems are also supported. <br/>You can't assess servers located on Hyper-V hosts running Windows Server 2012.
| **Permissions** | You need Administrator permissions on the Hyper-V host. <br/> If you don't want to assign Administrator permissions, create a local or domain user account, and add the user account to these groups- Remote Management Users, Hyper-V Administrators, and Performance Monitor Users. | | **PowerShell remoting** | [PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting) must be enabled on each Hyper-V host. | | **Hyper-V Replica** | If you use Hyper-V Replica (or you have multiple servers with the same server identifiers), and you discover both the original and replicated servers using Azure Migrate, the assessment generated by Azure Migrate might not be accurate. |
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
The table summarizes support for physical servers, AWS VMs, and GCP VMs that you
## Replication appliance requirements
-If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as an VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2016, and complies with the support requirements.
+If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as a VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2022, and complies with the support requirements.
- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements). - Install MySQL on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation).
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes VMware vSphere VM support for VMware vSphere VMs you want t
### Appliance requirements (agent-based)
-When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
+When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2022 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements.
- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements) for VMware vSphere. - Install MySQL on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation).
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Support | ASP.NET web apps | Java web apps
Support | Details | **Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple vCenter Servers), discovered per appliance.
-**Windows servers** | Windows Server 2019<br />Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
+**Windows servers** | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
**Linux servers** | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> CentOS 5.1, 5.9, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> Oracle Linux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11 **Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers. **vCenter Server account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs.
migrate Troubleshoot Appliance Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-appliance-diagnostic.md
You can run **Diagnose and solve** at any time from the appliance configuration
**Appliance-specific checks** | Key Vault certificate availability* | Checks if the certificate downloaded from Key Vault during appliance registration is still available on the appliance server. <br> *If not, appliance will auto-resolve by downloading the certificate again, provided the Key Vault is available and accessible*. || Credential store availability | Checks if the Credential store resources on the appliance server have not been moved/deleted/edited. || Replication appliance/ASR components | Checks if the same server has also been used to install any ASR/replication appliance components. *It is currently not supported to install both Azure Migrate and replication appliance (for agent-based migration) on the same server.*
-|| OS license availability | Checks if the evaluation license on the appliance server created from OVA/VHD is still valid. *The Windows Server 2016 evaluation license is valid for 180 days.*
+|| OS license availability | Checks if the evaluation license on the appliance server created from OVA/VHD is still valid. *The Windows Server 2022 evaluation license is valid for 180 days.*
|| CPU & memory utilization | Checks the CPU and memory utilized by the Migrate agents on the appliance server. **checked and reported only if the appliance has already been registered. These checks are run in the context of the current Azure user logged in the appliance*.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
ms. Previously updated : 06/29/2023 Last updated : 07/10/2023 #Customer intent: As a server admin I want to discover my AWS instances.
Check that the zipped file is secure, before you deploy it.
**Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
- For Azure Government: **Scenario** | **Download*** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
ms. Previously updated : 06/29/2023 Last updated : 07/10/2023 #Customer intent: As a server admin I want to discover my GCP instances.
Check that the zipped file is secure before you deploy it.
**Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
- For Azure Government: **Scenario** | **Download** | **Hash value** | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
### 3. Run the Azure Migrate installer script
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 06/29/2023 Last updated : 07/10/2023 #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
Check that the zipped file is secure, before you deploy it.
**Scenario*** | **Download** | **SHA256** | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
### 3. Create an appliance
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 06/29/2023 Last updated : 07/10/2023 #Customer intent: As a server admin, I want to discover my on-premises server inventory.
Check that the zipped file is secure, before you deploy it.
**Download** | **Hash value** |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
> [!NOTE] > The same script can be used to set up the physical appliance for either the Azure public or Azure Government cloud, with public or private endpoint connectivity.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 06/29/2023 Last updated : 07/10/2023 #Customer intent: As a VMware admin, I want to discover my on-premises servers running in a VMware environment.
Before you deploy the OVA file, verify that the file is secure:
**Algorithm** | **Download** | **SHA256** | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7134EF5B61D3560A102DF4814CB91C95E44EAE9677AAF1CC68AE0A04A6DBD613
+ VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 967FC3B8A5C467C303D86C8889EB4E0D4A8A7798865CBFBDF23E425D4EE425CA
#### Create the appliance server
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
Previously updated : 06/22/2023 Last updated : 07/10/2023 #Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
Azure allows and denies network traffic to and from a virtual machine based on i
In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. + If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
To set up your connector, see [Azure Cosmos DB Sink Connector for Confluent Clou
## Next steps
-For help with troubleshooting, see [Troubleshooting Apache Kafka for Confluent Cloud solutions](troubleshoot.md).
+- For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create.md
-# QuickStart: Get started with Apache Kafka for Confluent Cloud - Azure portal
+# QuickStart: Get started with Apache Kafka on Confluent Cloud - Azure portal
-In this quickstart, you'll use the Azure portal to create an instance of Apache Kafka for Confluent Cloud.
+In this quickstart, you'll use the Azure portal to create an instance of Apache Kafka on Confluent Cloud.
## Prerequisites
In this quickstart, you'll use the Azure portal to create an instance of Apache
## Find offer
-Use the Azure portal to find the Apache Kafka for Confluent Cloud application.
+Use the Azure portal to find the Apache Kafka on Confluent Cloud application.
1. In a web browser, go to the [Azure portal](https://portal.azure.com/) and sign in.
After you've selected the offer for Apache Kafka on Confluent Cloud, you're read
| **Subscription** | From the drop-down menu, select the Azure subscription to deploy to. You must have _Owner_ or _Contributor_ access. | | **Resource group** | Specify whether you want to create a new resource group or use an existing resource group. A resource group is a container that holds related resources for an Azure solution. For more information, see [Azure Resource Group overview](../../azure-resource-manager/management/overview.md). | | **Confluent organization name** | To create a new Confluent organization, select **Create a new organization** and provide a name for the Confluent organization. To link to an existing Confluent organization, select the **Link Subscription to an existing organization** option. Sign in to your Confluent account, and select the existing organization. |
- | **Region** | From the drop-down menu, select one of these regions: <br/><br/> Australia East, Canada Central, Central US, East US, East US 2, France Central, North Europe, Southeast Asia, UK South, West Central US, West Europe, West US 2 |
+ | **Region** | From the drop-down menu, select one of these regions: Australia East, Canada Central, Central US, East US, East US 2, France Central, North Europe, Southeast Asia, UK South, West Central US, West Europe, West US 2 |
| **Plan** | Select **Pay as you go** or **Commitment**. | | **Billing term** | Prefilled based on the selected billing plan. | | **Price** | Prefilled based on the selected Confluent plan. |
After you've selected the offer for Apache Kafka on Confluent Cloud, you're read
## Next steps
-> [!div class="nextstepaction"]
-> [Manage the Confluent Cloud resource](manage.md)
+ > [!div class="nextstepaction"]
+ > [Manage the Confluent Cloud resource](manage.md)
+
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/get-support.md
To submit a request from your resource, follow these steps:
## Next steps
-For help with troubleshooting, see [Troubleshooting Apache Kafka for Confluent Cloud solutions](troubleshoot.md).
+- For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
You're billed for prorated usage up to the time of cluster deletion. After your
## Next steps
-For help with troubleshooting, see [Troubleshooting Apache Kafka for Confluent Cloud solutions](troubleshoot.md).
+- For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).
-If you need to contact support, see [Get support for Confluent Cloud resource](get-support.md).
+- If you need to contact support, see [Get support for Confluent Cloud resource](get-support.md).
+
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/overview.md
-# What is Apache Kafka for Confluent Cloud?
+# What is Apache Kafka on Confluent Cloud - Azure Native ISV Service?
-Apache Kafka for Confluent Cloud is an Azure Marketplace offering that provides Apache Kafka as a service. It's fully managed so you can focus on building your applications rather than managing the clusters.
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Confluent.
+
+You can find Apache Kafka on Confluent Cloud - Azure Native ISV Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview).
+
+Apache Kafka on Confluent Cloud is an Azure Marketplace offering that provides Apache Kafka as a service. It's fully managed so you can focus on building your applications rather than managing the clusters.
To reduce the burden of cross-platform management, Microsoft partnered with Confluent Cloud to build an integrated provisioning layer from Azure to Confluent Cloud. It provides a consolidated experience for using Confluent Cloud on Azure. You can easily integrate and manage Confluent Cloud with your Azure applications.
To learn more, see Confluent blog articles about Azure services that integrate w
## Next steps
-To create an instance of Apache Kafka for Confluent Cloud, see [QuickStart: Get started with Confluent Cloud on Azure](create.md).
+- To create an instance of Apache Kafka on Confluent Cloud, see [QuickStart: Get started with Confluent Cloud on Azure](create.md).
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/troubleshoot.md
-# Troubleshooting Apache Kafka for Confluent Cloud solutions
+# Troubleshooting Apache Kafka on Confluent Cloud solutions
-This document contains information about troubleshooting your solutions that use Apache Kafka for Confluent Cloud.
+This document contains information about troubleshooting your solutions that use Apache Kafka on Confluent Cloud.
If you don't find an answer or can't resolve a problem, [create a request through the Azure portal](get-support.md) or contact [Confluent support](https://support.confluent.io).
If the problem persists, contact [Confluent support](https://support.confluent.i
## Next steps
-Learn about [managing your instance](manage.md) of Apache Kafka for Confluent Cloud.
+- Learn about [managing your instance](manage.md) of Apache Kafka on Confluent Cloud.
+- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Use the Azure portal to find Azure Native Dynatrace Service application.
## Next steps - [Manage the Dynatrace resource](dynatrace-how-to-manage.md)-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
To use the Security Assertion Markup Language (SAML) based single sign-on (SSO)
## Next steps - [Quickstart: Create a new Dynatrace environment](dynatrace-create.md)-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
If more than one Dynatrace resource is mapped to the Dynatrace environment using
## Next steps - For help with troubleshooting, see [Troubleshooting Dynatrace integration with Azure](dynatrace-troubleshoot.md).-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
When you've finished adding tags, select **Next: Review+Create.**
## Next steps - [Manage the Dynatrace resource](dynatrace-how-to-manage.md)-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Last updated 02/02/2023
Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Dynatrace.
-You can find Azure Native Dynatrace Service in the [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or get it on [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+You can find Azure Native Dynatrace Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview).
Dynatrace is a monitoring solution that provides deep cloud observability, advanced AIOps, and continuous runtime application security capabilities in Azure.
For more help using Azure Native Dynatrace Service, visit the [Dynatrace](https:
## Next steps - To create an instance of Dynatrace, see [QuickStart: Get started with Dynatrace](dynatrace-create.md).-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
This document contains information about troubleshooting your solutions that use
## Next steps - Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace.-- Get started with Azure Native Dynatrace Service on [Azure Portal](https://aka.ms/partners/Dynatrace/portal) or [Azure Marketplace](https://aka.ms/partners/Dynatrace/AMPOffer).
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
After you've selected the offer for Elastic, you're ready to set up the applicat
## Next steps - [Manage the Elastic resource](manage.md)
+- Get started with Elastic Cloud (Elasticsearch) - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/manage.md
When the Elastic resource is deleted, logs are no longer sent to Elastic. All bi
## Next steps - For help with troubleshooting, see [Troubleshooting Elastic integration with Azure](troubleshoot.md).
+- Get started with Elastic Cloud (Elasticsearch) - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/overview.md
# What is Elastic Cloud (Elasticsearch) - An Azure Native ISV Service?
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Elastic.
+
+You can find Elastic Cloud (Elasticsearch) - An Azure Native ISV Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview).
+ This article describes the Elastic software as a service (SaaS) application that is available through the Azure Marketplace. The offering enables deeper integration of the Elastic service with Azure. Elastic's Cloud-Native Observability Platform centralizes log, metric, and tracing analytics in one place. You can more easily monitor the health and performance of your Azure environment. This information helps you troubleshoot your services more quickly.
For more help with using the Elastic service, see the [Elastic documentation](ht
## Next steps
-To create an instance of Elastic, see [QuickStart: Get started with Elastic](create.md).
+- To create an instance of Elastic, see [QuickStart: Get started with Elastic](create.md).
+- Get started with Elastic Cloud (Elasticsearch) - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
In the Elastic site, open a support request.
## Next steps - Learn about [managing your instance](manage.md) of Elastic
+- Get started with Elastic Cloud (Elasticsearch) - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
partner-solutions New Relic Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md
You can also skip this step and go directly to the **Review and Create** tab.
- [Manage the New Relic resource](new-relic-how-to-manage.md) - [Setting up your New Relic account config](https://docs.newrelic.com/docs/infrastructure/microsoft-azure-integrations/get-started/azure-native/#view-your-data-in-new-relic)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-configure-prereqs.md
To set up New Relic on Azure, you need to register the `NewRelic.Observability`
- [Quickstart: Get started with New Relic](new-relic-create.md) - [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
If you map more than one New Relic resource to the New Relic account by using th
- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md) - [Quickstart: Get started with New Relic](new-relic-create.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-link-to-existing.md
Your next step is to configure metrics and logs on the **Metrics + Logs** tab. W
- [Manage the New Relic resource](new-relic-how-to-manage.md) - [Quickstart: Get started with New Relic](new-relic-create.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-overview.md
Last updated 06/23/2023
# What is Azure Native New Relic Service?
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and New Relic.
+
+You can find Azure Native New Relic Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview).
+ New Relic is a full-stack observability platform that enables a single source of truth for application performance, infrastructure monitoring, log management, error tracking, real-user monitoring, and more. Combined with the Azure platform, use Azure Native New Relic Service to help monitor, troubleshoot, and optimize Azure services and applications. Azure Native New Relic Service in Marketplace enables you to create and manage New Relic accounts by using the Azure portal with a fully integrated experience. Integration with Azure enables you to use New Relic as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement and moving all the way to configuration and management.
For more help with using Azure Native New Relic Service, see the New Relic docum
- [Quickstart: Get started with New Relic](new-relic-create.md) - [Quickstart: Link to an existing New Relic account](new-relic-link-to-existing.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md
New Relic manages the APIs for creating and managing resources, and for the stor
## Next steps - [Manage Azure Native New Relic Service](new-relic-how-to-manage.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
-# QuickStart: Get started with NGINXaaS
+# QuickStart: Get started with NGINXaaS - An Azure Native ISV Service
In this quickstart, you'll use the Azure Marketplace to find and create an instance of **NGINXaaS**.
You can specify custom tags for the new NGINXaaS resource in Azure by adding cus
## Next steps - [Manage the NGINXaaS resource](nginx-manage.md)
+- Get started with NGINXaaS - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview)
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
Enable CI/CD deployments via GitHub Actions integrations.
## Next steps
-For help with troubleshooting, see [Troubleshooting NGINXaaS integration with Azure](nginx-troubleshoot.md).
+- For help with troubleshooting, see [Troubleshooting NGINXaaS integration with Azure](nginx-troubleshoot.md).
+- Get started with NGINXaaS - An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview)
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
Last updated 01/18/2023
-# What is NGINXaaS?
+# What is NGINXaaS - An Azure Native ISV Service?
+
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and F5.
+
+You can find NGINXaaS - An Azure Native ISV Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview).
In this article, you learn how to enable deeper integration of the NGINXaaS service with Azure.
The NGINXaaS integration can only be set up by users who have Owner access on th
## Next steps
-To create an instance of NGINXaaS, see [QuickStart: Get started with NGINXaaS](nginx-create.md).
+- To create an instance of NGINXaaS – An Azure Native ISV Service, see [QuickStart: Get started with NGINXaaS](nginx-create.md).
+- Get started with NGINXaaS – An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview)
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
The NGINXaaS integration can only be set up by users who have Owner access on th
## Next steps
-Learn about [managing your instance](nginx-manage.md) of NGINXaaS.
+- Learn about [managing your instance](nginx-manage.md) of NGINXaaS.
+- Get started with NGINXaaS – An Azure Native ISV Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NGINX.NGINXPLUS%2FnginxDeployments)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/f5-networks.f5-nginx-for-azure?tab=Overview)
partner-solutions Qumulo Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md
In this quickstart, you create an instance of Azure Native Qumulo Scalable File
Only virtual networks in the specified region with subnets delegated to `Qumulo.Storage/fileSystems` appear on this page. If an expected virtual network is not listed, verify that it's in the chosen region and that the virtual network includes a subnet delegated to Qumulo.
-1. Select **Review + Create** to create the resource.
+1. Select **Review + Create** to create the resource.
+
+## Next steps
+
+- Get started with Azure Native Qumulo Scalable File Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
partner-solutions Qumulo How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md
To delete your Qumulo file system, you delete your deployment of Azure Native Qu
## Next steps
- [Quickstart: Get started with Azure Native Qumulo Scalable File Service](qumulo-create.md)
- [Troubleshoot Azure Native Qumulo Scalable File Service](qumulo-troubleshoot.md)
+- Get started with Azure Native Qumulo Scalable File Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
partner-solutions Qumulo Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-overview.md
Last updated 01/18/2023
# What is Azure Native Qumulo Scalable File Service?
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Qumulo.
+
+You can find Azure Native Qumulo Scalable File Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview).
+ Qumulo is an industry leader in distributed file systems and object storage. Qumulo provides a scalable, performant, and simple-to-use cloud-native file system that can support a wide variety of data workloads. The file system uses standard file-sharing protocols, such as NFS, SMB, FTP, and S3. The Azure Native Qumulo Scalable File Service offering on Azure Marketplace enables you to create and manage a Qumulo file system by using the Azure portal with a seamlessly integrated experience. You can also create and manage Qumulo resources by using the Azure portal through the resource provider `Qumulo.Storage/fileSystem`. Qumulo manages the service while giving you full admin rights to configure details like file system shares, exports, quotas, snapshots, and Active Directory users.
Azure Native Qumulo Scalable File Service provides:
- For more help with using Azure Native Qumulo Scalable File Service, see the [Qumulo documentation](https://docs.qumulo.com/cloud-guide/azure/). - To create an instance of the service, see the [quickstart](qumulo-create.md).
+- Get started with Azure Native Qumulo Scalable File Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
partner-solutions Qumulo Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-troubleshoot.md
For successful creation of a Qumulo service, custom role-based access control (R
## Next steps -- [Manage Azure Native Qumulo Scalable File Service](qumulo-how-to-manage.md)
+- [Manage Azure Native Qumulo Scalable File Service](qumulo-how-to-manage.md)
+- Get started with Azure Native Qumulo Scalable File Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
When you create a support ticket from **Help + support** or **Support + troublesho
The **Service Health** page in the Azure portal contains information about Azure data center status globally. Search for "service health" in the search bar in the Azure portal, then view Service issues in the Active events category. You can also view the health of individual resources in the **Resource health** page of any resource under the Help menu. A sample screenshot of the Service Health page follows, with information about an active service issue in Southeast Asia. :::image type="content" source="./media/business-continuity/service-health-service-issues-example-map.png" alt-text=" Screenshot showing service outage in Service Health portal.":::
+* **Email notification**
+If you've set up alerts, an email notification will arrive when a service outage impacts your subscription and resource. The emails arrive from "azure-noreply@microsoft.com". The body of the email begins with "The activity log alert ... was triggered by a service issue for the Azure subscription...". For more information on service health alerts, see [Receive activity log alerts on Azure service notifications using Azure portal](/azure/service-health/alerts-activity-log-service-notifications-portal).
+ > [!IMPORTANT]
+ > As the name implies, temporary tablespaces in PostgreSQL are used for temporary objects, as well as for other internal database operations, such as sorting. Therefore, we don't recommend creating user schema objects in the temporary tablespace, because we don't guarantee durability of such objects after server restarts, HA failovers, and so on.
Below are some unplanned failure scenarios and the recovery process.
| <b> Availability zone failure | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. A new flexible server will be deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. | Flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). | | <b> Region failure | If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross region read replicas. In the event of region failure you can perform disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. | Same process. | +
+### Configure your database after recovery from regional failure
+
+* If you're using geo-restore or a geo-replica to recover from an outage, make sure that connectivity to the new server is properly configured so that normal application function can resume. You can follow the [Post-restore tasks](concepts-backup-restore.md#geo-redundant-backup-and-restore).
+* If you've previously set up a diagnostic setting on the original server, make sure to do the same on the target server if necessary as explained in [Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md).
+* Set up telemetry alerts: make sure your existing alert rule settings are updated to map to the new server. For more information about alert rules, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](howto-alert-on-metrics.md).
++ > [!IMPORTANT]
-> Deleted servers **cannot** be restored. If you delete the server, all databases that belong to the server are also deleted and cannot be recovered. Use [Azure resource lock](../../azure-resource-manager/management/lock-resources.md) to help prevent accidental deletion of your server.
+> Deleted servers can be restored. If you delete the server, you can follow our guidance [Restore a dropped Azure Database for PostgreSQL Flexible server](how-to-restore-dropped-server.md) to recover. Use Azure resource lock to help prevent accidental deletion of your server.
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| East US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | France Central | :heavy_check_mark: | :heavy_check_mark:| :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Germany West Central | :heavy_check_mark: (v3/v4 only) | :x: $ | :x: $ | :heavy_check_mark: |
+| Germany West Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only)| :x: | :heavy_check_mark: | :x: |
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-audit.md
To use the [portal](https://portal.azure.com):
:::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT."::: 1. Restart the server to apply the change.
- 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
-
- ```SQL
- show shared_preload_libraries;
- ```
- You should see `pgaudit` in the query result that will return `shared_preload_libraries`.
1. Connect to your server by using a client like psql, and enable the pgAudit extension:
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
You should get a status HTTP 201 success.
**Key points:**
-+ The "fields" collection includes a required key field, a category field, and pairs of fields (such as "title", "titleVector") for keyword and vector search. Colocating vector and nonvector fields in the same index enables hybrid queries. For instance, you can combine filters, keyword search with semantic ranking, and vectors into a single query operation.
++ The "fields" collection includes a required key field, a category field, and pairs of fields (such as "title", "titleVector") for keyword and vector search. Colocating vector and non-vector fields in the same index enables hybrid queries. For instance, you can combine filters, keyword search with semantic ranking, and vectors into a single query operation.
++ Vector fields must be `"type": "Collection(Edm.Single)"` with `"dimensions"` and `"vectorSearchConfiguration"` properties. See [this article](/rest/api/searchservice/preview-api/create-or-update-index) for property descriptions.
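The field shape described in those key points can be sketched as an index-definition fragment. This is an illustrative sketch only: the field names, the `my-vector-config` configuration name, and the 1536-dimension value are assumptions, not values taken from this article.

```python
# Hypothetical index field definitions pairing a keyword field with a vector
# field, using the properties named above. Names and the dimension count are
# illustrative assumptions; dimensions must match your embedding model.
fields = [
    {"name": "id", "type": "Edm.String", "key": True},
    {"name": "title", "type": "Edm.String", "searchable": True},
    {
        "name": "titleVector",
        "type": "Collection(Edm.Single)",   # vector field type
        "searchable": True,
        "dimensions": 1536,                 # length of each embedding vector
        "vectorSearchConfiguration": "my-vector-config",
    },
]
```

Keeping the "title"/"titleVector" pair in one index is what makes a single request able to run keyword and vector search together.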
The response includes 5 results, and each result provides a search score, title,
### Single vector search with filter
-You can add filters, but the filters are applied to the nonvector content in your index. In this example, the filter applies to the "category" field.
+You can add filters, but the filters are applied to the non-vector content in your index. In this example, the filter applies to the "category" field.
The response is 10 Azure services, with a search score, title, and category for each one. Notice the `select` property. It's used to select specific fields for the response.
api-key: {{admin-api-key}}
### Cross-field vector search
-A cross-field vector query sends a single query across multiple vector fields in your search index. This query example looks for similarity in both `titleVector` and `contentVector`:
+A cross-field vector query sends a single query across multiple vector fields in your search index. This query example looks for similarity in both "titleVector" and "contentVector" and displays scores using [Reciprocal Rank Fusion (RRF)](vector-search-ranking.md#reciprocal-rank-fusion-rrf-for-hybrid-queries):
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
api-key: {{admin-api-key}}
### Multi-query vector search
-Multi-query vector search sends multiple queries across multiple vector fields in your search index. This query example looks for similarity in both `titleVector` and `contentVector`, but sends in two different query embeddings respectively. This scenario is ideal for multi-modal use cases where you want to search over a `textVector` field and an `imageVector` field. You can also use this scenario if you have different embedding models with different dimensions in your search index.
+Multi-query vector search sends multiple queries across multiple vector fields in your search index. This query example looks for similarity in both `titleVector` and `contentVector`, but sends in two different query embeddings respectively. This scenario is ideal for multi-modal use cases where you want to search over a `textVector` field and an `imageVector` field. You can also use this scenario if you have different embedding models with different dimensions in your search index. This also displays scores using [Reciprocal Rank Fusion (RRF)](vector-search-ranking.md#reciprocal-rank-fusion-rrf-for-hybrid-queries).
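Reciprocal Rank Fusion, which both of the preceding query styles use to combine per-field result lists into one score, can be illustrated with a small sketch. The constant `k = 60` is a common choice in the RRF literature, not necessarily the value the service uses.

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists: each document scores sum(1 / (k + rank))
    over the lists it appears in; a higher combined score ranks first."""
    scores = {}
    for ranked_docs in rankings:
        for rank, doc in enumerate(ranked_docs, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fusing two per-field rankings: a document ranked well in both lists
# rises above one ranked first in only a single list.
print(rrf_fuse([["a", "b", "c"], ["b", "c", "a"]]))  # ['b', 'a', 'c']
```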
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
api-key: {{admin-api-key}}
### Semantic hybrid search
-In Cognitive Search, semantic search and vector search are separate features, but you can use them together as described in this example. Semantic search adds language representation models that rerank search results based on query intent. This feature is optional and billable for the transactions against the language models.
- Assuming that you've [enabled semantic search](semantic-search-overview.md#enable-semantic-search) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, caption, answers, and spell check. ```http
api-key: {{admin-api-key}}
### Semantic hybrid search with filter
-Here's the last query in the collection. It's the same hybrid query as the previous example, but with a filter.
+Here's the last query in the collection. It's the same semantic hybrid query as the previous example, but with a filter.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
Azure Cognitive Search is a billable resource. If it's no longer needed, delete
## Next steps
As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) or [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet).
search Vector Search How To Chunk Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-chunk-documents.md
Last updated 06/29/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
-This article describes several approaches for chunking large documents so that you can generate embeddings for vector search.
+This article describes several approaches for chunking large documents so that you can generate embeddings for vector search. Chunking is only required if source documents are too large for the maximum input size imposed by models.
## Why is chunking important?
-The models used to generate embedding vectors have maximum limits on the text fragments provided as input. For example, the maximum length of input text for the [Azure OpenAI](/azure/cognitive-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around 4 tokens for common OpenAI models, this maximum limit is equivalent to around 6000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the Large Language Models (LLM) used for indexing and queries.
+The models used to generate embedding vectors have maximum limits on the text fragments provided as input. For example, the maximum length of input text for the [Azure OpenAI](/azure/cognitive-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around 4 characters of text for common OpenAI models, this maximum limit is equivalent to around 6000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the Large Language Models (LLM) used for indexing and queries.
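The 4-characters-per-token heuristic above can serve as a quick pre-flight check before calling an embedding model. This is a rough sketch under that stated assumption; an exact count requires the model's tokenizer (for example, a library such as tiktoken).

```python
# Heuristic length check, assuming ~4 characters per token for common
# OpenAI models. For exact counts, use the model's own tokenizer.
MAX_TOKENS = 8191  # input limit cited above for the Azure OpenAI embedding models

def likely_fits(text: str, chars_per_token: int = 4) -> bool:
    """Estimate whether text stays under the embedding input limit."""
    return len(text) / chars_per_token <= MAX_TOKENS
```

Texts that fail this check are candidates for the chunking strategies this article describes.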
## How chunking fits into the workflow
Here are some common chunking techniques, starting with the most widely used met
### Content overlap considerations
-When chunking data, overlapping a small amount of text between chunks can help preserve context. We recommend starting with an overlap of approximately 10%. For example, given a fixed chunk size of 256 tokens, you would begin testing with an overlap of 25 tokens. The actual amount of overlap varies depending on the type of data and the specific use case, but we have found that 10-15% works for many scenarios.
+When you chunk data, overlapping a small amount of text between chunks can help preserve context. We recommend starting with an overlap of approximately 10%. For example, given a fixed chunk size of 256 tokens, you would begin testing with an overlap of 25 tokens. The actual amount of overlap varies depending on the type of data and the specific use case, but we have found that 10-15% works for many scenarios.
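The 256-token chunk with a 25-token (roughly 10%) overlap described above can be sketched as follows; `chunk_tokens` is a hypothetical helper that operates on an already-tokenized sequence.

```python
def chunk_tokens(tokens, size=256, overlap=25):
    """Split a token sequence into fixed-size chunks that overlap by
    `overlap` tokens, so context at chunk boundaries is preserved."""
    step = size - overlap  # advance 231 tokens per chunk with the defaults
    return [tokens[i:i + size] for i in range(0, len(tokens), step)]
```

With these defaults, the last 25 tokens of each chunk reappear at the start of the next one.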
### Factors for chunking data
This sample is built on LangChain, Azure OpenAI, and Azure Cognitive Search.
+ [Learn how to generate embeddings](/azure/cognitive-services/openai/how-to/embeddings?tabs=console) + [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) +
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Prior to indexing, assemble a document payload that includes vector data. The do
Vector fields contain vector data generated by embedding models. We recommend the embedding models in [Azure OpenAI](https://aka.ms/oai/access), such as **text-embedding-ada-002** for text documents or the [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for images.
-1. Optionally, include other fields with alphanumeric content for hybrid query scenarios that include full text search or semantic ranking.
+1. Provide any other fields with alphanumeric content for any nonvector queries you want to support, as well as for hybrid query scenarios that include full text search or semantic ranking in the same request.
+
+Your search index should include fields and content for all of the query scenarios you want to support. Suppose you want to search or filter over product names, versions, metadata, or addresses. In this case, similarity search isn't especially helpful and keyword search, geo-search, or filters would be a better choice. A search index that includes a comprehensive field collection of vector and non-vector data provides maximum flexibility for query construction.
## Add a vector field to the fields collection
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
Previously updated : 06/29/2023 Last updated : 07/10/2023 # Create and use embeddings for search queries and documents
Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per ve
+ Query inputs require that you submit user-provided input to an embedding model that quickly converts human readable text into a vector.
- + We used **text-embedding-ada-002** to generate text embeddings and [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for image embeddings.
+ + For example, you can use **text-embedding-ada-002** to generate text embeddings and [Image Retrieval REST API](/rest/api/computervision/2023-02-01-preview/image-retrieval/vectorize-image) for image embeddings.
- + To avoid [rate limiting](/azure/cognitive-services/openai/quotas-limits), we implemented retry logic in our workload. For the Python demo, we used [tenacity](https://pypi.org/project/tenacity/).
+ + To avoid [rate limiting](/azure/cognitive-services/openai/quotas-limits), you can implement retry logic in your workload. For the Python demo, we used [tenacity](https://pypi.org/project/tenacity/).
+ Query outputs are any matching documents found in a search index. Your search index must have been previously loaded with documents having one or more vector fields with embeddings. Whatever model you used for indexing, use the same model for queries.
print(embeddings)
+ **Identify use cases:** Evaluate the specific use cases where embedding model integration for vector search features can add value to your search solution. This can include matching image content with text content, cross-lingual searches, or finding similar documents.
+ **Optimize cost and performance:** Vector search can be resource-intensive and is subject to maximum limits, so consider only vectorizing the fields that contain semantic meaning.
-+ **Choose the right embedding model:** Select an appropriate model for your specific use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider using pre-trained models like **text-embedding-ada-002** from OpenAI or **Image Retreival** REST API from [Azure AI Computer Vision](/azure/cognitive-services/computer-vision/how-to/image-retrieval).
-+ **Normalize Vector lengths**: Ensure that the vector lengths are normalized before storing them in the search index to improve the accuracy and performance of similarity search. Most pre-trained models already are normalized but not all.
++ **Choose the right embedding model:** Select an appropriate model for your specific use case, such as word embeddings for text-based searches or image embeddings for visual searches. Consider using pretrained models like **text-embedding-ada-002** from OpenAI or the **Image Retrieval** REST API from [Azure AI Computer Vision](/azure/cognitive-services/computer-vision/how-to/image-retrieval).
++ **Normalize vector lengths:** Ensure that vector lengths are normalized before storing them in the search index to improve the accuracy and performance of similarity search. Most pretrained models are already normalized, but not all.
+ **Fine-tune the model:** If needed, fine-tune the selected model on your domain-specific data to improve its performance and relevance to your search application.
+ **Test and iterate:** Continuously test and refine your embedding model integration to achieve the desired search performance and user satisfaction.
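Normalizing to unit length, as the best practices above recommend, takes only a few lines. This is an illustrative sketch; production code would typically use a library such as NumPy for this.

```python
import math

def normalize(vec):
    """Return the vector scaled to unit length (L2 norm of 1)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

print(normalize([3.0, 4.0]))  # [0.6, 0.8]
```

Unit-length vectors make cosine similarity and dot-product similarity equivalent, which is why many similarity-search setups assume normalized embeddings.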
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Previously updated : 07/07/2023 Last updated : 07/10/2023 # Query vector data in a search index
Last updated 07/07/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
-In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to query those fields. It also explains how to combine vector queries with full text search and semantic search for hybrid query scenarios.
+In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to query those fields. It also explains how to combine vector queries with full text search and semantic search for hybrid query combination scenarios.
## Prerequisites
-+ Azure Cognitive Search, in any region and on any tier. However, if you want to also use [semantic search](semantic-search-overview.md) for hybrid queries, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
-
- Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-query.md).
+ Use REST API version 2023-07-01-preview or Azure portal to query vector fields. You can also use alpha versions of the Azure SDKs. For more information, see [this readme](https://github.com/Azure/cognitive-search-vector-pr/blob/main/README.md).
++ (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search).
+
## Check your index for vector fields
In the index schema, check for:
The response includes 5 matches, and each result provides a search score, title,
## Query syntax for hybrid search
-A hybrid query combines full text search, semantic search (reranking), and vector search. The search engine runs full text and vector queries in parallel. Semantic ranking is applied to the results from the text search. A single result set is returned in the response.
+A hybrid query combines full text search and vector search. The search engine runs full text and vector queries in parallel. All matches are evaluated for relevance using Reciprocal Rank Fusion (RRF) and a single result set is returned in the response.
+
+You can also write queries that target just the vector fields, or just the text fields, within your search index. For example, besides vector queries, you might also want to write queries that filter by location or search over product names or titles, scenarios for which similarity search isn't a good fit.
+
+The following example is from the [Postman collection of REST APIs](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) that demonstrate query configurations. It shows a complete request that includes vector search, full text search with filters, and semantic search with captions and answers. Semantic search is an optional premium feature. It's not required for vector search or hybrid search. For content that includes rich descriptive text *and* vectors, it's possible to benefit from all of the search modalities in one request.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}}
sentinel Enroll Simplified Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enroll-simplified-pricing-tier.md
The following sample template sets Microsoft Sentinel to the classic pricing tie
See [Deploying the sample templates](../azure-monitor/resource-manager-samples.md) to learn more about using Resource Manager templates.
+To learn how to implement this in Terraform or Bicep, start [here](/azure/templates/microsoft.operationalinsights/2020-08-01/workspaces).
+ ## Simplified pricing tiers for dedicated clusters
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
See these [important announcements](#announcements) about recent changes to feat
Content hub is now generally available (GA)! The [content hub centralization changes announced in February](#out-of-the-box-content-centralization-changes) have also been released. For more information on these changes and their impact, including more details about the tool provided to reinstate **IN USE** gallery templates, see [Out-of-the-box (OOTB) content centralization changes](sentinel-content-centralize.md).
-As part of the deployment for GA, the default view of the content hub is now the **List view**. The install process is streamlined as well. When selecting **Install** or **Install/Update**, the experience behaves like bulk installation.
+As part of the deployment for GA, the default view of the content hub is now the **List view**. The install process is streamlined as well. When selecting **Install** or **Install/Update**, the experience behaves like bulk installation. See our featured [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-microsoft-sentinel-content-hub-ga-and-ootb-content/ba-p/3854807) for more information.
### Deploy incident response playbooks for SAP
Combining the pricing tiers offers a simplification to the overall billing and c
A slight change was made to how free trials are offered, to provide further simplification. Previously, a free trial option waived Microsoft Sentinel costs while charging Log Analytics costs as usual; that option is no longer offered. Starting July 5, 2023, all new Microsoft Sentinel workspaces receive a 31-day free trial of 10 GB/day for the combined ingestion and analysis costs of Microsoft Sentinel and Log Analytics.

##### How do I get started with the simplified pricing tier?
-All new Microsoft Sentinel workspaces will automatically default to the simplified pricing tiers. Existing workspaces will have the choice to switch to the new pricing from Microsoft Sentinel settings. For more information, see the [simplified pricing tiers](billing.md#simplified-pricing-tiers) section of our cost planning documentation.
+All new Microsoft Sentinel workspaces will automatically default to the simplified pricing tiers. Existing workspaces will have the choice to switch to the new pricing from Microsoft Sentinel settings. For more information, see the [simplified pricing tiers](billing.md#simplified-pricing-tiers) section of our cost planning documentation and this featured [blog post](https://aka.ms/SentinelSimplifiedPricing).
### Classic alert automation due for deprecation
service-bus-messaging Prepare For Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/prepare-for-planned-maintenance.md
+
+ Title: Azure Service Bus - guidance on maintenance
+description: This article helps you with preparing for planned maintenance events on your namespace in Azure Service Bus.
+ Last updated : 07/10/2023++
+# Guidance on Azure maintenance events for Azure Service Bus
+This article describes how you can prepare for planned maintenance events on your namespace in Azure Service Bus.
+
+## What is a planned maintenance event?
+To keep Azure Service Bus secure, compliant, stable, and performant, updates are continuously applied across the service components. Thanks to the modern, robust service architecture, most updates are fully transparent and have no impact on service availability. Still, a few types of updates cause short service interruptions and require special treatment.
+
+## What to expect during a planned maintenance event
+During planned maintenance, namespaces are moved to a redundant node that contains the latest updates. As this move happens, the client SDK disconnects and automatically reconnects to the namespace. Usually, the upgrades finish within 30 seconds.
+
+## Retry logic
+Any production client application that connects to a Service Bus namespace should implement robust connection [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus). With retry logic in place, the updates are virtually transparent to clients, or at least have minimal negative effects.
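As an illustration of what such retry logic looks like, here's a generic sketch with hypothetical names (not the Service Bus SDK's built-in policy; the official SDKs already include transient-fault retries):

```python
import random
import time

# Generic exponential-backoff retry sketch for transient connection errors,
# similar in spirit to the retry policies built into the Service Bus SDKs.
def with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # transient error persisted; surface it to the caller
            # Exponential backoff with jitter smooths reconnect storms
            # during a short maintenance window.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))

# Example: an operation that fails twice (simulating a maintenance move),
# then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_send():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("namespace moving to an updated node")
    return "sent"

print(with_retries(flaky_send, base_delay=0.01))
```

With this pattern, a brief disconnect during maintenance is absorbed by the retries instead of failing the caller.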
+
+## Service health alert
+If you want to receive alerts for service issues or planned maintenance activities, you can use service health alerts in the Azure portal with appropriate event type and action groups. For more information, see [Receive alerts on Azure service notifications](/azure/service-health/alerts-activity-log-service-notifications-portal#create-service-health-alert-using-azure-portal).
+
+## Resource health
+If your namespace is experiencing connection failures, check the [Resource Health](/azure/service-health/resource-health-overview#get-started) window in the [Azure portal](https://portal.azure.com/) for the current status. The **Health History** section contains the downtime reason for each event (when available).
+
+## Next steps
+
+- For more information about retry logic, see [Retry logic for Azure services](/azure/architecture/best-practices/retry-service-specific).
+- Learn more about handling transient faults in Azure at [Transient fault handling](/azure/architecture/best-practices/transient-faults).
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
By default, Windows Defender antivirus is installed on Windows Server 2016. For
"AntimalwareEnabled": "true", "Exclusions": { "Paths": "[concat(parameters('svcFabData'), ';', parameters('svcFabLogs'), ';', parameters('svcFabRuntime'))]",
- "Processes": "Fabric.exe;FabricHost.exe;FabricInstallerService.exe;FabricSetup.exe;FabricDeployer.exe;ImageBuilder.exe;FabricGateway.exe;FabricDCA.exe;FabricFAS.exe;FabricUOS.exe;FabricRM.exe;FileStoreService.exe"
+ "Processes": "Fabric.exe;FabricHost.exe;FabricInstallerService.exe;FabricSetup.exe;FabricDeployer.exe;ImageBuilder.exe;FabricGateway.exe;FabricDCA.exe;FabricFAS.exe;FabricUOS.exe;FabricRM.exe;FileStoreService.exe;FabricBRS.exe;BackupCopier.exe"
}, "RealtimeProtectionEnabled": "true", "ScheduledScanSettings": {
spring-apps How To Use Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-accelerator.md
You can see the running instances and resource usage of all the components using
### [Azure portal](#tab/Portal)
-You can view the state of Application Accelerator in the Azure portal on the **Developer Tools (Preview)** page, as shown in the following screenshot:
+You can view the state of Application Accelerator in the Azure portal on the **Developer Tools** page, as shown in the following screenshot:
### [Azure CLI](#tab/Azure-CLI)
After you create your *accelerator.yaml* file, you can create your accelerator.
To create your own accelerator, open the **Accelerators** section and then select **Add Accelerator** under the Customized Accelerators section. #### [Azure CLI](#tab/Azure-CLI)
The following table describes the customizable accelerator fields.
| **Host key algorithm** | `host-key-algorithm` | The host key algorithm to access the accelerator source repository whose authentication type is `SSH`. Can be `ecdsa-sha2-nistp256` or `ssh-rsa`. | Required when authentication type is `SSH`. | | **CA certificate name** | `ca-cert-name` | The CA certificate name to access the accelerator source repository with self-signed certificate whose authentication type is `Public` or `Basic auth`. | Required when a self-signed cert is used for the Git repo URL. |
-To view all published accelerators, see the App Accelerators section of the **Developer Tools (Preview)** page. Select the App Accelerator URL to view the published accelerators in Dev Tools Portal:
+To view all published accelerators, see the App Accelerators section of the **Developer Tools** page. Select the App Accelerator URL to view the published accelerators in Dev Tools Portal:
To view the newly published accelerator, refresh Dev Tools Portal.
To view the newly published accelerator, refresh Dev Tools Portal.
Use the following steps to bootstrap a new project using accelerators:
-1. On the **Developer Tools (Preview)** page, select the App Accelerator URL to open the Dev Tools Portal.
+1. On the **Developer Tools** page, select the App Accelerator URL to open the Dev Tools Portal.
- :::image type="content" source="media/how-to-use-accelerator/tap-gui-url.png" alt-text="Screenshot of the Azure portal showing the Developer Tools (Preview) page with the App Accelerator URL highlighted." lightbox="media/how-to-use-accelerator/tap-gui-url.png":::
+ :::image type="content" source="media/how-to-use-accelerator/tap-gui-url.png" alt-text="Screenshot of the Azure portal showing the Developer Tools page with the App Accelerator URL highlighted." lightbox="media/how-to-use-accelerator/tap-gui-url.png":::
1. On the Dev Tools Portal, select an accelerator.
If a Dev tools public endpoint has already been exposed, you can enable App Acce
Use the following steps to enable App Accelerator under an existing Azure Spring Apps Enterprise plan instance using the Azure portal:
-1. Navigate to your service resource, and then select **Developer Tools (Preview)**.
+1. Navigate to your service resource, and then select **Developer Tools**.
1. Select **Manage tools**. 1. Select **Enable App Accelerator**, and then select **Apply**. :::image type="content" source="media/how-to-use-accelerator/enable-app-accelerator.png" alt-text="Screenshot of the Azure portal showing the Manage tools pane with the Enable App Accelerator option highlighted." lightbox="media/how-to-use-accelerator/enable-app-accelerator.png":::
-You can view whether App Accelerator is enabled or disabled on the **Developer Tools (Preview)** page.
+You can view whether App Accelerator is enabled or disabled on the **Developer Tools** page.
### [Azure CLI](#tab/Azure-CLI)
spring-apps How To Use Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-application-live-view.md
You can monitor Application Live View using the Azure portal or Azure CLI.
### [Azure portal](#tab/Portal)
-You can view the state of Application Live View in the Azure portal on the **Overview** tab of the **Developer Tools (Preview)** page.
+You can view the state of Application Live View in the Azure portal on the **Overview** tab of the **Developer Tools** page.
### [Azure CLI](#tab/Azure-CLI)
If you have already enabled Dev Tools Portal and exposed a public endpoint, use
Use the following steps to manage Application Live View using the Azure portal:
-1. Navigate to your service resource, and then select **Developer Tools (Preview)**.
+1. Navigate to your service resource, and then select **Developer Tools**.
1. Select **Manage tools**.
- :::image type="content" source="media/how-to-use-application-live-view/manage.png" alt-text="Screenshot of the Developer Tools (Preview) page." lightbox="media/how-to-use-application-live-view/manage.png":::
+ :::image type="content" source="media/how-to-use-application-live-view/manage.png" alt-text="Screenshot of the Developer Tools page." lightbox="media/how-to-use-application-live-view/manage.png":::
1. Select the **Enable App Live View** checkbox, and then select **Save**. :::image type="content" source="media/how-to-use-application-live-view/check-enable.png" alt-text="Screenshot of the Developer Tools section showing the Enable App Live View checkbox." lightbox="media/how-to-use-application-live-view/check-enable.png":::
+1. You can then view the state of Application Live View on the **Developer Tools** page.
+1. You can then view the state of Application Live View on the **Developer Tools**.
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
description: Describes how to deploy a web application to Azure Spring Apps.
Previously updated : 06/21/2023-- Last updated : 07/11/2023++ zone_pivot_groups: spring-apps-plan-selection
This application is a typical three-layer web application with the following layers:
The following diagram shows the architecture of the system:
-## Prerequisites
--- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`- -- Azure Container Apps extension for the Azure CLI. Use the following commands to register the required namespaces:
+This article provides the following options for deploying to Azure Spring Apps:
- ```azurecli
- az extension add --name containerapp --upgrade
- az provider register --namespace Microsoft.App
- az provider register --namespace Microsoft.OperationalInsights
- az provider register --namespace Microsoft.AppPlatform
- ```
+- Azure portal: This is a more conventional way to create resources and deploy applications step by step. This approach is suitable for Spring developers who are using Azure cloud services for the first time.
+- Azure Developer CLI: This is a more efficient way to automatically create resources and deploy applications through simple commands. It covers application code and the infrastructure-as-code files needed to provision the Azure resources. This approach is suitable for Spring developers who are familiar with Azure cloud services.
::: zone-end -- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+## 1. Prerequisites
-- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+### [Azure portal](#tab/Azure-portal)
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
-## Clone and run the sample project locally
+### [Azure Developer CLI](#tab/Azure-Developer-CLI)
-Use the following steps to clone and run the app locally.
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure Developer CLI](https://aka.ms/azd-install), version 1.0.
-1. Use the following command to clone the sample project from GitHub:
+
- ```bash
- git clone https://github.com/Azure-Samples/ASA-Samples-Web-Application.git
- ```
-1. Use the following command to build the sample project:
- ```bash
- cd ASA-Samples-Web-Application
- ./mvnw clean package -DskipTests
- ```
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`
-1. Use the following command to run the sample application by Maven:
- ```bash
- java -jar web/target/simple-todo-web-0.0.1-SNAPSHOT.jar
- ```
-1. Go to `http://localhost:8080` in your browser to access the application.
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
-## Prepare the cloud environment
-The main resources required to run this sample are an Azure Spring Apps instance and an Azure Database for PostgreSQL instance. This section provides the steps to create these resources.
-### Provide names for each resource
-Create variables to hold the resource names by using the following commands. Be sure to replace the placeholders with your own values.
-```azurecli
-export RESOURCE_GROUP=<resource-group-name>
-export LOCATION=<location>
-export POSTGRESQL_SERVER=<server-name>
-export POSTGRESQL_DB=<database-name>
-export AZURE_SPRING_APPS_NAME=<Azure-Spring-Apps-service-instance-name>
-export APP_NAME=<web-app-name>
-export CONNECTION=<connection-name>
-```
::: zone-end ::: zone pivot="sc-consumption-plan"
-```azurecli
-export RESOURCE_GROUP=<resource-group-name>
-export LOCATION=<location>
-export POSTGRESQL_SERVER=<server-name>
-export POSTGRESQL_DB=<database-name>
-export POSTGRESQL_ADMIN_USERNAME=<admin-username>
-export POSTGRESQL_ADMIN_PASSWORD=<admin-password>
-export AZURE_SPRING_APPS_NAME=<Azure-Spring-Apps-service-instance-name>
-export APP_NAME=<web-app-name>
-export MANAGED_ENVIRONMENT="<Azure-Container-Apps-environment-name>"
-export CONNECTION=<connection-name>
-```
::: zone-end
-### Create a new resource group
-
-Use the following steps to create a new resource group.
-
-1. Use the following command to sign in to the Azure CLI.
-
- ```azurecli
- az login
- ```
-
-1. Use the following command to set the default location.
-
- ```azurecli
- az configure --defaults location=${LOCATION}
- ```
-
-1. Use the following command to list all available subscriptions to determine the subscription ID to use.
-
- ```azurecli
- az account list --output table
- ```
-
-1. Use the following command to set the default subscription:
-
- ```azurecli
- az account set --subscription <subscription-ID>
- ```
-
-1. Use the following command to create a resource group.
-
- ```azurecli
- az group create --resource-group ${RESOURCE_GROUP}
- ```
-
-1. Use the following command to set the newly created resource group as the default resource group.
-
- ```azurecli
- az configure --defaults group=${RESOURCE_GROUP}
- ```
-
-### Create an Azure Spring Apps instance
+## 5. Validate the web app
-Azure Spring Apps is used to host the Spring web app. Create an Azure Spring Apps instance and an application inside it.
+Now you can access the deployed app to check that it works. Use the following steps to validate the deployment:
::: zone pivot="sc-consumption-plan"
-An Azure Container Apps environment creates a secure boundary around a group of applications. Apps deployed to the same environment are deployed in the same virtual network and write logs to the same log analytics workspace. For more information, see [Log Analytics workspace overview](../azure-monitor/logs/log-analytics-workspace-overview.md).
-
-1. Use the following command to create the environment:
+1. After the deployment completes, use the following command to retrieve the app's URL:
```azurecli
- az containerapp env create \
- --name ${MANAGED_ENVIRONMENT}
- ```
-
-1. Use the following command to create a variable to store the environment resource ID:
-
- ```azurecli
- export MANAGED_ENV_RESOURCE_ID=$(az containerapp env show \
- --name ${MANAGED_ENVIRONMENT} \
- --query id \
- --output tsv)
+ az spring app show \
+ --service ${AZURE_SPRING_APPS_NAME} \
+ --name ${APP_NAME} \
+ --query properties.url \
+ --output tsv
```
-1. The Azure Spring Apps Standard consumption and dedicated plan instance is built on top of the Azure Container Apps environment. Create your Azure Spring Apps instance by specifying the resource ID of the environment you created. Use the following command to create an Azure Spring Apps service instance with the resource ID:
-
- ```azurecli
- az spring create \
- --name ${AZURE_SPRING_APPS_NAME} \
- --managed-environment ${MANAGED_ENV_RESOURCE_ID} \
- --sku standardGen2
- ```
+ The page should appear as it did on localhost.
-1. Use the following command to specify the app name on Azure Spring Apps and to allocate required resources:
+1. Use the following command to check the app's logs and investigate any deployment issues:
```azurecli
- az spring app create \
+ az spring app logs \
--service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --runtime-version Java_17 \
- --assign-endpoint true
+ --name ${APP_NAME}
``` ::: zone-end ::: zone pivot="sc-enterprise"
-1. Use the following command to create an Azure Spring Apps service instance.
+1. After the deployment completes, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as it did on localhost.
- ```azurecli
- az spring create --name ${AZURE_SPRING_APPS_NAME} --sku enterprise
- ```
-
-1. Use the following command to create an application in the Azure Spring Apps instance.
+1. Use the following command to check the app's logs and investigate any deployment issues:
```azurecli
- az spring app create \
+ az spring app logs \
--service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --assign-endpoint true
+ --name ${APP_NAME}
``` ::: zone-end ::: zone pivot="sc-standard"
-1. Use the following command to create an Azure Spring Apps service instance.
+1. Access the application with the output application URL. The page should appear as it did on localhost.
- ```azurecli
- az spring create --name ${AZURE_SPRING_APPS_NAME}
- ```
+1. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs.
-1. Use the following command to create an application in the Azure Spring Apps instance.
-
- ```azurecli
- az spring app create \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --runtime-version Java_17 \
- --assign-endpoint true
- ```
+ :::image type="content" source="media/quickstart-deploy-web-app/logs.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps logs page." lightbox="media/quickstart-deploy-web-app/logs.png":::
::: zone-end
-### Prepare the PostgreSQL instance
+## 6. Clean up resources
-The Spring web app uses H2 for the database in localhost, and Azure Database for PostgreSQL for the database in Azure.
-
-Use the following command to create a PostgreSQL instance:
--
-```azurecli
-az postgres flexible-server create \
- --name ${POSTGRESQL_SERVER} \
- --database-name ${POSTGRESQL_DB} \
- --admin-user ${POSTGRESQL_ADMIN_USERNAME} \
- --admin-password ${POSTGRESQL_ADMIN_PASSWORD} \
- --public-access 0.0.0.0
-```
-Specifying `0.0.0.0` enables public access from any resources deployed within Azure to access your server.
::: zone-end -
-```azurecli
-az postgres flexible-server create \
- --name ${POSTGRESQL_SERVER} \
- --database-name ${POSTGRESQL_DB} \
- --active-directory-auth Enabled
-```
-To ensure that the application is accessible only by PostgreSQL in Azure Spring Apps, enter `n` to the prompts to enable access to a specific IP address and to enable access for all IP addresses.
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
-```output
-Do you want to enable access to client xxx.xxx.xxx.xxx (y/n) (y/n): n
-Do you want to enable access for all IPs (y/n): n
+```azurecli
+az group delete --name ${RESOURCE_GROUP}
``` ::: zone-end
-### Connect app instance to PostgreSQL instance
--
-After the application instance and the PostgreSQL instance are created, the application instance can't access the PostgreSQL instance directly. Use the following steps to enable the app to connect to the PostgreSQL instance.
-
-1. Use the following command to get the PostgreSQL instance's fully qualified domain name:
-
- ```azurecli
- export PSQL_FQDN=$(az postgres flexible-server show \
- --name ${POSTGRESQL_SERVER} \
- --query fullyQualifiedDomainName \
- --output tsv)
- ```
-
-1. Use the following command to provide the `spring.datasource.` properties to the app through environment variables:
-
- ```azurecli
- az spring app update \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --env SPRING_DATASOURCE_URL="jdbc:postgresql://${PSQL_FQDN}:5432/${POSTGRESQL_DB}?sslmode=require" \
- SPRING_DATASOURCE_USERNAME="${POSTGRESQL_ADMIN_USERNAME}" \
- SPRING_DATASOURCE_PASSWORD="${POSTGRESQL_ADMIN_PASSWORD}"
- ```
---
-After the application instance and the PostgreSQL instance are created, the application instance can't access the PostgreSQL instance directly. The following steps use Service Connector to configure the needed network settings and connection information. For more information about Service Connector, see [What is Service Connector?](../service-connector/overview.md).
-
-1. If you're using Service Connector for the first time, use the following command to register the Service Connector resource provider.
-
- ```azurecli
- az provider register --namespace Microsoft.ServiceLinker
- ```
-
-1. Use the following command to achieve a passwordless connection:
-
- ```azurecli
- az extension add --name serviceconnector-passwordless --upgrade
- ```
-
-1. Use the following command to create a service connection between the application and the PostgreSQL database:
-
- ```azurecli
- az spring connection create postgres-flexible \
- --resource-group ${RESOURCE_GROUP} \
- --service ${AZURE_SPRING_APPS_NAME} \
- --app ${APP_NAME} \
- --client-type springBoot \
- --target-resource-group ${RESOURCE_GROUP} \
- --server ${POSTGRESQL_SERVER} \
- --database ${POSTGRESQL_DB} \
- --system-identity \
- --connection ${CONNECTION}
- ```
-
- The `--system-identity` parameter is required for the passwordless connection. For more information, see [Bind an Azure Database for PostgreSQL to your application in Azure Spring Apps](how-to-bind-postgres.md).
-
-1. After the connection is created, use the following command to validate the connection:
-
- ```azurecli
- az spring connection validate \
- --resource-group ${RESOURCE_GROUP} \
- --service ${AZURE_SPRING_APPS_NAME} \
- --app ${APP_NAME} \
- --connection ${CONNECTION}
- ```
+## 7. Next steps
- The output should appear similar to the following JSON code:
-
- ```json
- [
- {
- "additionalProperties": {},
- "description": null,
- "errorCode": null,
- "errorMessage": null,
- "name": "The target existence is validated",
- "result": "success"
- },
- {
- "additionalProperties": {},
- "description": null,
- "errorCode": null,
- "errorMessage": null,
- "name": "The target service firewall is validated",
- "result": "success"
- },
- {
- "additionalProperties": {},
- "description": null,
- "errorCode": null,
- "errorMessage": null,
- "name": "The configured values (except username/password) is validated",
- "result": "success"
- },
- {
- "additionalProperties": {},
- "description": null,
- "errorCode": null,
- "errorMessage": null,
- "name": "The identity existence is validated",
- "result": "success"
- }
- ]
- ```
+> [!div class="nextstepaction"]
+> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+> [!div class="nextstepaction"]
+> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
-## Deploy the app to Azure Spring Apps
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
-Now that the cloud environment is prepared, the application is ready to deploy.
+> [!div class="nextstepaction"]
+> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md)
-1. Use the following command to deploy the app:
+> [!div class="nextstepaction"]
+> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md)
- ```azurecli
- az spring app deploy \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --artifact-path web/target/simple-todo-web-0.0.1-SNAPSHOT.jar
- ```
+> [!div class="nextstepaction"]
+> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
-2. After the deployment has completed, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost.
+> [!div class="nextstepaction"]
+> [Run the Pet Clinic microservice on Azure Spring Apps](./quickstart-sample-app-introduction.md)
::: zone-end -
-2. After the deployment has completed, use the following command to access the app with the URL retrieved:
-
- ```azurecli
- az spring app show \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME} \
- --query properties.url \
- --output tsv
- ```
- The page should appear as you saw in localhost.
+> [!div class="nextstepaction"]
+> [Run the polyglot ACME fitness store apps on Azure Spring Apps](./quickstart-sample-app-acme-fitness-store-introduction.md)
::: zone-end
-3. Use the following command to check the app's log to investigate any deployment issue:
-
- ```azurecli
- az spring app logs \
- --service ${AZURE_SPRING_APPS_NAME} \
- --name ${APP_NAME}
- ```
-
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group:
-
-```azurecli
-az group delete --name ${RESOURCE_GROUP}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md)
- For more information, see the following articles: - [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Use either of these methods to append data to that append blob:
The maximum size in bytes of each append operation is defined by the [AppendBlobMaxAppendBlockBytes](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblobmaxappendblockbytes) property. The following example creates an append blob and appends log data to that blob. This example uses the [AppendBlobMaxAppendBlockBytes](/dotnet/api/azure.storage.blobs.specialized.appendblobclient.appendblobmaxappendblockbytes) property to determine whether multiple append operations are required. ```csharp
-public static async void AppendToBlob
- (BlobContainerClient containerClient, MemoryStream logEntryStream, string LogBlobName)
+static async Task AppendToBlob(
+ BlobContainerClient containerClient,
+ MemoryStream logEntryStream,
+ string logBlobName)
{
- AppendBlobClient appendBlobClient = containerClient.GetAppendBlobClient(LogBlobName);
+ AppendBlobClient appendBlobClient = containerClient.GetAppendBlobClient(logBlobName);
- appendBlobClient.CreateIfNotExists();
+ await appendBlobClient.CreateIfNotExistsAsync();
- var maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;
-
- var buffer = new byte[maxBlockSize];
+ var maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;
if (logEntryStream.Length <= maxBlockSize) {
- appendBlobClient.AppendBlock(logEntryStream);
+ await appendBlobClient.AppendBlockAsync(logEntryStream);
} else {
- var bytesLeft = (logEntryStream.Length - logEntryStream.Position);
+ var bytesLeft = logEntryStream.Length;
while (bytesLeft > 0) {
- if (bytesLeft >= maxBlockSize)
- {
- buffer = new byte[maxBlockSize];
- await logEntryStream.ReadAsync
- (buffer, 0, maxBlockSize);
- }
- else
- {
- buffer = new byte[bytesLeft];
- await logEntryStream.ReadAsync
- (buffer, 0, Convert.ToInt32(bytesLeft));
- }
-
- appendBlobClient.AppendBlock(new MemoryStream(buffer));
-
- bytesLeft = (logEntryStream.Length - logEntryStream.Position);
-
+ var blockSize = (int)Math.Min(bytesLeft, maxBlockSize);
+ var buffer = new byte[blockSize];
+ await logEntryStream.ReadAsync(buffer, 0, blockSize);
+ await appendBlobClient.AppendBlockAsync(new MemoryStream(buffer));
+ bytesLeft -= blockSize;
        }
    }
}
```
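For intuition, the block-splitting arithmetic in the sample above can be sketched in a few lines of Python. The `split_into_blocks` helper is a hypothetical illustration, with `max_block_size` standing in for `AppendBlobMaxAppendBlockBytes`; it shows how a stream larger than the maximum append size is cut into compliant chunks:

```python
import io

# Split a stream into chunks no larger than the service's maximum append
# size, mirroring the chunking loop in the C# sample above.
def split_into_blocks(stream, max_block_size):
    blocks = []
    while True:
        chunk = stream.read(max_block_size)
        if not chunk:  # empty read means the stream is exhausted
            break
        blocks.append(chunk)
    return blocks

data = io.BytesIO(b"x" * 10)          # 10 bytes of log data
blocks = split_into_blocks(data, 4)   # pretend the max append size is 4 bytes
print([len(b) for b in blocks])       # -> [4, 4, 2]
```

Each chunk would then be sent as one append operation, just as each `AppendBlockAsync` call is in the C# sample.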
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
Title: Use Azurite emulator for local Azure Storage development
description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications. Previously updated : 04/26/2023 Last updated : 07/11/2023
azurite --oauth basic --cert path/server.pem --key path/key.pem
> [!NOTE]
> OAuth requires an HTTPS endpoint. Make sure HTTPS is enabled by providing the `--cert` switch along with the `--oauth` switch.
-Azurite supports basic authentication by specifying the `basic` parameter to the `--oauth` switch. Azurite performs basic authentication, like validating the incoming bearer token, checking the issuer, audience, and expiry. Azurite doesn't check the token signature or permissions.
+Azurite supports basic authentication when you specify the `basic` parameter with the `--oauth` switch. Azurite performs basic validation of the incoming bearer token, such as checking its issuer, audience, and expiry. Azurite doesn't check the token signature or permissions. To learn more about authorization, see [Authorization for tools and SDKs](#authorization-for-tools-and-sdks).
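To make the distinction concrete, here's a rough Python sketch of what signature-free "basic" checks look like — a conceptual illustration only, not Azurite's actual code, and the issuer/audience/token values are fabricated:

```python
import base64
import json
import time

def basic_validate(token, expected_issuer, expected_audience):
    """Sketch: decode a JWT payload WITHOUT verifying the signature,
    then check only the issuer, audience, and expiry claims."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return (claims.get("iss") == expected_issuer
            and claims.get("aud") == expected_audience
            and claims.get("exp", 0) > time.time())

# Build a toy unsigned token for illustration (claim values are made up)
issuer, audience = "https://example-issuer/", "https://storage.azure.com"
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": issuer, "aud": audience, "exp": time.time() + 3600}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

print(basic_validate(token, issuer, audience))  # True
```

Note that the signature segment is never inspected — that's what makes this level of validation unsuitable outside a local emulator.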
### Skip API Version Check
azurite --disableProductStyleUrl
Connect to Azurite from Azure Storage SDKs or tools, like [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/), by using any authentication strategy. Authentication is required. Azurite supports authorization with OAuth, Shared Key, and shared access signatures (SAS). Azurite also supports anonymous access to public containers.
-If you're using the Azure SDKs, start Azurite with the `--oauth basic and --cert --key/--pwd` options.
+If you're using the Azure SDKs, start Azurite with the `--oauth basic` option, along with the `--cert` and `--key` (or `--pwd`) options. To learn more about using Azurite with the Azure SDKs, see [Azure SDKs](#azure-sdks).
### Well-known storage account and key
azurite --oauth basic --cert certname.pem --key certname-key.pem
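For shared key authorization, Azurite uses the well-known development storage account (`devstoreaccount1`) and its published account key. As a convenience sketch, this hypothetical Python helper assembles a connection string from those published defaults, assuming Azurite's default ports (10000/10001/10002):

```python
# Azurite's published development-account defaults (same as the legacy emulator)
ACCOUNT_NAME = "devstoreaccount1"
ACCOUNT_KEY = (
    "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsu"
    "Fq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
)

def azurite_connection_string(host="127.0.0.1"):
    """Build a connection string targeting Azurite's default local endpoints."""
    return (
        "DefaultEndpointsProtocol=http;"
        f"AccountName={ACCOUNT_NAME};"
        f"AccountKey={ACCOUNT_KEY};"
        f"BlobEndpoint=http://{host}:10000/{ACCOUNT_NAME};"
        f"QueueEndpoint=http://{host}:10001/{ACCOUNT_NAME};"
        f"TableEndpoint=http://{host}:10002/{ACCOUNT_NAME};"
    )

print(azurite_connection_string().split(";")[1])  # AccountName=devstoreaccount1
```

If you start Azurite with HTTPS enabled, change the protocol and endpoint scheme to `https` accordingly.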
#### Azure Blob Storage
-You can then instantiate a BlobContainerClient, BlobServiceClient, or BlobClient.
+To interact with Blob Storage resources, you can instantiate a `BlobContainerClient`, `BlobServiceClient`, or `BlobClient`.
+
+The following examples show how to authorize a `BlobContainerClient` object using three different authorization mechanisms: [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), connection string, and shared key. `DefaultAzureCredential` provides a Bearer token-based authentication mechanism and tries a chain of credential types in order until one succeeds. Once authenticated, this credential provides the OAuth token as part of client instantiation. To learn more, see the [DefaultAzureCredential class reference](/dotnet/api/azure.identity.defaultazurecredential).
```csharp // With container URL and DefaultAzureCredential
var client = new BlobContainerClient(
#### Azure Queue Storage
-You can also instantiate a QueueClient or QueueServiceClient.
+To interact with Queue Storage resources, you can instantiate a `QueueClient` or `QueueServiceClient`.
+
+The following examples show how to create and authorize a `QueueClient` object using three different authorization mechanisms: [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), connection string, and shared key. `DefaultAzureCredential` provides a Bearer token-based authentication mechanism and tries a chain of credential types in order until one succeeds. Once authenticated, this credential provides the OAuth token as part of client instantiation. To learn more, see the [DefaultAzureCredential class reference](/dotnet/api/azure.identity.defaultazurecredential).
```csharp // With queue URL and DefaultAzureCredential
var client = new QueueClient(
#### Azure Table Storage
-You can also instantiate a TableClient or TableServiceClient.
+To interact with Table Storage resources, you can instantiate a `TableClient` or `TableServiceClient`.
+
+The following examples show how to create and authorize a `TableClient` object using three different authorization mechanisms: [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), connection string, and shared key. `DefaultAzureCredential` provides a Bearer token-based authentication mechanism and tries a chain of credential types in order until one succeeds. Once authenticated, this credential provides the OAuth token as part of client instantiation. To learn more, see the [DefaultAzureCredential class reference](/dotnet/api/azure.identity.defaultazurecredential).
```csharp // With table URL and DefaultAzureCredential
storage Volume Snapshot Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/volume-snapshot-restore.md
Title: Use volume snapshots with Azure Container Storage Preview description: Take a point-in-time snapshot of a persistent volume and restore it. You'll create a volume snapshot class, take a snapshot, create a restored persistent volume claim, and deploy a new pod. -+ Last updated 07/03/2023 - # Use volume snapshots with Azure Container Storage Preview
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/1-design-performance-migration.md
Most Oracle data types have a direct equivalent in Azure Synapse. The following
| LONG RAW | Not supported. Map to VARBINARY(MAX). | | NCHAR | NCHAR | | NVARCHAR2 | NVARCHAR |
-| NUMBER | NUMBER |
+| NUMBER | FLOAT |
| NCLOB | Not directly supported. Replace with NVARCHAR(MAX). | | NUMERIC | NUMERIC | | ORD media data types | Not supported |
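When scripting a schema conversion, the documented mappings can be captured as a simple lookup table. This Python sketch covers only the rows shown above and is illustrative, not a complete converter; the `map_type` helper is hypothetical:

```python
# Partial Oracle -> Azure Synapse type mapping, per the table above (illustrative)
ORACLE_TO_SYNAPSE = {
    "LONG RAW": "VARBINARY(MAX)",   # not supported directly; mapped
    "NCHAR": "NCHAR",
    "NVARCHAR2": "NVARCHAR",
    "NUMBER": "FLOAT",
    "NCLOB": "NVARCHAR(MAX)",       # not supported directly; replaced
    "NUMERIC": "NUMERIC",
}

def map_type(oracle_type: str) -> str:
    """Return the documented Synapse equivalent, or raise for unmapped types."""
    try:
        return ORACLE_TO_SYNAPSE[oracle_type.upper()]
    except KeyError:
        raise ValueError(f"No documented mapping for Oracle type {oracle_type!r}")

print(map_type("number"))  # FLOAT
```

Failing loudly on unmapped types is deliberate: silently passing an unsupported type through to DDL generation is harder to debug than an early error.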
The [workload management guide](../../sql-data-warehouse/analyze-your-workload.m
## Next steps
-To learn about ETL and load for Oracle migration, see the next article in this series: [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
+To learn about ETL and load for Oracle migration, see the next article in this series: [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
mssparkutils.fs.ls("Your directory path")
### View file properties
-Returns file properties including file name, file path, file size, and whether it is a directory and a file.
+Returns file properties including file name, file path, file size, file modification time, and whether it is a directory or a file.
:::zone pivot = "programming-language-python" ```python files = mssparkutils.fs.ls('Your directory path') for file in files:
- print(file.name, file.isDir, file.isFile, file.path, file.size)
+ print(file.name, file.isDir, file.isFile, file.path, file.size, file.modifyTime)
``` ::: zone-end
for file in files:
```scala val files = mssparkutils.fs.ls("/") files.foreach{
- file => println(file.name,file.isDir,file.isFile,file.size)
+ file => println(file.name,file.isDir,file.isFile,file.size,file.modifyTime)
} ```
foreach(var File in Files) {
```r files <- mssparkutils.fs.ls("/") for (file in files) {
- writeLines(paste(file$name, file$isDir, file$isFile, file$size))
+ writeLines(paste(file$name, file$isDir, file$isFile, file$size, file$modifyTime))
} ```
update-center Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md
To check if the **BypassPlatformSafetyChecksOnUserSchedule** is enabled, go to *
**Enable on Windows VMs** ```
-PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
``` ```json
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
**Enable on Linux VMs** ```
-PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
``` ```json
To update the patch mode, follow these steps:
**Enable on Windows VMs** ```
-PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
``` ```json
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
**Enable on Linux VMs** ```
-PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
+PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01`
``` ```json
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
To schedule recurring updates on a single VM, follow these steps:
- Start on - Maintenance window (in hours)
+ > [!NOTE]
+ > The maximum maintenance window is 3 hours 55 minutes.
- Repeats (monthly, daily or weekly) - Add end date - Schedule summary
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
description: Provides a summary of supported regions and operating system settin
Previously updated : 05/31/2023 Last updated : 07/11/2023
Use one of the following options to perform the settings change at scale:
``` - For servers running Windows Server 2016 or later that aren't using Update management center scheduled patching (that is, the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+> [!NOTE]
+> Run the following PowerShell script on the server to disable first-party updates.
+> ```powershell
+> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+> $ServiceManager.Services
+> $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+> $ServiceManager.RemoveService($ServiceId)
+> ```
+
### Third-party updates

**Windows**: Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows update management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
description: Learn how to add session hosts virtual machines to a host pool in A
Previously updated : 01/31/2023 Last updated : 07/11/2023 # Add session hosts to a host pool
Here's how to generate a registration key using the [desktopvirtualization](/cli
> In the following examples, you'll need to change the `<placeholder>` values for your own. [!INCLUDE [include-cloud-shell-local-cli](includes/include-cloud-shell-local-cli.md)]- 2. Use the `az desktopvirtualization workspace update` command with the following example to generate a registration key that is valid for 24 hours. ```azurecli
Here's how to generate a registration key using the [Az.DesktopVirtualization](/
> In the following examples, you'll need to change the `<placeholder>` values for your own. [!INCLUDE [include-cloud-shell-local-powershell](includes/include-cloud-shell-local-powershell.md)]- 2. Use the `New-AzWvdRegistrationInfo` cmdlet with the following example to generate a registration key that is valid for 24 hours. ```azurepowershell
Here's how to create session hosts and register them to a host pool using the Az
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+1. In the search bar, enter *Azure Virtual Desktop* and select the matching service entry.
1. Select **Host pools**, then select the name of the host pool you want to add session hosts to.
Here's how to create session hosts and register them to a host pool using the Az
| Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. | | Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. |
- | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**. |
+ | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
| Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use a different SKU, select **Change size**, then select from the list. | | Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session host VMs at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend using only **Premium SSD** for production workloads. |
+ | Confidential computing encryption | If you're using a confidential VM, you must select the **Confidential compute encryption** check box to enable OS disk encryption.<br /><br />This check box only appears if you selected **Confidential virtual machines** as your security type. |
| Boot Diagnostics | Select whether you want to enable [boot diagnostics](../virtual-machines/boot-diagnostics.md). | | **Network and security** | | | Virtual network | Select your virtual network. An option to select a subnet will appear. |
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
Previously updated : 02/28/2023 Last updated : 07/11/2023 # Create a host pool in Azure Virtual Desktop
Here's how to create a host pool using the Azure portal.
| Name prefix | Enter a name for your session hosts, for example **aad-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **aad-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. | | Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. | | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. |
- | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**. |
+ | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
| Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use a different SKU, select **Change size**, then select from the list. | | Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session host VMs at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend using only **Premium SSD** for production workloads. |
+ | Confidential computing encryption | If you're using a confidential VM, you must select the **Confidential compute encryption** check box to enable OS disk encryption.<br /><br />This check box only appears if you selected **Confidential virtual machines** as your security type. |
| Boot Diagnostics | Select whether you want to enable [boot diagnostics](../virtual-machines/boot-diagnostics.md). | | **Network and security** | | | Virtual network | Select your virtual network. An option to select a subnet will appear. |
Here's how to create a host pool using the [desktopvirtualization](/cli/azure/de
> In the following examples, you'll need to change the `<placeholder>` values for your own. [!INCLUDE [include-cloud-shell-local-cli](includes/include-cloud-shell-local-cli.md)]- 2. Use the `az desktopvirtualization hostpool create` command with the following examples to create a host pool. More parameters are available; for more information, see the [az desktopvirtualization hostpool Azure CLI reference](/cli/azure/desktopvirtualization/hostpool). 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
Here's how to create a host pool using the [Az.DesktopVirtualization](/powershel
> In the following examples, you'll need to change the `<placeholder>` values for your own. [!INCLUDE [include-cloud-shell-local-powershell](includes/include-cloud-shell-local-powershell.md)]- 2. Use the `New-AzWvdHostPool` cmdlet with the following examples to create a host pool. More parameters are available; for more information, see the [New-AzWvdHostPool PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdhostpool). 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
description: Best practices for keeping your Azure Virtual Desktop environment secure. Previously updated : 03/09/2023 Last updated : 07/11/2023
By restricting operating system capabilities, you can strengthen the security of
- Prevent unwanted software from running on session hosts. You can enable App Locker for additional security on session hosts, ensuring that only the apps you allow can run on the host.
-## Azure Virtual Desktop support for Trusted Launch
+## Trusted launch
Trusted launch VMs are Gen2 Azure VMs with enhanced security features that help protect against "bottom of the stack" threats through attack vectors such as rootkits, boot kits, and kernel-level malware. The following are the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
-## Azure Confidential Computing virtual machines (preview)
+### Enable trusted launch as default
-> [!IMPORTANT]
-> Azure Virtual Desktop support for Azure Confidential virtual machines is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Trusted launch protects against advanced and persistent attack techniques. This feature also allows for secure deployment of VMs with verified boot loaders, OS kernels, and drivers. Trusted launch also protects keys, certificates, and secrets in the VMs. Learn more about trusted launch at [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
-Azure Virtual Desktop support for Azure Confidential Computing virtual machines (preview) ensures a user's virtual desktop is encrypted in memory, protected in use, and backed by hardware root of trust. Deploying confidential VMs with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. These virtual desktops are powered by the latest Third-generation (Gen 3) Advanced Micro Devices (AMD) EPYC™ processor with Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) technology. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the AMD CPU that can't be read from software. For more information, see the [Azure Confidential Computing overview](../confidential-computing/overview.md).
+When you add session hosts using the Azure portal, the security type automatically changes to **Trusted launch virtual machines**. This ensures that your VM meets the mandatory requirements for Windows 11. For more information about these requirements, see [Virtual machine support](/windows/whats-new/windows-11-requirements#virtual-machine-support).
+
+## Azure Confidential computing virtual machines
+
+Azure Virtual Desktop support for Azure Confidential computing virtual machines ensures a user's virtual desktop is encrypted in memory, protected in use, and backed by hardware root of trust. Azure Confidential computing VMs for Azure Virtual Desktop are compatible with [supported operating systems](prerequisites.md#operating-systems-and-licenses). Deploying confidential VMs with Azure Virtual Desktop gives users access to Microsoft 365 and other applications on session hosts that use hardware-based isolation, which hardens isolation from other virtual machines, the hypervisor, and the host OS. These virtual desktops are powered by the latest Third-generation (Gen 3) Advanced Micro Devices (AMD) EPYC™ processor with Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP) technology. Memory encryption keys are generated and safeguarded by a dedicated secure processor inside the AMD CPU that can't be read from software. For more information, see the [Azure Confidential computing overview](../confidential-computing/overview.md).
+
+The following operating systems are supported for use as session hosts with confidential VMs on Azure Virtual Desktop:
+
+- Windows 11 Enterprise, version 22H2
+- Windows 11 Enterprise multi-session, version 22H2
+- Windows Server 2022
+- Windows Server 2019
+
+You can create session hosts using confidential VMs when you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+### OS disk encryption
+
+Encrypting the operating system disk is an extra layer of encryption that binds disk encryption keys to the Confidential computing VM's Trusted Platform Module (TPM). This encryption makes the disk content accessible only to the VM. Integrity monitoring allows cryptographic attestation and verification of VM boot integrity, and raises monitoring alerts if the VM didn't boot because attestation failed against the defined baseline. For more information about integrity monitoring, see [Microsoft Defender for Cloud Integration](../virtual-machines/trusted-launch.md#microsoft-defender-for-cloud-integration). You can enable confidential compute encryption for session hosts that use confidential VMs when you [create a host pool](create-host-pool.md) or [add session hosts to a host pool](add-session-hosts-host-pool.md).
### Secure Boot
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Rich Health States reporting contains four Health States, *Initializing*, *Healt
| -- | | -- | | TCP | Healthy | To send a *Healthy* signal, a successful handshake must be made with the provided application endpoint. | | TCP | Unhealthy | The instance will be marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |
-| TCP | Unhealthy | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
+| TCP | Initializing | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
## Initializing state
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
Azure Accelerated Networking improves network performance by enabling single roo
## Azure Virtual Machine Scale Sets with Azure Load Balancer See [Azure Load Balancer and Virtual Machine Scale Sets](../load-balancer/load-balancer-standard-virtual-machine-scale-sets.md) to learn more about how to configure your Standard Load Balancer with Virtual Machine Scale Sets based on your scenario.
-## Create a scale set that references an Application Gateway
-To create a scale set that uses an application gateway, reference the backend address pool of the application gateway in the ipConfigurations section of your scale set as in this ARM template config:
+## Add a Virtual Machine Scale Set to an Application Gateway
+
+To add a scale set to the backend pool of an Application Gateway, reference the Application Gateway backend pool in your scale set's network profile. This can be done either when creating the scale set (see ARM Template below) or on an existing scale set.
+
+### Adding Uniform Orchestration Virtual Machine Scale Sets to an Application Gateway
+
+When adding Uniform Virtual Machine Scale Sets to an Application Gateway's backend pool, the process differs for new and existing scale sets:
+
+- For new scale sets, reference the Application Gateway's backend pool ID in your scale set model's network profile, under one or more network interface IP configurations. When deployed, instances added to your scale set will be placed in the Application Gateway's backend pool.
+- For existing scale sets, first add the Application Gateway's backend pool ID in your scale set model's network profile, then apply the updated model to your existing instances through an upgrade. If the scale set's upgrade policy is `Automatic` or `Rolling`, instances are updated for you. If it's `Manual`, you need to upgrade the instances manually.
+
+#### [Portal](#tab/portal1)
+
+1. Create an Application Gateway and backend pool in the same region as your scale set, if you do not already have one
+1. Navigate to the Virtual Machine Scale Set in the Portal
+1. Under **Settings**, open the **Networking** pane
+1. In the Networking pane, select the **Load balancing** tab and click **Add Load Balancing**
+1. Select **Application Gateway** from the Load Balancing Options dropdown, and choose an existing Application Gateway
+1. Select the target backend pool and click **Save**
+1. If your scale set Upgrade Policy is 'Manual', navigate to the **Settings** > **Instances** pane to select and upgrade each of your instances
+
+#### [PowerShell](#tab/powershell1)
+
+```azurepowershell
+ $appGW = Get-AzApplicationGateway -Name <appGWName> -ResourceGroup <AppGWResourceGroupName>
+ $backendPool = Get-AzApplicationGatewayBackendAddressPool -Name <backendAddressPoolName> -ApplicationGateway $appGW
+ $vmss = Get-AzVMSS -Name <VMSSName> -ResourceGroup <VMSSResourceGroupName>
+
+ $backendPoolMembership = New-Object System.Collections.Generic.List[Microsoft.Azure.Management.Compute.Models.SubResource]
+
+ # add existing backend pool membership to new pool membership of first NIC and ip config
+ If ($vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].ApplicationGatewayBackendAddressPools) {
+ $backendPoolMembership.AddRange($vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].ApplicationGatewayBackendAddressPools)
+ }
+
+ # add new backend pool to pool membership
+ $backendPoolMembership.Add($backendPool.id)
+
+ # set the VMSS model's backend pool membership
+ $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].ApplicationGatewayBackendAddressPools = $backendPoolMembership
+
+ # update the VMSS model
+ $vmss | Update-AzVMSS
+
+ # update VMSS instances manually, if the upgrade policy requires it
+ If ($vmss.UpgradePolicy.Mode -eq 'Manual') {
+     Get-AzVmssVM -ResourceGroupName <VMSSResourceGroupName> -VMScaleSetName <VMSSName> | ForEach-Object {
+         Update-AzVmssInstance -ResourceGroupName <VMSSResourceGroupName> -VMScaleSetName <VMSSName> -InstanceId $_.InstanceId
+     }
+ }
+
+```
+
+#### [CLI](#tab/cli1)
+
+```azurecli-interactive
+appGWName=<appGwName>
+appGWResourceGroup=<appGWRGName>
+backendPoolName=<backendPoolName>
+backendPoolId=$(az network application-gateway address-pool show --gateway-name $appGWName -g $appGWResourceGroup -n $backendPoolName --query id -otsv)
+
+vmssName=<vmssName>
+vmssResourceGroup=<vmssRGName>
+
+# add app gw backend pool to first nic's first ip config
+az vmss update -n $vmssName -g $vmssResourceGroup --add "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].applicationGatewayBackendAddressPools" "id=$backendPoolId"
+
+# update instances so they pick up the new model (required if upgrade policy is Manual)
+az vmss update-instances --instance-ids "*" --name $vmssName --resource-group $vmssResourceGroup
+```
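+
+To confirm the change was applied, you can query the scale set model for its backend pool membership. As a sketch (this assumes the same shell variables as above):
+
+```azurecli-interactive
+# list the Application Gateway backend pool IDs referenced by the scale set model
+az vmss show -n $vmssName -g $vmssResourceGroup \
+  --query "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].applicationGatewayBackendAddressPools[].id" \
+  -o tsv
+```
+
+The output should include the backend pool ID you added.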
+
+#### [ARM template](#tab/arm1)
To create a scale set that uses an application gateway, reference the backend address pool in the IP configuration of the scale set's network profile:

```json
"ipConfigurations": [{
    ...
}]
```
+
+---
+
+<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
+
+### Adding Flexible Orchestration Virtual Machine Scale Sets to an Application Gateway
+
+When adding a Flexible scale set to an Application Gateway, the process is the same as adding standalone VMs to an Application Gateway's backend pool: you update the virtual machine's network interface IP configuration so that it's part of the backend pool. This can be done either [through the Application Gateway's configuration](/azure/application-gateway/create-multiple-sites-portal#add-backend-servers-to-backend-pools) or by configuring the virtual machine's network interface configuration.
+
+> [!NOTE]
+> The application gateway must be in the same virtual network as the scale set, but in a different subnet from the scale set.
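+
+The network interface change can also be made from the command line. As a sketch (resource names are placeholders), the following adds a Flexible scale set VM's NIC IP configuration to an existing Application Gateway backend pool:
+
+```azurecli-interactive
+# add the NIC's ip configuration to the Application Gateway backend pool
+az network nic ip-config address-pool add \
+  --resource-group <resourceGroupName> \
+  --nic-name <nicName> \
+  --ip-config-name <ipConfigName> \
+  --gateway-name <appGWName> \
+  --address-pool <backendPoolName>
+```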
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
Scale set instances can opt in to receive instance termination notifications and set a pre-defined delay timeout for the terminate operation. The termination notification is sent through the Azure Metadata Service [Scheduled Events](../virtual-machines/windows/scheduled-events.md) feature, which provides notifications for, and lets you delay, impactful operations such as reboots and redeploys. The solution adds another event, Terminate, to the list of Scheduled Events; the associated delay of the terminate event depends on the delay limit specified by users in their scale set model configurations.
-Once you're enrolled into the feature, scale set instances don't need to wait for specified timeout to expire before the instance is deleted. After receiving a Terminate notification, the instance can choose to be deleted at any time before the terminate timeout expires. Terminate notifications cannot be enabled on Spot instances. For more information on Spot instances, see [Azure Spot Virtual Machines for Virtual Machine Scale Sets](use-spot.md)
+Once you've enrolled into Scheduled Events by [calling the appropriate Metadata Service endpoint](../virtual-machines/linux/scheduled-events.md#enabling-and-disabling-scheduled-events), scale set instances don't need to wait for the specified timeout to expire before the instance is deleted. After receiving a Terminate notification, the instance can choose to be deleted at any time before the terminate timeout expires. Terminate notifications can't be enabled on Spot instances. For more information on Spot instances, see [Azure Spot Virtual Machines for Virtual Machine Scale Sets](use-spot.md).
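+
+From inside an instance, the Terminate event can be read, and optionally approved early, through the Instance Metadata Service. As a rough sketch (replace `<eventId>` with the `EventId` value returned by the first call):
+
+```bash
+# query Scheduled Events from within the instance; a pending Terminate event
+# appears in the Events array with "EventType": "Terminate"
+curl -s -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
+
+# approve the event so the instance is deleted before the terminate timeout expires
+curl -s -H Metadata:true -X POST \
+  -d '{"StartRequests":[{"EventId":"<eventId>"}]}' \
+  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
+```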
## Enable terminate notifications There are multiple ways of enabling termination notifications on your scale set instances, as detailed in the examples below.
virtual-machines Availability Set Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/availability-set-overview.md
This article provides you with an overview of the availability features of Azure
## What is an availability set?
-An availability set is a logical grouping of VMs that allows Azure to understand how your application is built to provide for redundancy and availability. We recommended that two or more VMs are created within an availability set to provide for a highly available application and to meet the [99.95% Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/). There is no cost for the Availability Set itself, you only pay for each VM instance that you create.
+Availability sets are logical groupings of VMs that reduce the chance of correlated failures bringing down related VMs at the same time. Availability sets place VMs in different fault domains for better reliability, which is especially beneficial in regions that don't support availability zones. Create two or more VMs within an availability set; doing so helps keep your application highly available and meets the [99.95% Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/). There's no extra cost for using availability sets; you only pay for each VM instance you create.
+
+Availability sets offer lower VM-to-VM latency than availability zones, since VMs in an availability set are allocated in closer proximity. Availability sets provide fault isolation for many possible failures, minimizing single points of failure and offering high availability. However, availability sets are still susceptible to certain shared infrastructure failures, like datacenter network failures, which can affect multiple fault domains.
+
+For more reliability than availability sets offer, use [availability zones](availability.md#availability-zones). Availability zones offer the highest reliability because your VMs are spread across multiple datacenters, protecting you from the loss of power, networking, or cooling in any individual datacenter. If your highest priority is the best reliability for your workload, replicate your VMs across multiple availability zones.
## How do availability sets work?
-Each virtual machine in your availability set is assigned an **update domain** and a **fault domain** by the underlying Azure platform. Each availability set can be configured with up to three fault domains and twenty update domains. These configurations can't be changed once the availability set has been created. Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set with five update domains, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.
+Each virtual machine in your availability set is assigned an **update domain** and a **fault domain** by the underlying Azure platform. Each availability set can be configured with up to 3 fault domains and 20 update domains. These configurations can't be changed once the availability set has been created. Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set with five update domains, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.
Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains. While placing your virtual machines into an availability set doesn't protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.
Fault domains define the group of virtual machines that share a common power sou
VMs are also aligned with disk fault domains. This alignment ensures that all the managed disks attached to a VM are within the same fault domains.
-Only VMs with managed disks can be created in a managed availability set. The number of managed disk fault domains varies by region - either two or three managed disk fault domains per region. The following command will retreive a list of fault domains per region:
+Only VMs with managed disks can be created in a managed availability set. The number of managed disk fault domains varies by region - either two or three managed disk fault domains per region. The following command retrieves a list of fault domains per region:
```azurecli-interactive az vm list-skus --resource-type availabilitySets --query '[?name==`Aligned`].{Location:locationInfo[0].location, MaximumFaultDomainCount:capabilities[0].value}' -o Table
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The following JSON shows the schema for the Key Vault VM extension. Before you c
"autoUpgradeMinorVersion": true, "settings": { "secretsManagementSettings": {
- "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: "3600">,
"certificateStoreName": <The certificate store name. Example: "MY">,
- "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,"certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
+ "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,
+ "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
"requireInitialSync": <Require an initial synchronization of the certificates. Example: true>,
"observedCertificates": <A string array of Key Vault URIs that represent the monitored certificates. Example: ["https://myvault.vault.azure.net/secrets/mycertificate"]>
},
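
With the placeholders filled in, a settings block might look like the following (illustrative values only; the vault name and certificate are examples):

```json
"settings": {
  "secretsManagementSettings": {
    "pollingIntervalInS": "3600",
    "certificateStoreName": "MY",
    "linkOnRenewal": true,
    "certificateStoreLocation": "LocalMachine",
    "requireInitialSync": true,
    "observedCertificates": ["https://myvault.vault.azure.net/secrets/mycertificate"]
  }
}
```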
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series.md
description: Specifications for the HBv2-series VMs.
Previously updated : 03/04/2023 Last updated : 07/10/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-HBv2-series VMs are optimized for applications that are driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 340 GB/sec of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
+HBv2-series VMs are optimized for applications that are driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 350 GB/s of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. These VMs support Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is recommended.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
Using this scope with maintenance configurations lets you decide when to apply u
This scope is integrated with [update management center](../update-center/overview.md), which allows you to save recurring deployment schedules to install updates for your Windows Server and Linux machines in Azure, in on-premises environments, and in other cloud environments connected using Azure Arc-enabled servers. Some features and limitations unique to this scope include: - [Patch orchestration](automatic-vm-guest-patching.md#patch-orchestration-modes) for virtual machines needs to be set to AutomaticByPlatform
+- A minimum of 1 hour and 10 minutes is required for the maintenance window.
+ :::image type="content" source="./media/maintenance-configurations/add-schedule-maintenance-window.png" alt-text="Screenshot of the upper maintenance window minimum time specification.":::
+- The maximum maintenance window is 3 hours and 55 minutes.
- A minimum of 1 hour and 30 minutes is required for the maintenance window. - There is no limit to the recurrence of your schedule.
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftContainerRegistry** | Container registry for Microsoft container images. <br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.FirstParty** tag. | Outbound | Yes | Yes | | **MicrosoftDefenderForEndpoint** | Microsoft Defender for Endpoint <br/></br>**Please note this service tag is currently not available and in progress. We will update once it is ready for use.**| Both | No | Yes | | **MicrosoftPurviewPolicyDistribution** | This tag should be used within the outbound security rules for a data source (e.g. Azure SQL MI) configured with private endpoint to retrieve policies from Microsoft Purview | Outbound| No | No |
-| **PowerBI** | Power BI. | Both | No | Yes |
+| **PowerBI** | Power BI platform backend services and API endpoints.<br/><br/>**Note:** This tag doesn't currently include the frontend endpoints (for example, app.powerbi.com).<br/><br/>Provide access to frontend endpoints through the **AzureCloud** tag (Outbound, HTTPS, can be regional). | Both | No | Yes |
| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | Yes | | **PowerPlatformPlex** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Inbound | Yes | Yes | | **PowerQueryOnline** | Power Query Online. | Both | No | Yes |
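
Service tags such as **PowerBI** can be referenced directly in network security group rules. As a sketch (the rule name, priority, and resource names are placeholders):

```azurecli-interactive
# allow outbound HTTPS to the Power BI backend services via its service tag
az network nsg rule create \
  --resource-group <resourceGroupName> \
  --nsg-name <nsgName> \
  --name AllowPowerBIOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes PowerBI
```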
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
The steps in this article require an Azure AD tenant. If you don't have an Azure
## <a name="enable-authentication"></a>Configure authentication for the gateway
-1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+1. Locate the tenant ID of the directory that you want to use for authentication. It's listed in the properties section of the Active Directory page. For help with finding your tenant ID, see [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/how-to-find-tenant.md).
1. If you don't already have a functioning point-to-site environment, follow the instruction to create one. See [Create a point-to-site VPN](vpn-gateway-howto-point-to-site-resource-manager-portal.md) to create and configure a point-to-site VPN gateway.